Top 5 ASCII Games on Linux – Linux Hint

ASCII graphics have long been admired by players, especially those who prefer big pixels and old-school gaming. Even with visually impressive games such as Rise of the Tomb Raider or Forza Horizon 3 around, some classic ASCII games remain popular. This article is for everyone who either already loves ASCII games or would like to try them out for a change.

Want to find out which of these ASCII games are the most addictive? Here is a list of the most addictive ASCII games, which are bound to keep you hooked in front of the computer screen for hours on end:

1. Curse of War

Curse of War rightfully deserves the first place on our list. You might not understand it very well the first few times you play. However, after just a few tries you will get the hang of it, and then you will find out just how addictive it can get.

Instead of controlling units directly, for most of the game you will be building infrastructure, collecting resources, and guiding an army into action. Since it is a strategy game rather than a fighting game, you will be pushing your skills as a planner who gathers and manages resources for an upcoming war. How you manage these resources and how you guide your army to fight is what determines the outcome of the war, and the outcome of the war is naturally what determines your score in the game.

Instead of jumping right into the action, spend a few minutes learning about the game. The rules, tricks, and techniques that will help you play can easily be found on the official Curse of War website, where you will also find instructions to install it on your computer.

It runs as a single-player game by default. If you want to play against another person, one player starts a server and the other connects to it using the following commands:

$ curseofwar -E 2
$ curseofwar -C <server's IP>

Installing Curse of War is very simple. You just need to type in the following command:

$ sudo apt install curseofwar

2. ASCII Sector

Another truly addictive ASCII game is ASCII Sector. This is a free trading/space-combat game with lots of exploration and action. Although the game is influenced by the original it is based on, Wing Commander, it is even more interesting to play. If you have played Wing Commander, you will know that the player is cast as the owner of an old spaceship. You explore space, make alliances, and complete missions to raise your reputation. ASCII Sector is simply a replication in an ASCII environment, which surprisingly makes it even more addictive.

In the game you have an aerial view of space with a small avatar of your spaceship. You start with a basic spaceship, which you upgrade as you complete quests and trade goods. The ASCII version keeps the theme of the original game alive; however, the missions, characters, and capabilities of the spaceship have improved considerably.

Besides completing quests built by others, you can also create fully customized quests of your own. Creating your own quest isn't hard at all with the simple syntax and the built-in compiler. Although the game isn't available in the repositories, you can download it just as easily from the official ASCII Sector website.

3. VMS Empire

In VMS Empire, consider yourself an emperor, with the computer as a rival emperor. You and the computer play by the same set of rules, with the ultimate mission of destroying each other.

It is a classic text-based game that uses characters to represent the world inside the game. The commands to play the game are also sent as characters typed on the keyboard.

The interface is a large rectangle with cities, seas and land. At the start of the game, you and the computer are each given a single city to control. As you expand your empire, your cities will be marked by O while those which belong to the computer will be marked by an X. Your mission is simple: destroy everything owned by the enemy and capture everything else.

You can download the game here. At the same link, you will also find instructions to play it.

4. DoomRL

Have you enjoyed the famous first-person shooter Doom? Although the game is a blast, you might have played it so many times that it no longer feels interesting. Here is a variant for you, based on ASCII characters.

In DoomRL (or Doom, the Roguelike), the player is the sole surviving marine of a squad that was sent in response to a distress call from Phobos. Your mission is to investigate the situation, locate the evil mastermind, and end it.

The game features a simple interface and multiple difficulty levels. You will never be bored, owing to the continuous development of the game and the total of 25 levels you have to complete. The final level is the most exciting one, since you will encounter the Cyberdemon and two other rivals you have to finish off.

Download the game here.

5. Dwarf Fortress

Another highly addictive ASCII game is the single-player game Dwarf Fortress. In the game, you control a team of dwarves on an adventure in a randomly generated world. You have to provide shelter, sustenance, and comfort for each of the dwarves, defend your fortress, and gather wealth. You can even customize the game by adding new plants, weapons, creatures, and objects through the game's text files. You can download the game here.

Conclusion

ASCII might not be your first thought when it comes to Linux computer games, but if you prefer the terminal in general, you might want to try ASCII gaming.

Source

Bash uniq Command – Linux Hint

Linux users need to create or read text files on a regular basis for many purposes. A text file can contain different types of numeric and character data, and the same data can be stored multiple times in a file. Sometimes you may need to read a text file while omitting duplicate lines of data. The Bash uniq command is a useful command-line utility that reads a text file and filters out or removes adjacent duplicate lines.

The uniq command detects adjacent duplicate lines in a file and either writes the content of the file with the duplicate values filtered out, or writes only the duplicate lines to another file.

Syntax:

uniq [OPTION] [ INPUT [OUTPUT] ]

Here, OPTION, INPUT, and OUTPUT are all optional. If you use the uniq command without any option or input/output file name, the command is applied to the standard input. Many options can be used with this command to filter duplicate data from a text file in various ways. If you provide an input file name, the data is filtered from that file. If you run the command with an option, an input file name, and an output file name, the data is filtered from the input file based on the option and the output is written to the output file.

Options:

Some major options of uniq command are discussed below.

  • -f N or --skip-fields=N

It is used to skip the first N fields before checking for uniqueness. Fields are groups of characters separated by whitespace or tabs.

  • -s N or --skip-chars=N

It is used to skip the first N characters before checking for uniqueness.

  • -w N or --check-chars=N

It is used to compare no more than N characters of each line.

  • -c or --count

It is used to count how many times each line is repeated in the input, and the count is shown as a prefix of that line.

  • -z or --zero-terminated

It is used to terminate lines with a NUL byte (0) instead of a newline.

  • -d or --repeated

It is used to print only the repeated lines, one instance per group.

  • -D or --all-repeated[=METHOD]

It is used to print all repeated lines based on the given method. The following methods can be used with this option.

none: The default method; it doesn't delimit groups of duplicate lines.
prepend: It adds a blank line before each set of duplicate lines.
separate: It adds a blank line between sets of duplicate lines.

  • -u or --unique

It is used to print only the unique lines.

  • -i or --ignore-case

It is used for case-insensitive comparison.

Examples of uniq command

Create a text file named uniq_test.txt with the following content:

Bash Programming
Bash Programming
Python Programming
I like PHP Programming
I like Java Programming
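If you want to create the file from the terminal, one way (just a sketch; any text editor works equally well) is a quoted here-document:

$ cat > uniq_test.txt << 'EOF'
Bash Programming
Bash Programming
Python Programming
I like PHP Programming
I like Java Programming
EOF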

Example#1: Using -f option

The following command applies uniq while skipping the first two fields of each line of the uniq_test.txt file.

$ uniq -f 2 uniq_test.txt

Example#2: Using -s option

The following command applies uniq while skipping the first 4 characters of each line of the uniq_test.txt file.

$ uniq -s 4 uniq_test.txt

Example#3: Using -w option

The following command applies uniq, comparing only the first two characters of each line.

$ uniq -w 2 uniq_test.txt

Example#4: Using -c option

The following command counts the occurrences of each line in the file and displays the count at the front of each line of the output.
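Following the pattern of the earlier examples, the command for this would be:

$ uniq -c uniq_test.txt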

Example#5: Using -d option

The following command displays only those lines that appear multiple times in the file. Only one line appears twice in the uniq_test.txt file, and that line is displayed as output.
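The corresponding command, run against the same sample file, would be:

$ uniq -d uniq_test.txt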

Example#6: Using -D option

The following command will print all duplicate lines from the file.
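Again using the same sample file, that would be:

$ uniq -D uniq_test.txt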

Example#7: Using --all-repeated option with prepend method

Three methods can be used with the --all-repeated option, as mentioned earlier in this tutorial. Here, the prepend method is used, which prints the duplicate lines with a blank line added before each set of duplicates.

$ uniq --all-repeated=prepend uniq_test.txt

Example#8: Using -u option

The following command will find all the unique lines in the file. There are three unique lines in the uniq_test.txt file, which are printed as output.
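The matching command, following the same pattern as above, would be:

$ uniq -u uniq_test.txt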

Conclusion

The uses of the uniq command have been explained and demonstrated with various examples in this tutorial. Hopefully, you will be able to use the uniq command properly after reading it.

Source

Ashes of the Singularity: Escalation inches closer to a Linux release with Vulkan

Some fun weekend news for those wanting another RTS to play, as Ashes of the Singularity: Escalation is getting closer to a Linux release.

As a real-time strategy game nut, I've been waiting to play it since I first laid eyes on it. Back in May of 2017, Stardock Entertainment put up a Steam post asking to see requests for a Linux version, which caused some more excitement.

Back in September this year, they mentioned the base game engine was running on Linux, but not the actual game itself. It seems it's moving along, as yesterday they updated that Steam post to say this:

Update: 12/28/2018:

We now have the core engine compiling under Debian Linux and running via Vulkan. We still have a long, long way to go but this is a major step. Thanks for your continued interest and support!

I'm only noticing it now, as it's a post I follow and the developer has now replied to it to mention the update. Their wording isn't too different from what was said in September though, so keep that in mind.

I like their honesty about it: they still have some way to go, but they're still working on it, so that's great stuff.

Source

Red Hat Enterprise Linux ported to Windows 10 as WLinux Enterprise – Software

Red Hat Enterprise Linux is now even more accessible to Windows 10 users with the help of open-source software developer Whitewater Foundry.

Dubbed WLinux Enterprise, the $149.95 per-seat solution is the business version of the $29.95 consumer version of WLinux, which was made available through the Microsoft Store last month.

WLinux was developed to help Windows 10 users run various GNU/Linux distributions inside the OS as Microsoft Store apps, providing access to the likes of Ubuntu, Debian, Fedora, among others.

“WLinux Enterprise unleashes developers and IT staff productivity by giving them access to the Linux command line and development tools they need in today’s cloud, hybrid, and cross-platform environments, including Git, OpenSSH, Node.js, Python, Go, Ruby, AWS and Azure cloud command-line tools, and more, directly on Windows 10, alongside existing Windows applications,” Whitewater Foundry said in a statement.

“WLinux Enterprise accomplishes this in a cost-effective and secure approach by deploying Linux on Windows devices companies already own within Windows networks they already have deployed, reducing the burden of managing a mixed OS environment and eliminating unsecure device usage.”

The company said the software could be deployed to Windows devices through multiple channels, including the Microsoft Store for Business, InTune, DISM, ICD, SCCM and offline sideloading, with or without automatic updates.

Microsoft introduced Windows Subsystem for Linux (WSL) in Windows 10’s Anniversary Update in 2016, providing an internally developed Linux-compatible kernel interface, which can then run Linux distros.

Source

Total Chaos Guide – GamersOnLinux

 


Arriving at Fort Oasis was no vacation. The community of coal miners has mysteriously disappeared and left the facility looking like a wasteland. You receive a transmission on your radio about a sole survivor… Explore the remains of the island, craft weapons and gear to survive, slay evil minions and more.


Follow my step-by-step guide on installing, configuring and optimizing Total Chaos in GZDoom

Note: This guide applies to the ModDB Doom II version of Total Chaos. Other versions may require additional steps.

Tips & Specs:
To learn more about GZDoom, see the online manual: https://zdoom.org/wiki/GZDoom

Mint 19 64-bit

GeForce GTX 1060
Nvidia 396.54
GZDoom 3.6.0

Download GZDoom for your distro

https://zdoom.org/downloads

Save it to your computer


Double-click to install GZDoom

Click Install (Mint)

Go to your Menu and run GZDoom one time

Click OK

Download the Total Chaos Standalone

https://www.moddb.com/mods/total-chaos/downloads/total-chaos-10

Click “Download Now”


Note: There is also a “Retro Edition” for improved performance on lower spec computers.
Open “totalchaos_standalone_1000b.zip” with your Archive Manager

Select all files
Click Extract

Navigate to the GZDoom directory

Full path:

Code:

/home/username/.config/gzdoom
Click Extract

Click “Show the Files”


The gzdoom directory now has the Total Chaos standalone installed

Here is the .pk3 we will use to launch it with GZDoom

Note: I was not able to find a way to put all of the game files in a single folder and launch it with GZDoom
Test GZDoom and make sure Total Chaos launches

Open Terminal (Ctrl+Alt+T)
Type:

Code:

gzdoom -file totalchaos.pk3


Code should start scrolling up and the game should launch after 10 seconds…


Then you will see the title screen in fullscreen


Controls
Total Chaos is using the standard Doom controls

Update the controls to WSAD and jump, crouch, etc.

Grain Effect
The film grain effect might be a bit intense. You can turn it off in the Options, Post Processing menu

Shortcut to Total Chaos (Mint)
Right-click the menu

Click Configure

Click “Open the menu editor”


Select “Games”

Click “New Item”

Name: Total Chaos

Command: gzdoom -file totalchaos.pk3
Click the icon
Navigate to the Total Chaos icon folder
Select the Total Chaos icon
Click OK

Now you can launch Total Chaos from the Menu


Conclusion:
Total Chaos runs pretty well but has a few stutters… They didn't affect gameplay, and the developer is working on optimizing it. In fact, there is a "Retro Edition" that is already optimized for lower-spec computers like laptops and netbooks.

I found the game to be a fun survival exploration action game. I love the levels, interaction, crafting and action mechanics. The developer spent a lot of time dialing in the game so it is a challenge, but it is still possible to have fun. The graphics and ambiance are amazing considering it is built on a modified Doom engine.


Source

Top advice for securing your systems in 2019

Take action to secure your passwords, containers, and more with these top articles from Opensource.com this year.

It's been an interesting year for security and users. It all kicked off at the beginning of the year with Facebook and Cambridge Analytica causing people to suddenly think more seriously about their data and what they share on social media. In fact, the threat against personal data has been an important theme of the year. We've seen breaches at companies such as Marriott (December), British Airways (September), and Under Armour (March). What's interesting about these is that the criminals seem to be targeting all levels of the stack, from the enterprise backend to the web app to the mobile app on people's phones.

And once data is leaked, it will be put to use. There’s been an enormous rise in extortion attempts based on account data allegedly used on “adult sites” and hijacked webcam footage. This brings us inexorably to cryptocurrencies. Besides being the payment method of choice for criminals, cryptocurrency has also suffered this year, with a $13.5 million wallet compromise at Bancor in July. Bitcoin has seen huge peaks and troughs as confidence in the currency has oscillated.

Another story that won’t go away is hardware. Bloomberg Businessweek published a much-disputed story suggesting that a Chinese military agency convinced or forced Supermicro to insert tiny chips on motherboards for companies such as Apple and Amazon. Whether the story is true or not, it has opened people’s minds to the realisation that we have less control over the supply chain than we thought we did. Alongside that was another realisation: chip-related security issues such as Meltdown and Spectre, which were revealed at the very beginning of January, are likely to be joined by a never-ending set of similar or related vulnerabilities that the average user has little capability to mitigate.

With all that said, we’ve had numerous articles on Opensource.com to help you secure your passwords, containers, and more.

Top 6 Opensource.com security articles of 2018

  • Here's how to quickly and easily reset a root password on Fedora, CentOS, and similar Linux distros.
  • We all want our passwords to be safe and secure. To do that, many people turn to password…
  • 42 answers to the big questions about life, the universe, and everything else about Security-…
  • Use these tools to build security testing into the software development process.
  • Even smart admins can make bad decisions.
  • Do you ever feel you have more passwords than you can keep track of? It's probably more than just a…

Source

Setting Up Zabbix Server on Debian 9.0 – Linux Hint

Zabbix is a very popular, easy-to-use, and fast monitoring tool. It supports monitoring Linux, Unix, and Windows environments with agents, SNMP v1/v2c/v3, and agentless remote monitoring. It can also monitor remote environments through a proxy without opening ports for those environments. You can send email, SMS, or IM messages, and run any type of script to automate daily or emergency tasks based on any scenario.

Zabbix 4 is the latest version. The new version supports PHP 7, MySQL 8, encryption between hosts and clients, a new graphical layout, trend analysis and much more. With Zabbix you can use the zabbix_sender and zabbix_get tools to send any type of data to the Zabbix system and trigger an alarm for any value. With these capabilities Zabbix is programmable, and your monitoring is limited only by your creativity and capability.

Installing from the Zabbix repository is the easiest way. In order to set up from the source files you need to set up compilers and make decisions about which directories and features to use for your environment. The Zabbix repository packages provide all features enabled and a ready-to-go environment for your needs.

In our setup we selected Xfce as the desktop environment, but the rest of the installation steps will work perfectly even with a minimal setup, which is the cleanest environment you can find for Debian.

Security First!

Log in as the root user and add the guest user to the sudoers file by simply adding the line

Username ALL=(ALL:ALL) ALL

into the configuration file /etc/sudoers.

You can also edit the file directly with the default text editor (nano in my case).
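One safe way to do that, assuming the standard sudo package, is visudo, which opens /etc/sudoers in the default editor and checks the syntax before saving:

$ sudo visudo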

Install Mysql

Once you have created the guest user and given it sudo privileges, log in as that user and start adding sudo in front of commands that need root access.
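For example, switching from root to the new account with su (Username here stands for whatever account you added to sudoers above):

$ su - Username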

Install Mysql with following command

$ sudo apt-get install mysql-server

Press ‘Y’ in order to download and install.

Right after the installation, add the service to the startup sequence so that when the system reboots your MySQL (MariaDB) server will be up.

$ sudo systemctl enable mariadb

$ sudo systemctl start mariadb

You can test if mysql is up with the following command
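Assuming MariaDB's default unix_socket authentication for the root account on Debian 9, that would be something like:

$ sudo mysql -u root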

You should be able to login to the database server without entering a password.

Type quit to log out of the server

Install Zabbix from Repository

Once the database server installation has finished, we can start installing the Zabbix application.

Download apt repo package to the system

$ sudo wget https://repo.zabbix.com/zabbix/4.0/debian/pool/main/z/zabbix-release/zabbix-release_4.0-2+stretch_all.deb

$ sudo dpkg -i zabbix-release_4.0-2+stretch_all.deb

$ sudo apt update

Let's install the Zabbix server and frontend packages.

$ sudo apt install zabbix-server-mysql zabbix-frontend-php zabbix-agent

Add Zabbix Services to Startup

Once all packages are installed, enable the Zabbix services but don't start them yet. We need to modify the configuration file first.

$ sudo systemctl enable apache2

$ sudo systemctl enable zabbix-server

$ sudo systemctl enable zabbix-agent

Create Database and Deploy Zabbix Database Tables

Now it is time to create the database for Zabbix. Please note that you can create a database with any name and user; all you need to do is replace the appropriate values in the commands provided below.

In our case we will use the following values (all are case sensitive), matching the zabbix_server.conf settings shown later in this guide:
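Database name: zabbix
Database user: zabbix
Database password: VerySecretPassword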

We create the zabbix database and user as the MySQL root user.
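A minimal sketch of those statements, assuming the database name, user, and password above (run from a mysql root shell):

mysql> CREATE DATABASE zabbix CHARACTER SET utf8 COLLATE utf8_bin;
mysql> GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'VerySecretPassword';
mysql> FLUSH PRIVILEGES;
mysql> quit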

After creating the database and user, we create the Zabbix database tables in our new database with the following command:

# zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -p -B zabbix

Enter your database password when prompted.

The process may take about 1-10 minutes depending on the performance of your server.

Configure Zabbix Server

In order to have our Zabbix server start and get ready for business, we must define the database parameters in zabbix_server.conf:

$ sudo nano /etc/zabbix/zabbix_server.conf

DBHost=localhost
DBUser=zabbix
DBPassword=VerySecretPassword
DBName=zabbix

The time zone needs to be entered into the /etc/zabbix/apache.conf file in order not to face any time-related inconsistencies in our environment. This step is also a must for an error-free setup; if this parameter is not set, the Zabbix web interface will warn us every time. In my case the time zone is Europe/Istanbul.

You can get the full list of PHP time zones here.

Please also note that there are php7 and php5 segments in this file. In our setup PHP 7 was installed, so modifying php_value date.timezone in the mod_php7.c segment was enough, but we recommend modifying the php5 segment as well for compatibility.
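With the Europe/Istanbul time zone used in this guide, the line inside the mod_php7.c segment would end up looking roughly like this:

php_value date.timezone Europe/Istanbul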

Save the file.

Now stop and start the services in order to have all changes take effect.

$ sudo systemctl restart apache2 zabbix-server zabbix-agent

Setting up Web Server

Now the database and Zabbix services are up. In order to check what's going on in our systems, we should set up the web interface with MySQL support. This is our last step before going online and starting to check some stats.

Welcome Screen.

Check that everything is OK (shown in green).

Enter the user name and password we defined in the database setup section.

DBHost=localhost
DBUser=zabbix
DBPassword=VerySecretPassword
DBName=zabbix

You can define the Zabbix server name in this step. You might want to call it something like "watch tower" or "monitoring server".

Note: You can change this setting from

/etc/zabbix/web/zabbix.conf.php

You can change the $ZBX_SERVER_NAME parameter in the file.

Verify the settings and press Next Step.

The default username and password are (case sensitive):
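For a stock Zabbix frontend installation the defaults are:

Username: Admin
Password: zabbix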

Now you can check your system stats.

Go to Monitoring -> Latest data

And select Zabbix Server from Host groups and check if stats are coming live.

Conclusion

We set up the database server at the beginning because already-installed packages can conflict with the MySQL version we want to install. You can also download the MySQL server from the mysql.com site.

Later on we continued with the Zabbix binary package installation and created the database and user. The next step was to configure the Zabbix configuration files and install the web interface. In later stages you can install SSL, modify the configuration for a specific web domain, proxy through nginx or run directly from nginx with php-fpm, upgrade PHP, and so on. You may also disable the Zabbix agent in order to save database space. It is all up to you.

Now you can enjoy monitoring with Zabbix. Have a Nice Day.

Source

Btrfs vs OpenZFS – Linux Hint

Btrfs, or the B-tree file system, is the newest competitor to OpenZFS, arguably the most resilient file system out there. Both file systems share some commonalities, such as checksums on data blocks, transaction groups, and a copy-on-write mechanism, which makes them target the same user groups. So what's the difference, and which one should you use?

1. Copy-on-Write (COW) Mechanism

Both the file systems use copy-on-write mechanism. This means that, if you are trying to modify a file, neither of the file systems will try to overwrite the existing data on the disk with the newer data. Instead, the newer data is written elsewhere and once the write operation is complete, the file system simply points to the newer data blocks and the old blocks get recycled over time. This mechanism allows both the file systems to have features like snapshots and cloning.

COW also prevents edge cases like partial writes, which can happen due to a kernel panic or power failure and potentially corrupt your entire file system. With COW in place, a write has either happened or not happened; there's no in between.
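As a small illustration of the snapshot feature both file systems expose (the pool, dataset, and subvolume names here are just placeholders for the example):

$ sudo zfs snapshot tank/data@before-upgrade
$ sudo btrfs subvolume snapshot /mnt/data /mnt/data-before-upgrade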

2. Pooling and RAID

Both file systems aim to eliminate the need for a volume manager, RAID, and other abstractions that sit between the file system and the disks. This is more robust and reliable than a hardware RAID controller, simply because it eliminates a single point of failure: the RAID controller itself.

OpenZFS offers a stable, reliable, and user-friendly RAID mechanism. You can mirror between drives, or use RAIDZ1, which spreads your data across 3 or more disks with one parity block, so it can withstand up to 1 disk failure per vdev. Similarly, RAIDZ2 uses 4 or more disks and can withstand up to 2 disks failing, and RAIDZ3 takes this one step further.
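For instance, creating a mirror or a RAIDZ1 vdev looks roughly like this (the pool name and device paths are placeholders, not a recommendation for any particular layout):

# a two-disk mirror
$ sudo zpool create tank mirror /dev/sdb /dev/sdc
# or a RAIDZ1 vdev across three disks, surviving one disk failure
$ sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd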

Btrfs has these features implemented too; the difference is simply that it calls them RAID instead of RAIDZ and so on. Some of the more complicated RAID setups, like RAID56, are buggy and not fit for use at the time of this writing.
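The Btrfs equivalent of a two-disk mirror, again sketched with placeholder device names, would be something like:

$ sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc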

3. Licensing

One of the reasons OpenZFS came so late to the GNU/Linux ecosystem is its license incompatibility with the GNU GPL. Without getting into too much detail, Btrfs is under the GPL, which allows users to take the source code and modify it, but the modifications must also be published under the GPL and stay open source.

OpenZFS on the other hand, is licensed under CDDL which is much more permissive and allows users to modify and distribute code with a greater degree of freedom.

4. Communities and Companies Behind Them

OpenZFS has a massive community behind it. The FreeBSD community, the Illumos community, and many other open source projects rely on OpenZFS and thus contribute back to the file system. It has grown severalfold in terms of code base, user base, features, and flexibility since its inception. Companies like Delphix, iXsystems, and Joyent rely on it and have their developers work on it because it is a core component of their business. Many more organizations might be using OpenZFS without our knowledge; thanks to the CDDL license, they don't have to come forth and say outright that they use it.

Btrfs had Red Hat as one of the main stewards of its community. However, that received a major blow a while back when Red Hat deprecated the file system, which means you won't be seeing it in any future RHEL release and the company won't provide commercial support for it out of the box. SUSE, however, has gone so far as to make it their default, and there is still a thriving community behind the file system, with contributions from Facebook, Intel, and other 800-pound gorillas of Silicon Valley.

5. Reliability

ZFS was designed to be reliable right from the beginning. People have zpools dating back to the early 2000s that are still usable and guaranteed not to return erroneous data silently. Yes, there have been a few snafus with files disappearing for OpenZFS on Linux, but given its long history, the track record has been surprisingly clean.

Btrfs, on the other hand, has had issues right from the beginning, from buggy interfaces to straight-up data loss and file corruption. Even now, it is a bit of a laughing stock in the community. Make of that what you will.

6. Supported OSes

Btrfs had its origin as a file system for Linux, while ZFS was conceived inside Sun for the Solaris OS. However, OpenZFS has long since been ported to FreeBSD, Apple's OS X, and open source derivatives of Solaris. Its support for Linux came a little later than one would have predicted, but it is here, and corporations rely on it. A project for making it run on Microsoft Windows is also making quite a bit of progress, although it is not quite there yet.

Conclusion: A Note on Monocultures

All of this talk may convince you to use OpenZFS to keep your data safe, and that is not a bad course of action. It is objectively better than Btrfs in terms of features, reliability, community and much more. However, in the long run this might not be good for the open source community, in general.

In a post titled similarly to this one, the author talks about the dangers of monocultures. I encourage you to go through that post. The gist of it is this: options are important. One of the greatest strengths of open source software (and software in general) is that we have multiple options to adopt. There's Apache and then there's Nginx, there are the BSDs and Linux, there is OpenSSL and there is LibreSSL.

If there is a fatal flaw in any of these key technologies, the world will not stop spinning. But with the prevalence of OpenZFS, the storage technology has turned into something of a monoculture. So, I would very much like for the developers and system programmers who are reading this, to adopt not OpenZFS but projects like Btrfs and HAMMER.

Source

GCC 9.0 Compiler Benchmarks Against GCC7/GCC8 At The End Of 2018

In early 2019 we will see the first stable release of GCC 9 as the annual update to the GNU Compiler Collection that is bringing the D language front-end, more C2X and C++ additions, various microarchitecture optimizations from better Znver1 support to Icelake, and a range of other additions we’ll provide a convenient recap of shortly. But for those wondering how the GCC 9 performance is looking, here are some fresh benchmarks when benchmarking the latest daily GCC 9.0 compiler against GCC 7.4 and GCC 8.2 atop Clear Linux using an Intel Core i9 7980XE Skylake-X system.

 

 

Similar to the few other tests we've done at different times throughout the years and on different hardware, this article is a last look, as we close out 2018, at how GCC 9 performance is shaping up on Intel x86_64 compared to the past two major releases. When the formal GCC 9.1.0 compiler release nears its debut around the end of Q1-2019, I'll be back with plenty more compiler benchmarks on different CPUs. Of course, there will also be benchmarks of the upcoming LLVM Clang 8.0 release that should be out roughly around the same time as GCC 9 stable.

 

All of this testing was done when building GCC 7.4 / 8.2 / 9.0 from source on Clear Linux, with the compiler releases configured using "--disable-multilib --enable-checking=release" and keeping the CFLAGS/CXXFLAGS the same throughout building all of the open-source benchmarks used for evaluating the performance of the resulting binaries. Going back further than GCC 7 was not possible on this system due to Glibc issues. The Phoronix Test Suite was used for automating this process, as always.
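As a rough sketch of that kind of out-of-tree source build (the source directory name and install prefix are illustrative, not the exact setup used for this article, and the usual GCC prerequisites are assumed to be installed):

$ mkdir build && cd build
$ ../gcc-9.0/configure --disable-multilib --enable-checking=release
$ make -j $(nproc)
$ sudo make install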

 

Source

How to do a Port Scan in Linux – Linux Hint

Port scanning is the process of checking for open ports on a PC or a server. Port scanners are often used by gamers and hackers to check for open ports and to fingerprint services. There are two types of ports to scan for in the TCP/IP Internet Protocol suite: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). Both TCP and UDP have their own way of being scanned. In this article, we'll look at how to do a port scan in a Linux environment, but first we'll take a look at how port scanning works. Note that port scanning is illegal in many countries, so make sure to check for permission before scanning your target.

TCP Scanning

TCP is a stateful protocol because it maintains the state of connections. A TCP connection involves a three-way handshake between the server socket and the client socket. While the server socket is listening, the client sends a SYN, the server responds with a SYN-ACK, and the client then sends an ACK to complete the handshake.

To scan for an open TCP port, a scanner sends a SYN packet to the server. If a SYN-ACK is sent back, then the port is open. If the server doesn't complete the handshake and responds with an RST, then the port is closed.

UDP Scanning

UDP on the other hand, is a stateless protocol and doesn’t maintain the state of connection. It also doesn’t involve three-way handshake.

To scan a UDP port, a UDP scanner sends a UDP packet to the port. If that port is closed, an ICMP packet is generated and sent back to the origin. If this doesn't happen, it means the port is open.

UDP port scanning is often unreliable because ICMP packets are dropped by firewalls, generating false positives for port scanners.

Port Scanners

Now that we’ve looked at how port scanning works, we can move forward to different port scanners and their functionality.

Nmap

Nmap is the most versatile and comprehensive port scanner available to date. It can do everything from port scanning to fingerprinting operating systems and vulnerability scanning. Nmap has both a CLI and a GUI; the GUI is called Zenmap. It has a lot of options for quick and effective scans. Here's how to install Nmap on Linux.

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install nmap -y

Now we'll use Nmap to scan a server (hackme.org) for open ports and to list the services available on those ports; it's really easy. Just type nmap and the server address.
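For example, a default scan of that host looks like:

$ nmap hackme.org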

To scan for UDP ports, include the -sU option with sudo, because it requires root privileges.
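For instance, a UDP scan of the same host:

$ sudo nmap -sU hackme.org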

There are a lot of other options available in Nmap such as:

-p- : Scan all 65535 ports
-sT : TCP connect scan
-O : Detect the operating system the target is running
-v : Verbose scan
-A : Aggressive scan, scans for everything
-T[1-5] : Set the scanning speed (timing template)
-Pn : Skip the ping check, in case the server blocks ping
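Putting a few of these together, a fairly aggressive scan of the same host might look like this (just an illustration of combining options, not a recommendation for any particular target):

$ sudo nmap -sT -p- -A -T4 -Pn hackme.org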

Zenmap

Zenmap is a GUI for Nmap, made for click-kiddies, so that you won't have to remember its commands. To install it, type

sudo apt-get install -y zenmap

To scan a server, just type its address and select from available scan options.

Netcat

Netcat is a raw TCP and UDP port writer that can also be used as a port scanner. It uses a connect scan, which is why it is not as fast as Network Mapper. To install it, type

ubuntu@ubuntu:~$ sudo apt install netcat-traditional -y

To check for an open port, write

ubuntu@ubuntu:~$ nc -z -v hackme.org 80
…snip…
hackme.org [217.78.1.155] 80 (http) open

To scan for a range of ports, type

ubuntu@ubuntu:~$ nc -z -nv 127.0.0.1 20-80
(UNKNOWN) [127.0.0.1] 80 (http) open
(UNKNOWN) [127.0.0.1] 22 (ssh) open

Unicornscan

Unicornscan is a comprehensive and fast port scanner, built for vulnerability researchers. Unlike Network Mapper, it uses its own user-land distributed TCP/IP stack. It has a lot of features that Nmap doesn't, some of which are listed below:

  • Asynchronous stateless TCP scanning with all variations of TCP Flags.
  • Asynchronous stateless TCP banner grabbing
  • Asynchronous protocol specific UDP Scanning (sending enough of a signature to elicit a response).
  • Active and Passive remote OS, application, and component identification by analyzing responses.
  • PCAP file logging and filtering
  • Relational database output
  • Custom module support
  • Customized data-set views

To install Unicornscan, type

ubuntu@ubuntu:~$ sudo apt-get install unicornscan -y

To run a scan, write

ubuntu@ubuntu:~$ sudo us 127.0.0.1
TCP open ftp[ 21] from 127.0.0.1 ttl 128
TCP open smtp[ 25] from 127.0.0.1 ttl 128
TCP open http[ 80] from 127.0.0.1 ttl 128
…snip…

Conclusion

Port scanners come in handy whether you are a DevOps engineer, a gamer, or a hacker. There is no real comparison between these scanners; none of them is perfect, and each has its benefits and drawbacks. It depends entirely on your requirements and how you use them.

Source
