How to Install Latest Nvidia Drivers on Ubuntu

With the recent advancements in Linux desktop distributions, gaming on Linux is coming to life. Linux users are beginning to enjoy gaming like Windows or macOS users, with amazing performance.

Nvidia makes top-rated gaming graphics cards. However, for a long time, updating Nvidia drivers on Linux desktops was not so easy. Luckily, the Proprietary GPU Drivers PPA now packages updated nvidia-graphics-drivers for Ubuntu, ready for installation.

Although this PPA is currently in testing, it lets you get fresh drivers straight from upstream. If you are using an Nvidia graphics card, this article will show you how to install the latest Nvidia drivers on Ubuntu and its derivatives such as Linux Mint.

How to Install Nvidia Drivers in Ubuntu

First start by adding the Proprietary GPU Drivers PPA to your system package sources and updating your system package cache using the apt command.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt update

Then install the latest stable Nvidia graphics driver (which is nvidia-387 at the time of writing this article) using the following command.

$ sudo apt install nvidia-387

Alternatively, open Software & Updates under System Settings, go to the Additional Drivers tab, select the required driver version and click “Apply Changes”.

Next, reboot your computer for the new driver to start working. Then check your installation status with the lsmod command, which lists all currently loaded kernel modules in Linux, filtered for nvidia using the grep command.

$ lsmod | grep nvidia 
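
If the driver modules are listed, you can additionally query the driver version and GPU state with the nvidia-smi utility, which ships with the Nvidia driver packages (assuming your installation included it).

$ nvidia-smi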

Sometimes updates do not work as expected. If you face any issues with the latest driver installation, such as a black screen on startup, you can remove it as follows.

$ sudo apt-get purge nvidia*

If you want to completely remove the graphics-drivers PPA as well, run the following command.

$ sudo apt-add-repository --remove ppa:graphics-drivers/ppa

You might also like to read the following related articles on gaming.

  1. 5 Best Linux Gaming Distributions That You Should Give a Try
  2. 12 Amazing Terminal Based Games for Linux Enthusiasts

That’s all! You can ask questions or share any useful additional information via the feedback form below.

How to Open, Extract and Create RAR Files in Linux

RAR is one of the most popular tools for creating and extracting compressed archive (.rar) files. When we download an archive file from the web, we need a rar tool to extract it.

RAR is freely available on Windows operating systems to handle compressed files, but unfortunately, the rar tool doesn’t come pre-installed on Linux systems.

This article explains how to install the unrar and rar command-line tools using the official binary tar file on Linux systems, to open, extract, uncompress or unrar an archive file.

Step 1: How to Install Unrar in Linux

On Debian and Ubuntu based distributions, you can easily install the unrar package using the apt-get or apt program as shown.

$ sudo apt-get install unrar
Or
$ sudo apt install unrar

If you are using a Fedora distribution, you can use the dnf command to install it.

$ sudo dnf install unrar

If you are using a CentOS / RHEL distribution, you need to download the latest unrar/rar file and install it using the following commands.

--------------- On 64-bit --------------- 
# cd /tmp
# wget https://www.rarlab.com/rar/rarlinux-x64-5.6.0.tar.gz
# tar -zxvf rarlinux-x64-5.6.0.tar.gz
# cd rar
# cp -v rar unrar /usr/local/bin/

--------------- On 32-bit --------------- 
# cd /tmp
# wget https://www.rarlab.com/rar/rarlinux-5.6.0.tar.gz
# tar -zxvf rarlinux-5.6.0.tar.gz
# cd rar
# cp -v rar unrar /usr/local/bin/

Step 2: How to Open/Extract a RAR File in Linux

To open/extract a RAR file in the current working directory, just use the following command with the unrar e option.

# unrar e tecmint.rar

UNRAR 4.20 beta 3 freeware      Copyright (c) 1993-2012 Alexander Roshal

Extracting from tecmint.rar

Extracting  index.php                                                 OK
Extracting  index.html                                                OK
Extracting  xyz.txt                                                   OK
Extracting  abc.txt                                                   OK
All OK

To open/extract a RAR file into a specific path or destination directory, just pass the destination directory to the unrar e option; it will extract all the files into the specified destination directory.

# unrar e tecmint.rar /home/

UNRAR 4.20 beta 3 freeware      Copyright (c) 1993-2012 Alexander Roshal

Extracting from tecmint.rar

Extracting  /home/index.php                                           OK
Extracting  /home/index.html                                          OK
Extracting  /home/xyz.txt                                             OK
Extracting  /home/abc.txt                                             OK
All OK

To open/extract a RAR file with its original directory structure, just issue the command below with the unrar x option. It will extract files according to their folder structure, as in the output of the command below.

# unrar x tecmint.rar

UNRAR 4.20 beta 3 freeware      Copyright (c) 1993-2012 Alexander Roshal

Extracting from tecmint.rar

Creating    tecmint                                                   OK
Extracting  tecmint/index.php                                         OK
Extracting  tecmint/index.html                                        OK
Extracting  tecmint/xyz.txt                                           OK
Extracting  tecmint/abc.txt                                           OK
Creating    default                                                   OK
Extracting  default/index.php                                         OK
Extracting  default/index.html                                        OK
Creating    include                                                   OK
Extracting  include/abc.txt                                           OK
Creating    php                                                       OK
Extracting  php/xyz.txt                                               OK
All OK

Step 3: How to List a RAR File in Linux

To list the files inside an archive file, use the unrar l option. It will display the list of files with their sizes, date, time and permissions.

unrar l tecmint.rar

UNRAR 4.20 beta 3 freeware      Copyright (c) 1993-2012 Alexander Roshal

Archive tecmint.rar

 Name             Size   Packed Ratio  Date   Time     Attr      CRC   Meth Ver
-------------------------------------------------------------------------------
 index.php           0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 index.html          0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 xyz.txt             0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 abc.txt             0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 index.php           0        8   0% 18-08-12 19:22 -rw-r--r-- 00000000 m3b 2.9
 index.html          0        8   0% 18-08-12 19:22 -rw-r--r-- 00000000 m3b 2.9
 abc.txt             0        8   0% 18-08-12 19:22 -rw-r--r-- 00000000 m3b 2.9
 xyz.txt             0        8   0% 18-08-12 19:22 -rw-r--r-- 00000000 m3b 2.9
-------------------------------------------------------------------------------
    8                0       64   0%

Step 4: How to Test a RAR File in Linux

To test the integrity of an archive file, use the unrar t option. The command below will perform a complete integrity check for each file and display its status.

unrar t tecmint.rar

UNRAR 4.20 beta 3 freeware      Copyright (c) 1993-2012 Alexander Roshal

Testing archive tecmint.rar

Testing     tecmint/index.php                                         OK
Testing     tecmint/index.html                                        OK
Testing     tecmint/xyz.txt                                           OK
Testing     tecmint/abc.txt                                           OK
Testing     default/index.php                                         OK
Testing     default/index.html                                        OK
Testing     include/abc.txt                                           OK
Testing     php/xyz.txt                                               OK
All OK

The unrar command is used to extract, list or test archive files only. It has no option for creating RAR files under Linux. So, we need to install the rar command-line utility to create archive files.

Step 5: How to Install Rar in Linux

To install the rar command in Linux, execute the following command for your distribution.

--------------- On Debian/Ubuntu ---------------
$ sudo apt-get install rar

--------------- On Fedora ---------------
$ sudo dnf install rar

--------------- On CentOS/RHEL ---------------
# yum install rar
Sample Output
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Dependencies Resolved
=========================================================================================
 Package			Arch			Version				Repository			Size
=========================================================================================
Installing:
 rar				i386            3.8.0-1.el5.rf      rpmforge			264 k

Transaction Summary
=========================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)

Total download size: 264 k
Is this ok [y/N]: y
Downloading Packages:
rar-3.8.0-1.el5.rf.i386.rpm										| 264 kB     00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : rar                                          1/1

Installed:
  rar.i386 0:3.8.0-1.el5.rf

Complete!

Step 6: How to Create Rar File in Linux

To create an archive (RAR) file in Linux, run the following command with the rar a option. It will create an archive file for the tecmint directory.

rar a tecmint.rar tecmint

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Evaluation copy. Please register.

Creating archive tecmint.rar

Adding    tecmint/index.php                                           OK
Adding    tecmint/index.html                                          OK
Adding    tecmint/xyz.txt                                             OK
Adding    tecmint/abc.txt                                             OK
Adding    tecmint                                                     OK
Done

Step 7: How to Delete files from Archive

To delete a file from an archive, run the command below, passing the archive name followed by the file you want to remove.

rar d filename.rar filename_to_delete

Step 8: How to Recover Archives

To recover or fix an archive file (or files), run the command below with the rar r option.

rar r filename.rar

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Building fixed.tecmint.rar
Scanning...
Data recovery record not found
Reconstructing tecmint.rar
Building rebuilt.tecmint.rar
Found  tecmint\index.php
Found  tecmint\index.html
Found  tecmint\xyz.txt
Found  tecmint\abc.txt
Found  tecmint
Done

Step 9: How to Update Archives

To update or add files to an existing archive file, use the following command with the rar u option.

rar u tecmint.rar tecmint.sql

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Evaluation copy. Please register.

Updating archive tecmint.rar

Adding    tecmint.sql                                                 OK
Done

Now, verify that the file tecmint.sql was added to the archive file.

rar l tecmint.rar

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Archive tecmint.rar

 Name             Size   Packed Ratio  Date   Time     Attr      CRC   Meth Ver
-------------------------------------------------------------------------------
 index.php           0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 index.html          0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 xyz.txt             0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 abc.txt             0        8   0% 18-08-12 19:11 -rw-r--r-- 00000000 m3b 2.9
 tecmint             0        0   0% 18-08-12 19:23 drwxr-xr-x 00000000 m0  2.0
 tecmint.sql         0        8   0% 18-08-12 19:46 -rw-r--r-- 00000000 m3b 2.9
-------------------------------------------------------------------------------
    6                0       40   0%

Step 10: How to Set Password to Archives

This is a very interesting feature of the rar tool: it allows us to set a password on an archive file. To password-protect an archive file, use the rar a -p option.

rar a -p tecmint.rar

Enter password (will not be echoed):

Reenter password:

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Evaluation copy. Please register.

Updating archive tecmint.rar

Updating  tecmint.sql                                                 OK
Done

Now verify it by extracting the archive file and see whether it prompts us to enter the password that we set above.

rar x tecmint.rar

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Extracting from tecmint.rar

Creating    tecmint                                                   OK
Extracting  tecmint/index.php                                         OK
Extracting  tecmint/index.html                                        OK
Extracting  tecmint/xyz.txt                                           OK
Extracting  tecmint/abc.txt                                           OK
Enter password (will not be echoed) for tecmint.sql:

Extracting  tecmint.sql                                               OK
All OK
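
Note that rar a -p encrypts only the file data; the file names are still visible when listing the archive. If you also want the file names hidden, rar offers the -hp option, which encrypts the archive headers as well (shown here as an illustrative sketch with a hypothetical archive name):

rar a -hp secure_tecmint.rar tecmint/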

Step 11: How to Lock Archives

Another interesting feature of the rar tool is locking: it provides an option to lock a particular archive file so that it cannot be modified any further.

rar k tecmint.rar

RAR 3.80   Copyright (c) 1993-2008 Alexander Roshal   16 Sep 2008
Shareware version         Type RAR -? for help

Processing archive tecmint.rar
Locking archive
Done

Conclusion

For more rar and unrar options and usage, run the following commands; they will display a list of options with their descriptions.

# man unrar
# man rar

We have presented almost all of the options for the rar and unrar commands above, with examples. If you feel that we’ve missed anything and would like us to add it, please let us know using the comment form below.

How to Change Runlevels (targets) in SystemD

Systemd is a modern init system for Linux: a system and service manager which is compatible with the popular SysV init system and LSB init scripts. It was intended to overcome the shortcomings of SysV init as explained in the following article.

  1. The Story Behind ‘init’ and ‘systemd’: Why ‘init’ Needed to be Replaced with ‘systemd’ in Linux

On Unix-like systems such as Linux, the current operating state of the operating system is known as a runlevel; it defines what system services are running. Under popular init systems like SysV init, runlevels are identified by numbers. However, in systemd runlevels are referred to as targets.

Suggested Read: Managing System Startup Process and Services (SysVinit, Systemd and Upstart)

In this article, we will explain how to change runlevels (targets) with systemd. Before we move any further, let’s briefly look at the relationship between runlevel numbers and targets.

  • Run level 0 is matched by poweroff.target (and runlevel0.target is a symbolic link to poweroff.target).
  • Run level 1 is matched by rescue.target (and runlevel1.target is a symbolic link to rescue.target).
  • Run level 3 is emulated by multi-user.target (and runlevel3.target is a symbolic link to multi-user.target).
  • Run level 5 is emulated by graphical.target (and runlevel5.target is a symbolic link to graphical.target).
  • Run level 6 is emulated by reboot.target (and runlevel6.target is a symbolic link to reboot.target).
  • Emergency is matched by emergency.target.
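
On a running system, you can cross-check this mapping by listing the target units systemd currently has loaded, along with their state:

# systemctl list-units --type=target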

How to View Current target (run level) in Systemd

When the system boots, systemd activates the default.target unit by default. Its main job is to activate services and other units by pulling them in via dependencies.

To view the default target, type the command below.

# systemctl get-default 

graphical.target

To set the default target, run the command below.

# systemctl set-default multi-user.target  

How to Change the target (runlevel) in Systemd

While the system is running, you can switch the target (run level), meaning that only the services and units defined under that target will now run on the system.

To switch to runlevel 3, run the following command.

# systemctl isolate multi-user.target 

To change the system to runlevel 5, type the command below.

# systemctl isolate graphical.target

For more information about systemd, read through these useful articles:

  1. How to Manage ‘Systemd’ Services and Units Using ‘Systemctl’ in Linux
  2. How to Create and Run New Service Units in Systemd Using Shell Script
  3. Managing System Startup Process and Services (SysVinit, Systemd and Upstart)
  4. Manage Log Messages Under Systemd Using Journalctl [Comprehensive Guide]

In this guide, we showed how to change runlevels (targets) with systemd. Use the comment form below to send us any questions or thoughts concerning this article.

Tuned – Automatic Performance Tuning of CentOS/RHEL Servers

To maximize the end-to-end performance of services, applications and databases on a server, system administrators usually carry out custom performance tuning, using various tools, both generic operating system tools as well as third-party tools. One of the most useful performance tuning tools on CentOS/RHEL/Fedora Linux is Tuned.

Read Also: 20 Command Line Tools to Monitor Linux Performance

Tuned is a powerful daemon for dynamically auto-tuning Linux server performance based on information it gathers from monitoring use of system components, to squeeze maximum performance out of a server.

It does this by tuning system settings dynamically on the fly depending on system activity, using tuning profiles. Tuning profiles include sysctl configs, disk-elevators configs, transparent hugepages, power management options and your custom scripts.

By default tuned will not dynamically adjust system settings, but you can modify how the tuned daemon operates and allow it to dynamically alter settings based on system usage. You can use the tuned-adm command-line tool to manage the daemon once it is running.
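
For example, dynamic tuning is toggled in the daemon’s main configuration file; the snippet below is a minimal sketch, assuming the dynamic_tuning key used by recent tuned releases:

# /etc/tuned/tuned-main.conf
# 1 = let tuned adapt settings on the fly based on monitoring data, 0 = static profiles only
dynamic_tuning = 1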

How to Install Tuned on CentOS/RHEL & Fedora

On CentOS/RHEL 7 and Fedora, tuned comes pre-installed and activated by default, but on older CentOS/RHEL 6.x versions, you need to install it using the following yum command.

# yum install tuned

After the installation, you will find the following important tuned configuration files.

  • /etc/tuned – tuned configuration directory.
  • /etc/tuned/tuned-main.conf – the main tuned configuration file.
  • /usr/lib/tuned/ – stores a sub-directory for all tuning profiles.

Now you can start or manage the tuned service using the following commands.

--------------- On RHEL/CentOS 7 --------------- 
# systemctl start tuned	        
# systemctl enable tuned	
# systemctl status tuned	
# systemctl stop tuned		

--------------- On RHEL/CentOS 6 ---------------
# service tuned start
# chkconfig tuned on
# service tuned status
# service tuned stop

Now you can control tuned using the tuned-adm tool. There are a number of predefined tuning profiles already included for some common use cases. You can check the currently active profile with the following command.

# tuned-adm active

From the output of the above command, the test system (which is a Linode VPS) is optimized for running as a virtual guest.

Check Current Tuned Profile

You can get a list of available tuning profiles using the following command.

# tuned-adm list

List Available Tuned Profiles

To switch to any of the available profiles, for example throughput-performance – a profile which results in excellent performance across a variety of common server workloads – run:

# tuned-adm  profile throughput-performance
# tuned-adm active

Switch to Tuning Profile

To see the profile that tuned recommends for your system (without applying it), run the following command.

# tuned-adm recommend
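
Since the command only prints a profile name, you can feed its output straight back into tuned-adm to apply the recommendation (assuming, as on recent versions, that it prints a bare profile name):

# tuned-adm profile $(tuned-adm recommend)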

And you can disable all tuning as shown.

 
# tuned-adm off

How To Create Custom Tuning Profiles

You can also create new profiles. We will create a new profile called test-performance, which will use settings from an existing profile called latency-performance.

Switch into the directory which stores the sub-directories for all tuning profiles, and create a new sub-directory called test-performance there for your custom tuning profile.

# cd /usr/lib/tuned/
# mkdir test-performance

Then create a tuned.conf configuration file in the directory.

# vim test-performance/tuned.conf

Copy and paste the following configuration in the file.

[main]
include=latency-performance
summary=Test profile that uses settings for latency-performance tuning profile

Save the file and close it.
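
Profiles that include another profile can also override individual settings on top of it. For instance, a hypothetical variant of this profile that additionally adjusts a sysctl value could look like the sketch below (the [sysctl] section is a standard tuned plugin; the key chosen here is only an example):

[main]
include=latency-performance
summary=Test profile based on latency-performance with a sysctl override

[sysctl]
vm.swappiness=10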

If you run the tuned-adm list command again, the new tuning profile should appear in the list of available profiles.

# tuned-adm list

Check New Tuned Profile

To activate the new tuned profile, issue the following command.

# tuned-adm  profile test-performance

For more information and further tinkering options, see the tuned and tuned-adm man pages.

# man tuned
# man tuned-adm

Tuned GitHub repository: https://github.com/fcelda/tuned

That’s all for now! Tuned is a daemon that monitors usage of system components and dynamically auto-tunes a Linux server for maximum performance. If you have any questions or thoughts to share, use the feedback form below to reach us.

Glances – An Advanced Real Time System Monitoring Tool for Linux

Earlier, we’ve written about many Linux system monitoring tools that can be used to monitor the performance of Linux systems, but we think that most users prefer the default one that comes with every Linux distribution: the top command.

The top command is a real-time task manager in Linux and the most frequently used system monitoring tool in GNU/Linux distributions for finding performance-related bottlenecks in the system, which helps us take corrective actions. It has a nice minimalist interface and comes with a few reasonable options that enable us to get a better idea about overall system performance quickly.

However, it is sometimes very tricky to find an application/process that is consuming lots of system resources under top, because the top command is not able to highlight programs that are eating too much CPU, RAM or other resources.

To take such an approach, here we bring you a powerful system monitoring program called “Glances” that automatically highlights the programs utilizing the most system resources and provides a maximum of information about your Linux/Unix server.

What is Glances?

Glances is a cross-platform, command-line, curses-based system monitoring tool written in the Python language which uses the psutil library to grab information from the system. With Glances, we can monitor CPU, Load Average, Memory, Network Interfaces, Disk I/O, Processes and File System space utilization.

Glances is a free tool, licensed under the GPL, to monitor GNU/Linux and FreeBSD operating systems. There are lots of interesting options available in Glances as well. One of the main features we have seen in Glances is that we can set thresholds (careful, warning and critical) in the configuration file, and information will then be shown in colors which indicate the bottleneck in the system.

Glances Features

  1. CPU information (user related applications, system core programs and idle programs).
  2. Total memory information including RAM, Swap, free memory etc.
  3. The average CPU load for the past 1 min, 5 mins and 15 mins.
  4. Network download/upload rates of network connections.
  5. Total number of processes, active ones, sleeping processes etc.
  6. Disk I/O related (read or write) speed details.
  7. Disk usage of currently mounted devices.
  8. Top processes with their CPU/Memory usage, names and locations of applications.
  9. Shows the current date and time at the bottom.
  10. Highlights processes in red that consume the highest system resources.

Here is an example screen grab of Glances.

Glances View

Installation of Glances in Linux/Unix Systems

Although it’s a very young utility, you can install “Glances” on Red Hat based systems by enabling the EPEL repository and then running the following command on the terminal.

--------------- On RHEL/CentOS/Fedora ---------------
# yum install -y glances

--------------- On Debian/Ubuntu/Linux Mint ---------------
$ sudo apt-add-repository ppa:arnaud-hartmann/glances-stable
$ sudo apt-get update
$ sudo apt-get install glances

Usage of Glances

To start Glances, simply run the command below on the terminal.

# glances

Glances Preview – Ubuntu 13.10

Press ‘q‘ (‘ESC‘ or ‘Ctrl+C‘ also works) to quit the Glances terminal. Here is another screen grab taken from a CentOS 6.5 system.

Glances Preview – CentOS 6.5

By default, the interval time is set to ‘1‘ second, but you can define a custom interval time while running glances from the terminal.

# glances -t 2

Glances Color Codes

Meaning of the Glances color codes:

  1. GREEN: OK (everything is fine)
  2. BLUE: CAREFUL (need attention)
  3. VIOLET: WARNING (alert)
  4. RED: CRITICAL (critical)

We can set thresholds in the configuration file. By default the thresholds are set to careful=50, warning=70 and critical=90, and we can customize them as per our needs. The default configuration file is located at ‘/etc/glances/glances.conf’.
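
As an illustration, a threshold stanza in glances.conf looks roughly like the sketch below; the exact section and key names vary between Glances versions, so treat this as a template rather than exact syntax:

[cpu]
careful=50
warning=70
critical=90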

Glances Options

Besides several command line options, glances provides many more hot keys to control the output while it is running. Below is a list of several hot keys.

  1. a – Sort processes automatically
  2. c – Sort processes by CPU%
  3. m – Sort processes by MEM%
  4. p – Sort processes by name
  5. i – Sort processes by I/O rate
  6. d – Show/hide disk I/O stats
  7. f – Show/hide file system stats
  8. n – Show/hide network stats
  9. s – Show/hide sensors stats
  10. y – Show/hide hddtemp stats
  11. l – Show/hide logs
  12. b – Bytes or bits for network I/O
  13. w – Delete warning logs
  14. x – Delete warning and critical logs
  15. 1 – Global CPU or per-CPU stats
  16. h – Show/hide this help screen
  17. t – View network I/O as combination
  18. u – View cumulative network I/O
  19. q – Quit (Esc and Ctrl-C also work)

Use Glances on Remote Systems

With Glances, you can even monitor remote systems. To use ‘glances‘ on remote systems, run the ‘glances -s‘ (-s enables server/client mode) command on the server.

# glances -s

Define the password for the Glances server
Password: 
Password (confirm): 
Glances server is running on 0.0.0.0:61209

Note: Once you issue the ‘glances -s‘ command, it will prompt you to define a password for the Glances server. Define the password and hit enter; you will see glances running on port 61209.

Now, go to the remote client machine and execute the following command to connect to the Glances server, specifying its IP address or hostname as shown below. Here ‘172.16.27.56‘ is my glances server IP address.

# glances -c 172.16.27.56

Below are a few notable points that users must know while using glances in server/client mode.

* In server mode, you can set the bind address with -B ADDRESS and the listening TCP port with -p PORT.
* In client mode, you can set the TCP port of the server with -p PORT.
* The default binding address is 0.0.0.0, which means glances listens on all network interfaces, at port 61209.
* In server/client mode, limits are set by the server side.
* You can also define a password to access the server with -P password.
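
Putting these options together, an illustrative server/client pair (addresses, port and password are examples; option letters as listed above) might look like this:

# glances -s -B 0.0.0.0 -p 61209 -P mypassword
# glances -c 172.16.27.56 -p 61209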

Read Also: Use Glances to Monitor Remote Linux in Web Server Mode

Conclusion

Glances is a very resource-friendly tool for most users. But if you’re a system administrator who’d like to quickly get an overall idea about your systems by just glancing at the command line, then this will be a must-have tool.

How to Delete HUGE (100-200GB) Files in Linux

Usually, to delete/remove a file from the Linux terminal, we use the rm command (delete files), shred command (securely delete a file), wipe command (securely erase a file) or the secure-deletion toolkit (a collection of secure file deletion tools).

We can use any of the above utilities to deal with relatively small files. What if we want to delete/remove a huge file/directory, say of about 100-200GB? This may not be as easy as it seems, in terms of the time taken to remove the file (I/O scheduling) as well as the amount of RAM consumed while carrying out the operation.

In this tutorial, we will explain how to efficiently and reliably delete huge files/directories in Linux.

Suggested Read: 5 Ways to Empty or Delete a Large File Content in Linux

The main aim here is to use a technique that will not slow down the system while removing a huge file, resulting in reasonable I/O. We can achieve this using the ionice command.

Deleting HUGE (200GB) Files in Linux Using ionice Command

ionice is a useful program which sets or gets the I/O scheduling class and priority for another program. If no arguments or just -p is given, ionice will query the current I/O scheduling class and priority for that process.

If we give a command name such as rm command, it will run this command with the given arguments. To specify the process IDs of running processes for which to get or set the scheduling parameters, run this:

# ionice -p PID

To specify the name or number of the scheduling class to use (0 for none, 1 for real time, 2 for best-effort, 3 for idle), pass the -c option as in the commands below.

This means that rm will belong to the idle I/O class and only use I/O when no other process needs it:

---- Deleting Huge Files in Linux -----
# ionice -c 3 rm /var/log/syslog
# ionice -c 3 rm -rf /var/log/apache

If there won’t be much idle time on the system, then we may want to use the best-effort scheduling class and set a low priority like this:

# ionice -c 2 -n 6 rm /var/log/syslog
# ionice -c 2 -n 6 rm -rf /var/log/apache
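
When a directory tree contains a very large number of small files, rm -rf can itself be slow; a common variant is to let find unlink the files one by one under the idle class, so the traversal and the deletions both yield I/O to other processes (the path is illustrative):

# ionice -c 3 find /var/log/apache -type f -delete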

Note: To delete huge files using a secure method, we may use the shred, wipe and various tools in the secure-deletion toolkit mentioned earlier on, instead of the rm command.

Suggested Read: 3 Ways to Permanently and Securely Delete Files/Directories’ in Linux

For more info, look through the ionice man page:

# man ionice 

That’s it for now! What other methods do you have in mind for the above purpose? Use the comment section below to share with us.

Configuration Management Tool Chef Announces to go 100% Open Source

Last updated April 5, 2019

In case you did not know, among the most popular automation software services, Chef is one of the best out there.

Recently, it announced some new changes to its business model and the software. We know that everyone here believes in the power of open source – and Chef supports that idea too. So, now they have decided to go 100% open source.

They will include all of their software under the Apache 2.0 license. You can use, modify, distribute and monetize their source code as long as you respect the trademark policy.

In addition to this, they’ve also introduced a new service for enterprises, we’ll take a look at that as you read on.

Chef going to be 100% open source

Why 100% Open Source?

The examples of some commercial open-source business models encouraged Chef to take this decision. In their blog post, they mentioned it:

We aren’t making this change lightly. Over the years we have experimented with and learned from a variety of different open source, community and commercial models, in search of the right balance. We believe that this change, and the way we have made it, best aligns the objectives of our communities with our own business objectives.

Barry Crist, CEO of Chef

So, they want people to collaborate on and utilize their source code without any restrictions. This is great news for people who want to experiment with their ideas in a non-commercial application. And, as for the enterprises working with Chef – the open source model will help them get the best out of Chef’s services.

Barry Crist (CEO of Chef) also mentioned:

This means that all of the software that we produce will be created in public repos. It also means that we will open up more of our product development process to the public, including roadmaps, triage and other aspects of our product design and planning process.

New Launch: Chef Enterprise Automation Stack

To streamline the way enterprises deploy and update their software, they have introduced a new ‘Chef Enterprise Automation Stack’, specifically tailored for enterprises relying on Chef.

However, it will also be available for free – for non-commercial usage or experimentation.

To describe it, Barry wrote:

Chef Enterprise Automation Stack is anchored by Chef Workstation, the quickest way to get a development environment up and running, and Chef Automate as the enterprise observability and management console for the system. Also included is Chef Infra (formerly just Chef) for infrastructure automation, Chef InSpec for security and compliance automation and Chef Habitat for application deployment and orchestration automation.

So, you get more perks now if you purchase a Chef subscription.

Wrapping Up

With these major changes, Chef definitely seems set to offer more streamlined services, keeping in mind the future of their software and the enterprises relying on it.

What do you think about it? Let us know your thoughts in the comments below.

How to Install Elixir and Phoenix Framework on Ubuntu 16.04

This tutorial will show you how to install Elixir and the Phoenix framework on a Vultr Ubuntu 16.04 server instance for development purposes.

Prerequisites

  • A new Vultr Ubuntu 16.04 server instance
  • Logged in as a non-root sudo user.

Update the system:

sudo apt-get update

Install Erlang

Install Erlang with the following commands:

cd ~
wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb 
sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
sudo apt-get install esl-erlang

You can verify the installation:

erl

This will take you to the Erlang shell with output like the following:

Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe]

Eshell V10.1  (abort with ^G)
1>    

Press CTRL + C twice to exit the Erlang shell.

Install Elixir

Install Elixir with apt-get:

sudo apt-get install elixir

Now you can verify the Elixir installation:

elixir -v

This will show the following output:

Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe]

Elixir 1.7.3 (compiled with Erlang/OTP 20)

Now you have Elixir 1.7.3 installed on your system.

Install Phoenix

Since we have just installed Elixir for the first time, we will need to install the Hex package manager as well. Hex is necessary to get a Phoenix app running, and to install any extra dependencies we might need along the way.

Type this command to install Hex:

mix local.hex

Now we can proceed to install Phoenix:

mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez

Install Node.js

Phoenix uses brunch.io to compile static assets (JavaScript, CSS and more), so you will need to install Node.js.

The recommended way to install Node.js is via nvm (node version manager).

To install nvm we run this command:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

To find out the versions of Node.js that are available for installation, you can type the following:

nvm ls-remote

This will output:

Output
...
     v8.8.1
     v8.9.0   (LTS: Carbon)
     v8.9.1   (LTS: Carbon)
     v8.9.2   (LTS: Carbon)
     v8.9.3   (LTS: Carbon)
     v8.9.4   (LTS: Carbon)
    v8.10.0   (LTS: Carbon)
    v8.11.0   (LTS: Carbon)
    v8.11.1   (LTS: Carbon)
    v8.11.2   (LTS: Carbon)
    v8.11.3   (LTS: Carbon)
    v8.11.4   (LTS: Carbon)
->  v8.12.0   (Latest LTS: Carbon)      
...

Install the version you would like with the following command:

nvm install 8.12.0

Note: If you would like to use a different version, replace 8.12.0 with the version you would like.

Tell nvm to use the version we just downloaded:

nvm use 8.12.0

Verify node has successfully installed:

node -v

Install PostgreSQL

You can install PostgreSQL easily using the apt packaging system.

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib

Open the PostgreSQL shell:

sudo -u postgres psql

Change the postgres password to a secure password:

\password postgres    

After successfully changing the password, you can exit the PostgreSQL shell:

\q

Restart the PostgreSQL service:

sudo systemctl restart postgresql.service

Install inotify-tools

This is a Linux-only filesystem watcher that Phoenix uses for live code reloading:

sudo apt-get install inotify-tools

Create a Phoenix application

Create a new application:

mix phx.new ~/phoenix_project_test

If the command returns the following error:

** (Mix) The task "phx.new" could not be found

You can fix it with the following command:

mix archive.install https://raw.githubusercontent.com/phoenixframework/archives/master/phx_new.ez

Now rerun the command to create a test Phoenix app:

mix phx.new ~/phoenix_project_test

Change the PostgreSQL password in the config file to the password you set in the previous step:

nano ~/phoenix_project_test/config/dev.exs
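
Inside dev.exs, the database settings live in the Repo configuration block, which looks roughly like the excerpt below in a freshly generated project (module and database names follow the project name; replace the password with your own):

config :phoenix_project_test, PhoenixProjectTest.Repo,
  username: "postgres",
  password: "your_postgres_password",
  database: "phoenix_project_test_dev",
  hostname: "localhost",
  pool_size: 10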

With the application created and configured, move to the application folder, create the database and start the server:

cd ~/phoenix_project_test
mix ecto.create
mix phx.server

Now the Phoenix application is up and running at port 4000.

AWK (Alfred V. Aho – Peter J. Weinberger – Brian W. Kernighan)

Ebook: Introducing the Awk Getting Started Guide for Beginners

As a Linux system administrator, you will often get into situations where you need to manipulate and reformat the output of different commands, or simply display part of an output by filtering out a few lines. This process can be referred to as text filtering, using a collection of Linux programs known as filters.

There are several Linux utilities for text filtering, and some of the well known filters include head, tail, grep, tr, fmt, sort, uniq, pr and more advanced and powerful tools such as Awk and Sed.

Introducing the Awk Getting Started Guide for Beginners

Unlike Sed, Awk is more than just a text filtering tool; it is a comprehensive and flexible text pattern scanning and processing language.

Awk is a strongly recommended text filtering tool for Linux; it can be utilized directly from the command line together with several other commands, within shell scripts, or in independent Awk scripts. It searches through input data, from a single file or multiple files, for user-defined patterns and modifies the input or file(s) based on certain conditions.

Since Awk is a sophisticated programming language, learning it requires a lot of time and dedication just as any other programming language out there. However, mastering a few basic concepts of this powerful text filtering language can enable you to understand how it actually works and sets you on track to learn more advanced Awk programming techniques.

After carefully and critically revising our 13 articles in the Awk programming series, with high consideration of the vital feedback from our followers and readers over the last 5 months, we have managed to organize the Introduction to Awk programming language eBook.

Therefore, if you are ready to start learning Awk programming language from the basic concepts, with simple and easy-to-understand, well explained examples, then you may consider reading this concise and precise eBook.

What’s Inside this eBook?

This book contains 13 chapters with a total of 41 pages, which cover all basic and advanced Awk usage with practical examples:

  1. Chapter 1: Awk Regular Expressions to Filter Text in Files
  2. Chapter 2: Use Awk to Print Fields and Columns in File
  3. Chapter 3: Use Awk to Filter Text Using Pattern Specific Actions
  4. Chapter 4: Learn Comparison Operators with Awk
  5. Chapter 5: Learn Compound Expressions with Awk
  6. Chapter 6: Learn ‘next’ Command with Awk
  7. Chapter 7: Read Awk Input from STDIN in Linux
  8. Chapter 8: Learn Awk Variables, Numeric Expressions and Assignment Operators
  9. Chapter 9: Learn Awk Special Patterns ‘BEGIN and END’
  10. Chapter 10: Learn Awk Built-in Variables
  11. Chapter 11: Learn Awk to Use Shell Variables
  12. Chapter 12: Learn Flow Control Statements in Awk
  13. Chapter 13: Write Scripts Using Awk Programming Language

How to Use Awk and Regular Expressions to Filter Text or String in Files

When we run certain commands in Unix/Linux to read or edit text from a string or file, we often try to filter the output down to a given section of interest. This is where using regular expressions comes in handy.

Read Also: 10 Useful Linux Chaining Operators with Practical Examples

What are Regular Expressions?

A regular expression can be defined as a string that represents several sequences of characters. One of the most important things about regular expressions is that they allow you to filter the output of a command or file, edit a section of a text or configuration file, and so on.

Features of Regular Expression

Regular expressions are made of:

  1. Ordinary characters such as space, underscore (_), A-Z, a-z, 0-9.
  2. Meta characters, which have special meanings; they include:
    1. (.) it matches any single character except a newline.
    2. (*) it matches zero or more occurrences of the character immediately preceding it.
    3. [ character(s) ] it matches any one of the characters specified in character(s); one can also use a hyphen (-) to mean a range of characters, such as [a-f], [1-5], and so on.
    4. ^ it matches the beginning of a line in a file.
    5. $ it matches the end of a line in a file.
    6. \ it is an escape character.

In order to filter text, one has to use a text filtering tool such as awk. You can think of awk as a programming language of its own. But for the scope of this guide to using awk, we shall cover it as a simple command line filtering tool.

The general syntax of awk is:

# awk 'script' filename

Where 'script' is a set of commands that are understood by awk and executed on the file, filename.

It works by reading a given line in the file, making a copy of the line, and then executing the script on the line. This is repeated for all the lines in the file.

The 'script' is in the form '/pattern/ action' where pattern is a regular expression and the action is what awk will do when it finds the given pattern in a line.

How to Use Awk Filtering Tool in Linux

In the following examples, we shall focus on the meta characters that we discussed above under the features of awk.

A simple example of using awk:

The example below prints all the lines in the file /etc/hosts, since no pattern is given.

# awk '//{print}' /etc/hosts

Awk Prints all Lines in a File

Use Awk with Pattern:

In the example below, the pattern localhost has been given, so awk will match lines having localhost in the /etc/hosts file.

# awk '/localhost/{print}' /etc/hosts 

Awk Print Given Matching Line in a File

Using Awk with (.) wild card in a Pattern

The (.) will match strings containing loc, localhost, localnet in the example below.

That is to say, the pattern matches l, followed by some single character, followed by c.

# awk '/l.c/{print}' /etc/hosts

Use Awk to Print Matching Strings in a File

Using Awk with (*) Character in a Pattern

It will match strings containing localhost, localnet, lines, capable, as in the example below:

# awk '/l*c/{print}' /etc/hosts

Use Awk to Match Strings in File

You will also realize that (*) tries to get you the longest match it can possibly detect.

Let’s look at a case that demonstrates this. Take the regular expression t*t, which means match strings that start with letter t and end with t in the line below:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint. 

You will get the following possibilities when you use the pattern /t*t/:

this is t
this is tecmint
this is tecmint, where you get t
this is tecmint, where you get the best good t
this is tecmint, where you get the best good tutorials, how t
this is tecmint, where you get the best good tutorials, how tos, guides, t
this is tecmint, where you get the best good tutorials, how tos, guides, tecmint

And the (*) wild card character in /t*t/ allows awk to choose the last option:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint

Using Awk with set [ character(s) ]

Take for example the set [al1]; here awk will match all strings containing the character a or l or 1 in a line in the file /etc/hosts.

# awk '/[al1]/{print}' /etc/hosts

Use Awk to Print Matching Character in File

The next example matches strings starting with either K or k, followed by T:

# awk '/[Kk]T/{print}' /etc/hosts 

Use Awk to Print Matched String in File

Specifying Characters in a Range

Understand characters with awk:

  1. [0-9] means a single number
  2. [a-z] means match a single lower case letter
  3. [A-Z] means match a single upper case letter
  4. [a-zA-Z] means match a single letter
  5. [a-zA-Z 0-9] means match a single letter or number

Let’s look at an example below:

# awk '/[0-9]/{print}' /etc/hosts 

Use Awk To Print Matching Numbers in File

In the above example, every line from the file /etc/hosts contains at least one number [0-9].

Use Awk with (^) Meta Character

It matches all the lines that start with the pattern provided as in the example below:

# awk '/^fe/{print}' /etc/hosts
# awk '/^ff/{print}' /etc/hosts

Use Awk to Print All Matching Lines with Pattern

Use Awk with ($) Meta Character

It matches all the lines that end with the pattern provided:

# awk '/ab$/{print}' /etc/hosts
# awk '/ost$/{print}' /etc/hosts
# awk '/rs$/{print}' /etc/hosts

Use Awk to Print Given Pattern String

Use Awk with (\) Escape Character

It allows you to take the character following it as a literal, that is to say, to consider it just as it is.

In the example below, the first command prints out all lines in the file; the second command prints nothing, because I want to match a line containing $25.00, but no escape character is used.

The third command is correct, since an escape character has been used to read $ as it is.

# awk '//{print}' deals.txt
# awk '/$25.00/{print}' deals.txt
# awk '/\$25.00/{print}' deals.txt

Use Awk with Escape Character

Summary

That is not all there is to the awk command line filtering tool; the examples above are the basic operations of awk. In the next parts, we shall advance to more complex features of awk. Thanks for reading through, and for any additions or clarifications, post a comment in the comments section.

How to Use Awk to Print Fields and Columns in File

In this part of our Linux Awk command series, we shall have a look at one of the most important features of Awk, which is field editing.

It is good to know that Awk automatically divides input lines provided to it into fields, and a field can be defined as a set of characters that are separated from other fields by an internal field separator.

Awk Print Fields and Columns

If you are familiar with Unix/Linux or bash shell programming, then you should know what the internal field separator (IFS) variable is. The default IFS characters in Awk are the tab and the space.

This is how the idea of field separation works in Awk: when it encounters an input line, according to the IFS defined, the first set of characters is field one, which is accessed using $1, the second set of characters is field two, which is accessed using $2, the third set of characters is field three, which is accessed using $3 and so forth till the last set of character(s).

To understand this Awk field editing better, let us take a look at the examples below:

Example 1: I have created a text file called tecmintinfo.txt.

# vi tecmintinfo.txt
# cat tecmintinfo.txt

Create File in Linux

Then from the command line, I try to print the first, second and third fields from the file tecmintinfo.txt using the command below:

$ awk '//{print $1 $2 $3 }' tecmintinfo.txt

TecMint.comisthe

From the output above, you can see that the characters from the first three fields are printed based on the IFS defined, which is space:

  1. Field one which is “TecMint.com” is accessed using $1.
  2. Field two which is “is” is accessed using $2.
  3. Field three which is “the” is accessed using $3.

As you may have noticed in the printed output, the field values are not separated; this is how print behaves by default.

To view the output clearly with spaces between the field values, you need to add the (,) operator as follows:

$ awk '//{print $1, $2, $3; }' tecmintinfo.txt

TecMint.com is the

One important thing to note and always remember is that the use of ($) in Awk is different from its use in shell scripting.

In shell scripting, ($) is used to access the value of variables, while in Awk, ($) is used only when accessing the contents of a field, not for accessing the value of variables.

Example 2: Let us take a look at one other example using a file which contains multiple lines, called my_shopping.txt.

No	Item_Name		Unit_Price	Quantity	Price
1	Mouse			#20,000		   1		#20,000
2 	Monitor			#500,000	   1		#500,000
3	RAM_Chips		#150,000	   2		#300,000
4	Ethernet_Cables	        #30,000		   4		#120,000		

Say you want to print the Item_Name and Unit_Price of each item on the shopping list; you will need to run the command below:

$ awk '//{print $2, $3 }' my_shopping.txt 

Item_Name Unit_Price
Mouse #20,000
Monitor #500,000
RAM_Chips #150,000
Ethernet_Cables #30,000

Awk also has a printf command that helps you format your output in a nice way, as you can see that the above output is not clear enough.

Using printf to format output of the Item_Name and Unit_Price:

$ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt 

Item_Name  Unit_Price
Mouse      #20,000
Monitor    #500,000
RAM_Chips  #150,000
Ethernet_Cables #30,000
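
By default Awk splits fields on spaces and tabs as explained above, but you can tell it to use a different field separator with the standard -F option. For example, to print the user name and user ID fields from the colon-delimited /etc/passwd file:

$ awk -F':' '{ print $1, $3 }' /etc/passwd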

Summary

Field editing is very important when using Awk to filter text or strings; it helps you get particular data from columns in a list. And always remember that the use of the ($) operator in Awk is different from its use in shell scripting.

I hope the article was helpful to you and for any additional information required or questions, you can post a comment in the comment section.

How to Use Awk to Filter Text or Strings Using Pattern Specific Actions

In the third part of the Awk command series, we shall take a look at filtering text or strings based on specific patterns that a user can define.

Sometimes, when filtering text, you want to flag certain lines from an input file or lines of strings based on a given condition or using a specific pattern that can be matched. Doing this with Awk is very easy; it is one of the great features of Awk that you will find helpful.

Let us take a look at the example below. Say you have a shopping list for food items that you want to buy, called food_prices.list. It has the following list of food items and their prices.

$ cat food_prices.list 
No	Item_Name		Quantity	Price
1	Mangoes			   10		$2.45
2	Apples			   20		$1.50
3	Bananas			   5		$0.90
4	Pineapples		   10		$3.46
5	Oranges			   10		$0.78
6	Tomatoes		   5		$0.55
7	Onions			   5            $0.45

And then, you want to indicate a (*) sign on food items whose price is greater than $2; this can be done by running the following command:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list

Print Items Whose Price is Greater Than $2

From the output above, you can see that there is a (*) sign at the end of the lines having the food items mangoes and pineapples. If you check their prices, they are above $2.

In this example, we have used two patterns:

  1. the first: / *\$[2-9]\.[0-9][0-9] */ gets the lines that have a food item price greater than $2, and
  2. the second: / *\$[0-1]\.[0-9][0-9] */ looks for lines with a food item price less than $2.

This is what happens: there are four fields in the file; when pattern one encounters a line with a food item price greater than $2, it prints all the four fields and a (*) sign at the end of the line as a flag.

The second pattern simply prints the other lines with food price less than $2 as they appear in the input file, food_prices.list.

This way you can use pattern-specific actions to filter out food items that are priced above $2, though there is a problem with the output: the lines that have the (*) sign are not formatted like the rest of the lines, making the output not clear enough.

We saw the same problem in Part 2 of the awk series, but we can solve it in two ways:

1. Using the printf command, which is a long and boring way, using the command below:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list 

Filter and Print Items Using Awk and Printf

2. Using the $0 field. Awk uses the field variable $0 to store the whole input line. This is handy for solving the problem above, and it is simple and fast, as follows:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list 

Filter and Print Items Using Awk and Variable

Conclusion

That’s it for now; these are simple ways of filtering text using pattern-specific actions that can help in flagging lines of text or strings in a file using the Awk command.

Hope you find this article helpful, and remember to read the next part of the series, which will focus on using comparison operators with the awk tool.

How to Use Comparison Operators with Awk in Linux – Part 4

When dealing with numerical or string values in a line of text, filtering text or strings using comparison operators comes in handy for Awk command users.

In this part of the Awk series, we shall take a look at how you can filter text or strings using comparison operators. If you are a programmer then you must already be familiar with comparison operators, but for those who are not, let me explain in the section below.

What are Comparison operators in Awk?

Comparison operators in Awk are used to compare the value of numbers or strings and they include the following:

  1. > – greater than
  2. < – less than
  3. >= – greater than or equal to
  4. <= – less than or equal to
  5. == – equal to
  6. != – not equal to
  7. some_value ~ /pattern/ – true if some_value matches pattern
  8. some_value !~ /pattern/ – true if some_value does not match pattern

Now that we have looked at the various comparison operators in Awk, let us understand them better using an example.

In this example, we have a file named food_list.txt, which is a shopping list for different food items, and I would like to flag food items whose quantity is less than or equal to 30 by adding (**) at the end of each line.

File – food_list.txt
No      Item_Name               Quantity        Price
1       Mangoes                    45           $3.45
2       Apples                     25           $2.45
3       Pineapples                 5            $4.45
4       Tomatoes                   25           $3.45
5       Onions                     15           $1.45
6       Bananas                    30           $3.45

The general syntax for using comparison operators in Awk is:

# expression { actions; }

To achieve the above goal, I will have to run the command below:

# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt

No	Item_Name		Quantity	Price
1	Mangoes	      		   45		$3.45
2	Apples			   25		$2.45	**
3	Pineapples		   5		$4.45	**
4	Tomatoes		   25		$3.45	**
5	Onions			   15           $1.45	**
6	Bananas			   30           $3.45	**

In the above example, there are two important things that happen:

  1. The first expression { action ; } combination, $3 <= 30 { printf "%s\t%s\n", $0,"**" ; }, prints out lines with a quantity less than or equal to 30 and adds (**) at the end of each line. The value of the quantity is accessed using the $3 field variable.
  2. The second expression { action ; } combination, $3 > 30 { print $0 ;}, prints out lines unchanged, since their quantity is greater than 30.

One more example:

# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20  { print $0 ;} ' food_list.txt 

No	Item_Name		Quantity	Price
1	Mangoes			   45		$3.45
2	Apples			   25		$2.45
3	Pineapples		   5		$4.45	TRUE
4	Tomatoes		   25		$3.45
5	Onions			   15           $1.45	TRUE
6       Bananas	                   30           $3.45

In this example, we want to indicate lines whose quantity is less than or equal to 20 with the word TRUE at the end.

Summary

This is an introductory tutorial on comparison operators in Awk, so try out the other operators and discover more on your own.

In case of any problems you face or any additions that you have in mind, then drop a comment in the comment section below. Remember to read the next part of the Awk series where I will take you through compound expressions.

How to Use Compound Expressions with Awk in Linux – Part 5

So far, we have been looking at simple expressions when checking whether a condition has been met or not. What if you want to use more than one expression to check for a particular condition?

In this article, we shall take a look at how you can combine multiple expressions, referred to as compound expressions, to check for a condition when filtering text or strings.

In Awk, compound expressions are built using the && (and) and || (or) compound operators.

The general syntax for compound expressions is:

( first_expression ) && ( second_expression )

Here, both first_expression and second_expression must be true to make the whole expression true.

( first_expression ) || ( second_expression )

Here, at least one of the expressions, either first_expression or second_expression, must be true for the whole expression to be true.

Caution: Remember to always include the parentheses.

The expressions can be built using the comparison operators that we looked at in Part 4 of the awk series.

Let us now get a clear understanding using an example below:

In this example, we have a text file named tecmint_deals.txt, which contains a list of some random Tecmint deals; it includes the name of the deal, the price and the type.

TecMint Deal List
No      Name                                    Price           Type
1       Mac_OS_X_Cleanup_Suite                  $9.99           Software
2       Basics_Notebook                         $14.99          Lifestyle
3       Tactical_Pen                            $25.99          Lifestyle
4       Scapple                                 $19.00          Unknown
5       Nano_Tool_Pack                          $11.99          Unknown
6       Ditto_Bluetooth_Altering_Device         $33.00          Tech
7       Nano_Prowler_Mini_Drone                 $36.99          Tech 

Say we want to print and flag only deals that are priced above $20 and are of type “Tech”, using the (*) sign at the end of each line.

We shall need to run the command below.

# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt

6	Ditto_Bluetooth_Altering_Device		$33.00		Tech	*
7	Nano_Prowler_Mini_Drone			$36.99          Tech	 *

In this example, we have used two expressions in a compound expression:

  1. The first expression, ($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/), checks for lines whose deal price is above $20; it is only true if the value of $3, which is the price, matches the pattern /^\$[2-9][0-9]*\.[0-9][0-9]$/.
  2. The second expression, ($4 == "Tech"), checks whether the deal is of type “Tech”; it is only true if the value of $4 equals “Tech”.

Remember, a line will only be flagged with the (*) if both the first and second expressions are true, as the && operator demands.
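
Since only the && operator is demonstrated above, here is a small complementary sketch (not from the original example) of the || operator against the same tecmint_deals.txt file, printing deals that are of type "Lifestyle" or of type "Unknown":

# awk '($4 == "Lifestyle") || ($4 == "Unknown") { print $0 ; }' tecmint_deals.txt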

Summary

Some conditions require building compound expressions for you to match exactly what you want. Once you understand the use of comparison and compound expression operators, filtering text or strings based on tricky conditions becomes easy.

Hope you find this guide useful; for any questions or additions, remember to leave a comment below.

How to Use ‘next’ Command with Awk in Linux – Part 6

In this sixth part of the Awk series, we shall look at using the next command, which tells Awk to skip all remaining patterns and expressions you have provided and read the next input line instead.

The next command helps you avoid executing what I would refer to as time-wasting steps during command execution.

To understand how it works, let us consider a file called food_list.txt that looks like this:

Food List Items
No      Item_Name               Price           Quantity
1       Mangoes                 $3.45              5
2       Apples                  $2.45              25
3       Pineapples              $4.45              55
4       Tomatoes                $3.45              25
5       Onions                  $1.45              15
6       Bananas                 $3.45              30

Consider running the following command that will flag food items whose quantity is less than or equal to 20 with a (*) sign at the end of each line:

# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt 

No	Item_Name		Price		Quantity
1	Mangoes			$3.45		   5	*
2	Apples			$2.45              25
3	Pineapples		$4.45              55
4	Tomatoes		$3.45              25 
5	Onions			$1.45              15	*
6	Bananas	                $3.45              30

The command above actually works as follows:

  1. First, it checks whether the quantity, the fourth field of each input line, is less than or equal to 20; if a line meets that condition, it is printed and flagged with the (*) sign at the end by the first expression: $4 <= 20
  2. Secondly, it checks whether the fourth field of each input line is greater than 20; if a line meets that condition, it gets printed by the second expression: $4 > 20

But there is one problem here: when the first expression is executed, a line that we want to flag is printed using { printf "%s\t%s\n", $0,"*" ; }, and then, in the same step, the second expression is also checked, which wastes time.

So there is no need to evaluate the second expression, $4 > 20, against lines that have already been flagged and printed by the first expression.

To deal with this problem, you have to use the next command as follows:

# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt

No	Item_Name		Price		Quantity
1	Mangoes			$3.45		   5	*
2	Apples			$2.45              25
3	Pineapples		$4.45              55
4	Tomatoes		$3.45              25 
5	Onions			$1.45              15	*
6	Bananas	                $3.45              30

After an input line is printed using $4 <= 20 { printf "%s\t%s\n", $0,"*" ; next ; }, the included next command skips the second expression, $4 > 20 { print $0 ;}, so execution moves on to the next input line without wasting time checking whether the quantity is greater than 20.
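
A side benefit worth noting: because next guarantees that a flagged line never reaches the rules below it, the second condition can be dropped and replaced with a plain catch-all rule. A minimal sketch of this equivalent form:

# awk '$4 <= 20 { printf "%s\t%s\n", $0, "*" ; next ; } { print $0 ; }' food_list.txt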

The next command is very important in writing efficient commands and, where necessary, you can always use it to speed up the execution of a script. Prepare for the next part of the series, where we shall look at using standard input (STDIN) as input for Awk.

Hope you find this how to guide helpful and you can as always put your thoughts in writing by leaving a comment in the comment section below.

How to Read Awk Input from STDIN in Linux – Part 7

In the previous parts of the Awk tool series, we looked at reading input mostly from files, but what if you want to read input from STDIN?

In this Part 7 of the Awk series, we shall look at a few examples where you can filter the output of other commands instead of reading input from a file.

We shall start with the dir utility, which works similarly to the ls command. In the first example below, we use the output of the dir -l command as input for Awk to print the owner’s username, group name and the files they own in the current directory:

# dir -l | awk '{print $3, $4, $9;}'

List Files Owned By User in Directory

Take a look at another example where we employ awk expressions. Here, we want to print files owned by the root user by using an expression to filter strings, as in the awk command below:

# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} '

List Files Owned by Root User

The command above includes the (==) comparison operator to help us filter out files in the current directory which are owned by the root user. This is achieved using the expression $3=="root".

Let us look at another example where we use an awk comparison operator to match a certain string.

Here, we have used the cat utility to view the contents of a file named tecmint_deals.txt and we want to view the deals of type Tech only, so we shall run the following commands:

# cat tecmint_deals.txt
# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}'
# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}'

Use Awk Comparison Operator to Match String

In the example above, we used the value ~ /pattern/ comparison operator, running two commands to bring out something very important.

When you run the command with pattern tech nothing is printed out because there is no deal of that type, but with Tech, you get deals of type Tech.

So always be careful when using this comparison operator: it is case sensitive, as we have seen above.

You can always use the output of another command as input for awk instead of reading input from a file; this is very simple, as we have seen in the examples above.
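
As one more illustrative sketch (the 80% threshold here is an arbitrary assumption), you can pipe df -h into awk to report filesystems whose usage exceeds a limit; adding 0 to $5 coerces a value like "52%" into the number 52:

$ df -h | awk '$5+0 > 80 { print $1, $5 ; }'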

Hope the examples were clear enough for you to understand; if you have any concerns, you can express them through the comment section below, and remember to check the next part of the series, where we shall look at awk features such as variables, numeric expressions and assignment operators.

Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – Part 8

The Awk command series is getting exciting, I believe; in the previous seven parts, we walked through some fundamentals of Awk that you need to master to perform basic text or string filtering in Linux.

Starting with this part, we shall dive into advanced areas of Awk to handle more complex text or string filtering operations. Therefore, we are going to cover Awk features such as variables, numeric expressions and assignment operators.

Learn Awk Variables, Numeric Expressions and Assignment Operators

These concepts are not fundamentally different from the ones you have probably encountered in many programming languages before, such as shell, C, Python and many others, so there is no need to worry much about this topic; we are simply revisiting the common ideas behind these features.

This will probably be one of the easiest Awk command sections to understand, so sit back and let’s get going.

1. Awk Variables

In any programming language, a variable is a placeholder which stores a value. When you create a variable in a program file, space is allocated in memory as the file is executed to store the value you specify for the variable.

You can define Awk variables in the same way you define shell variables as follows:

variable_name=value 

In the syntax above:

  1. variable_name: is the name you give a variable
  2. value: the value stored in the variable

Let’s look at some examples below:

computer_name="tecmint.com"
port_no="22"
email="admin@tecmint.com"
server=computer_name

Take a look at the simple examples above, in the first variable definition, the value tecmint.com is assigned to the variable computer_name.

Furthermore, the value 22 is assigned to the variable port_no. It is also possible to assign the value of one variable to another, as in the last example, where we assigned the value of computer_name to the variable server (note that the name is not quoted, otherwise the literal string would be assigned instead).
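
The assignments above are shown in isolation; to see them inside a runnable Awk program, here is a minimal sketch that uses the BEGIN pattern (covered fully in Part 9) so that no input file is needed:

$ awk 'BEGIN{ computer_name="tecmint.com" ; port_no=22 ; server=computer_name ; print server, port_no }'
Sample Output
tecmint.com 22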

If you can recall, right from Part 2 of this Awk series, where we covered field editing, we talked about how Awk divides input lines into fields and uses the standard field access operator, $, to read the different fields that have been parsed. We can also use variables to store the values of fields as follows.

first_name=$2
second_name=$3

In the examples above, the value of first_name is set to the value of the second field and second_name to that of the third field.

As an illustration, consider a file named names.txt which contains a list of an application’s users indicating their first and last names plus gender. Using the cat command, we can view the contents of the file as follows:

$ cat names.txt

List File Content Using cat Command

Then, we can also use the variables first_name and second_name to store the first and second names of the first user on the list by running the Awk command below:

$ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt

Store Variables Using Awk Command

Let us also take a look at another case. When you issue the command uname -a on your terminal, it prints out all your system information.

The second field contains your hostname, therefore we can store the hostname in a variable called hostname and print it using Awk as follows:

$ uname -a
$ uname -a | awk '{hostname=$2 ; print hostname ; }' 

Store Command Output to Variable Using Awk

2. Numeric Expressions

In Awk, numeric expressions are built using the following numeric operators:

  1. * : multiplication operator
  2. + : addition operator
  3. / : division operator
  4. - : subtraction operator
  5. % : modulus operator
  6. ^ : exponentiation operator

The syntax for a numeric expression is:

$ operand1 operator operand2

In the form above, operand1 and operand2 can be numbers or variable names, and operator is any of the operators above.

Below are some examples to demonstrate how to build numeric expressions:

counter=0
num1=5
num2=10
num3=num2-num1
counter=counter+1
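
These expressions are again shown in isolation; wrapped in a BEGIN block (a quick sketch, so the program runs without an input file), they can be executed and checked directly:

$ awk 'BEGIN{ num1=5 ; num2=10 ; num3=num2-num1 ; print num3, num2%num1, num2^2 }'
Sample Output
5 0 100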

To understand the use of numeric expressions in Awk, we shall consider the example below, with the file domains.txt, which contains all domains owned by Tecmint.

news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com

To view the contents of the file, use the command below:

$ cat domains.txt

View Contents of File

If we want to count the number of times the domain tecmint.com appears in the file, we can write a simple script to do that as follows:

#!/bin/bash
for file in "$@"; do
        if [ -f "$file" ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk '/^tecmint.com/ { counter=counter+1 ; printf "%s\n", counter ; }' "$file"
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Shell Script to Count a String or Text in File

After creating the script, save it and make it executable. When we run it with the file domains.txt as our input, we get the following output:

$ ./script.sh  ~/domains.txt

Script to Count String or Text

From the output of the script, there are 6 lines in the file domains.txt that contain tecmint.com; you can count them manually to confirm.

3. Assignment Operators

The last Awk feature we shall cover is assignment operators. There are several assignment operators in Awk, and they include the following:

  1. *= : multiplication assignment operator
  2. += : addition assignment operator
  3. /= : division assignment operator
  4. -= : subtraction assignment operator
  5. %= : modulus assignment operator
  6. ^= : exponentiation assignment operator

The simplest syntax of an assignment operation in Awk is as follows:

$ variable_name=variable_name operator operand

Examples:

counter=0
counter=counter+1

num=20
num=num-1

You can use the assignment operators above to shorten assignment operations in Awk. Considering the previous examples, we could perform the assignments in the following form:

variable_name operator=operand
counter=0
counter+=1

num=20
num-=1
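
As a quick runnable check (a sketch using a BEGIN block, which is covered in Part 9), the shorthand forms produce the same results as the longer forms:

$ awk 'BEGIN{ counter=0 ; counter+=1 ; num=20 ; num-=1 ; print counter, num }'
Sample Output
1 19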

Therefore, we can alter the Awk command in the shell script we just wrote above using the += assignment operator as follows:

#!/bin/bash
for file in "$@"; do
        if [ -f "$file" ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' "$file"
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Alter Shell Script

In this segment of the Awk series, we covered some powerful Awk features, that is, variables, building numeric expressions and using assignment operators, plus a few illustrations of how we can actually use them.

These concepts are not any different from the ones in other programming languages, but there may be some distinctions specific to Awk programming.

In Part 9, we shall look at more Awk features, that is, the special patterns BEGIN and END.

Learn How to Use Awk Special Patterns ‘BEGIN and END’ – Part 9

In Part 8 of this Awk series, we introduced some powerful Awk command features, that is variables, numeric expressions and assignment operators.

As we advance, in this segment, we shall cover more Awk features, and that is the special patterns: BEGIN and END.

Learn Awk Patterns BEGIN and END

These special features will prove helpful as we try to expand on and explore more methods of building complex Awk operations.

To get started, let us cast our minds back to the introduction of the Awk series; remember, when we started this series, I pointed out that the general syntax for running an Awk command is:

# awk 'script' filenames  

And in the syntax above, the Awk script has the form:

/pattern/ { actions } 

When you consider the pattern in the script, it is normally a regular expression; additionally, the pattern can also be one of the special patterns BEGIN and END. Therefore, we can also write an Awk command in the form below:

awk '
 	BEGIN { actions } 
 	/pattern/ { actions }
 	/pattern/ { actions }
            ...
	 END { actions } 
' filenames  

In the event that you use the special patterns: BEGIN and END in an Awk script, this is what each of them means:

  1. BEGIN pattern: means that Awk will execute the action(s) specified in BEGIN once before any input lines are read.
  2. END pattern: means that Awk will execute the action(s) specified in END before it actually exits.

The flow of execution of an Awk command script which contains these special patterns is as follows:

  1. When the BEGIN pattern is used in a script, all the actions for BEGIN are executed once before any input line is read.
  2. Then an input line is read and parsed into the different fields.
  3. Next, each of the non-special patterns specified is compared with the input line for a match, when a match is found, the action(s) for that pattern are then executed. This stage will be repeated for all the patterns you have specified.
  4. Next, stage 2 and 3 are repeated for all input lines.
  5. When all input lines have been read and dealt with, in case you specify the END pattern, the action(s) will be executed.

You should always remember this sequence of execution when working with the special patterns to achieve the best results in an Awk operation.
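
To see this sequence in miniature before the larger example, here is a small self-contained sketch (the echo input is just illustrative) that exercises BEGIN, a main rule and END in a single command:

$ echo -e "one\ntwo" | awk 'BEGIN{ print "Before any input" } { print "Line:", $0 } END{ print "After all input" }'
Sample Output
Before any input
Line: one
Line: two
After all input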

To understand it all, let us illustrate using the example from part 8, about the list of domains owned by Tecmint, as stored in a file named domains.txt.

$ cat ~/domains.txt
news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com

View Contents of File

In this example, we want to count the number of times the domain tecmint.com is listed in the file domains.txt. So we wrote a small shell script to help us do that using the idea of variables, numeric expressions and assignment operators which has the following content:

#!/bin/bash
for file in "$@"; do
        if [ -f "$file" ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' "$file"
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Let us now employ the two special patterns, BEGIN and END, in the Awk command in the script above. We shall alter the Awk command from:

awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' "$file"

To:

awk ' BEGIN {  print "The number of times tecmint.com appears in the file is:" ; }
                      /^tecmint.com/ {  counter+=1  ;  }
                      END {  printf "%s\n",  counter  ; } 
                    ' "$file"

After making the changes to the Awk command, the complete shell script now looks like this:

#!/bin/bash
for file in "$@"; do
        if [ -f "$file" ] ; then
                #print out filename
                echo "File is: $file"
                #print the total number of times tecmint.com appears in the file
                awk ' BEGIN {  print "The number of times tecmint.com appears in the file is:" ; }
                      /^tecmint.com/ {  counter+=1  ;  }
                      END {  printf "%s\n",  counter  ; } 
                    ' "$file"
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Awk BEGIN and END Patterns

When we run the script above, it first prints the name of the file domains.txt, then the Awk command script is executed, where the BEGIN special pattern helps us print out the message “The number of times tecmint.com appears in the file is:” before any input lines are read from the file.

Then our pattern, /^tecmint.com/, is compared against every input line, and the action { counter+=1 ; } is executed for each matching line, counting the number of times tecmint.com appears in the file.

Finally, the END pattern prints the total number of times the domain tecmint.com appears in the file.
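
As a side note, once BEGIN and END are available, the surrounding shell loop is not strictly necessary for the counting itself; a one-liner sketch of the same count (with the dot escaped so it matches a literal period) would be:

$ awk '/^tecmint\.com/ { counter+=1 } END { print "tecmint.com appears", counter, "times" }' ~/domains.txt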

$ ./script.sh ~/domains.txt 

Script to Count Number of Times String Appears

To conclude, we walked through more Awk features, exploring the concepts of the special patterns BEGIN and END.

As I pointed out before, these Awk features will help us build more complex text filtering operations. There is more to cover under Awk features, and in Part 10, we shall approach the idea of Awk built-in variables, so stay connected.

Learn How to Use Awk Built-in Variables – Part 10

As we continue uncovering Awk features, in this part of the series we shall walk through the concept of built-in variables in Awk. There are two types of variables you can use in Awk: user-defined variables, which we covered in Part 8, and built-in variables.

Awk Built-in Variables Examples

Built-in variables have values already defined in Awk, but we can also carefully alter those values. The built-in variables include:

  1. FILENAME : current input file name (do not change the variable name)
  2. NR : number of the current input line, that is input line 1, 2, 3… and so on (do not change the variable name)
  3. NF : number of fields in the current input line (do not change the variable name)
  4. OFS : output field separator
  5. FS : input field separator
  6. ORS : output record separator
  7. RS : input record separator

Let us proceed to illustrate the use of some of the Awk built-in variables above:

To read the filename of the current input file, you can use the FILENAME built-in variable as follows:

$ awk ' { print FILENAME } ' ~/domains.txt 

Awk FILENAME Variable

You will realize that the filename is printed out for each input line; that is the default behavior of Awk when you use the FILENAME built-in variable.

Use NR to count the number of lines (records) in an input file; remember that it also counts empty lines, as we shall see in the example below.

When we view the file domains.txt using the cat command, it contains 14 lines of text and 2 empty lines:

$ cat ~/domains.txt

Print Contents of File

$ awk ' END { print "Number of records in file is: ", NR } ' ~/domains.txt 

Awk Count Number of Lines

To count the number of fields in a record or line, we use the NF built-in variable as follows:

$ cat ~/names.txt

List File Contents

$ awk '{ print "Record:",NR,"has",NF,"fields" ; }' ~/names.txt

Awk Count Number of Fields in File

Next, you can also specify an input field separator using the FS built-in variable; it defines how Awk divides input lines into fields.

The default value of FS is whitespace (spaces and tabs), but we can change FS to any character that will instruct Awk to divide input lines accordingly.

There are two methods to do this:

  1. one method is to use the FS built-in variable
  2. and the second is to invoke the -F Awk option

Consider the file /etc/passwd on a Linux system. The fields in this file are divided using the : character, so we can specify it as the new input field separator when we want to filter out certain fields, as in the following examples.

We can use the -F option as follows:

$ awk -F':' '{ print $1, $4 ;}' /etc/passwd

Awk Filter Fields in Password File

Optionally, we can also take advantage of the FS built-in variable as below:

$ awk ' BEGIN { FS=":" ; } { print $1, $4 ; } ' /etc/passwd

Filter Fields in File Using Awk

To specify an output field separator, use the OFS built-in variable; it defines the character used to separate output fields, as in the example below:

$ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd

Add Separator to Field in File
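
One detail worth stressing: OFS is applied only when the fields in print are separated by commas; concatenating fields directly bypasses it. A minimal check (the echo input is just an illustrative assumption):

$ echo "a:b:c:d" | awk -F':' 'BEGIN{ OFS="==>" } { print $1, $4 }'
Sample Output
a==>d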

In this Part 10, we have explored the idea of using Awk built-in variables, which come with predefined values. We can also change these values, though it is not recommended to do so unless you know what you are doing. After this, we shall progress to cover how we can use shell variables in Awk command operations.

How to Allow Awk to Use Shell Variables – Part 11

When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. In the case of Awk, we have to find ways of passing some values from the shell to Awk operations.

This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands.

There are essentially two ways you can enable Awk to use shell variables:

1. Using Shell Quoting

Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information.

Therefore, we can write a test.sh script with the following content:

#!/bin/bash

#read user input
read -p "Please enter username:" username

#search for username in /etc/passwd file and print details on the screen
cat /etc/passwd | awk "/$username/ "' { print $0 }'

Thereafter, save the file and exit.

Interpretation of the Awk command in the test.sh script above:

cat /etc/passwd | awk "/$username/ "' { print $0 }'

"/$username/ " – shell quoting used to substitute value of shell variable username in Awk command. The value of username is the pattern to be searched in the file /etc/passwd.

Note that the double-quoted pattern sits outside the single-quoted Awk action ‘{ print $0 }’, so the shell expands $username while $0 is left for Awk to interpret.

Then make the script executable and run it as follows:

$ chmod  +x  test.sh
$ ./test.sh 

After running the script, you will be prompted to enter a username, type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below:

Shell Script to Find Username in Password File

2. Using Awk’s Variable Assignment

This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. Under this method, we use the -v option to assign a shell variable to an Awk variable.

Firstly, create a shell variable, username, and assign it the name that we want to search for in the /etc/passwd file:

username="aaronkilik"

Then type the command below and hit Enter:

# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}'

Find Username in Password File Using Awk

Explanation of the above command:

  1. -v – Awk option to declare a variable
  2. username – is the shell variable
  3. name – is the Awk variable

Let us take a careful look at $0 ~ name inside the Awk script, ' $0 ~ name {print $0}'. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ /pattern/, which means: true if value matches the pattern.

The output ($0) of the cat command piped to Awk is matched against the pattern (aaronkilik), which is the name we are searching for in /etc/passwd; when it matches, the comparison operation is true, and the line containing the user’s account information is printed on the screen.
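
Note also that the cat pipe is optional here, since awk can read the file directly; an equivalent sketch is:

# awk -v name="$username" '$0 ~ name { print $0 }' /etc/passwd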

Conclusion

We have covered an important section of Awk features that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts, and therefore you need a clear understanding of how to use shell variables within Awk commands.

In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is, flow control statements. So stay tuned, and let’s keep learning and sharing.

How to Use Flow Control Statements in Awk – Part 12

When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is, one after the other. But in certain situations, we may want to run some text filtering operations based on certain conditions; that is where flow control statements come in.

Use Flow Control Statements in Awk

There are various flow control statements in Awk programming and these include:

  1. if-else statement
  2. for statement
  3. while statement
  4. do-while statement
  5. break statement
  6. continue statement
  7. next statement
  8. nextfile statement
  9. exit statement

However, for the scope of this series, we shall expound on the if-else, for, while and do-while statements. Remember that we already walked through how to use the next statement in Part 6 of this Awk series.

1. The if-else Statement

The expected syntax of the if statement is similar to that of the shell if statement:

if  (condition1) {
     actions1
}
else {
      actions2
}

In the above syntax, condition1 is an Awk expression, and actions1 and actions2 are Awk commands executed depending on whether the condition is satisfied.

When condition1 is satisfied, meaning it’s true, then actions1 is executed and the if statement exits, otherwise actions2 is executed.

The if statement can also be expanded to an if-else_if-else statement as below:

if (condition1){
     actions1
}
else if (conditions2){
      actions2
}
else{
     actions3
}

For the form above, if condition1 is true, then actions1 is executed and the if statement exits; otherwise, condition2 is evaluated, and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is also false, actions3 is executed and the if statement exits.

Here is a case in point for using if statements: we have a list of users and their ages stored in the file users.txt.

We want to print a statement indicating a user’s name and whether the user’s age is less or more than 25 years old.

aaronkilik@tecMint ~ $ cat users.txt
Sarah L			35    	F
Aaron Kili		40    	M
John  Doo		20    	M
Kili  Seth		49    	M    

We can write a short shell script to carry out the job above; here is the content of the script:

#!/bin/bash
awk ' { 
        if ( $3 <= 25 ) {
           print "User",$1,$2,"is less than 25 years old." ;
        }
        else {
           print "User",$1,$2,"is more than 25 years old." ;
        }
}' ~/users.txt

Then save the file and exit, make the script executable and run it as follows:

$ chmod +x test.sh
$ ./test.sh
Sample Output
User Sarah L is more than 25 years old.
User Aaron Kili is more than 25 years old.
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old.

2. The for Statement

If you want to execute some Awk commands in a loop, the for statement offers you a suitable way to do that, with the syntax below:

Here, the loop is controlled by a counter: first you initialize the counter, then test it against a condition; if the condition is true, the actions are executed and the counter is incremented. The loop terminates once the counter no longer satisfies the condition.

for ( counter-initialization; test-condition; counter-increment ){
      actions
}

The following Awk command shows how the for statement works, where we want to print the numbers 0-10:

$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
Sample Output
0
1
2
3
4
5
6
7
8
9
10
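
The loop body can of course do more than print; as a small sketch, the same construct can accumulate a sum, here of the numbers 1 to 5:

$ awk 'BEGIN{ for(counter=1; counter<=5; counter++) sum+=counter ; print sum }'
Sample Output
15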

3. The while Statement

The conventional syntax of the while statement is as follows:

while ( condition ) {
          actions
}

The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true.

Below is a script to illustrate the use of while statement to print the numbers 0-10:

#!/bin/bash
awk ' BEGIN{ counter=0 ;
        while(counter<=10){
              print counter;
              counter+=1 ;
        }
}'

Save the file and make the script executable, then run it:

$ chmod +x test.sh
$ ./test.sh
Sample Output
0
1
2
3
4
5
6
7
8
9
10

4. The do while Statement

It is a modification of the while statement above, with the following underlying syntax:

do {
     actions
}
 while (condition) 

The slight difference is that, under do-while, the Awk commands are executed before the condition is evaluated, so the body always runs at least once. Using the very example under the while statement above, we can illustrate the use of do-while by altering the Awk command in the test.sh script as follows:

#!/bin/bash

awk ' BEGIN{ counter=0 ;  
        do{
            print counter;  
            counter+=1 ;    
        }
        while (counter<=10)   
} 
'

After modifying the script, save the file and exit. Then make the script executable and execute it as follows:

$ chmod +x test.sh
$ ./test.sh
Sample Output
0
1
2
3
4
5
6
7
8
9
10

Conclusion

This is not a comprehensive guide to Awk flow control statements; as I mentioned earlier, there are several other flow control statements in Awk.

Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions.

You can explore the rest of the flow control statements on your own to gain more understanding of the subject matter. Finally, in the next section of the Awk series, we shall move on to writing Awk scripts.

How to Write Scripts Using Awk Programming Language – Part 13

All along from the beginning of the Awk series up to Part 12, we have been writing small Awk commands and programs on the command line and in shell scripts respectively.

However, Awk, just like the shell, is also an interpreted language; therefore, with all that we have walked through from the start of this series, you can now write Awk executable scripts.

Similar to how we write a shell script, Awk scripts start with the line:

#! /path/to/awk/utility -f 

For example on my system, the Awk utility is located in /usr/bin/awk, therefore, I would start an Awk script as follows:

#! /usr/bin/awk -f 

Explaining the line above:

  1. #! – referred to as Shebang, which specifies an interpreter for the instructions in a script
  2. /usr/bin/awk – is the interpreter
  3. -f – interpreter option, used to read a program file

That said, let us now look at some examples of Awk executable scripts, starting with the simple script below. Use your favorite editor to open a new file as follows:

$ vi script.awk

And paste the code below in the file:

#!/usr/bin/awk -f 
BEGIN { printf "%s\n","Writing my first Awk executable script!" }

Save the file and exit, then make the script executable by issuing the command below:

$ chmod +x script.awk

Thereafter, run it:

$ ./script.awk
Sample Output
Writing my first Awk executable script!

A critical programmer out there must be asking, “where are the comments?” Yes, you can also include comments in your Awk scripts. Writing comments in your code is always good programming practice.

It helps other programmers looking through your code to understand what you are trying to achieve in each section of a script or program file.

Therefore, you can include comments in the script above as follows.

#!/usr/bin/awk -f 

#This is how to write a comment in Awk
#using the BEGIN special pattern to print a sentence 

BEGIN { printf "%s\n","Writing my first Awk executable script!" }

Next, we shall look at an example where we read input from a file. We want to search for a system user named aaronkilik in the account file, /etc/passwd, then print the username, user ID and user GID as follows:

Below is the content of our script called second.awk.

#! /usr/bin/awk -f 

#use the BEGIN special pattern to set the FS built-in variable
BEGIN { FS=":" }

#search for username: aaronkilik and print account details 
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }

Save the file and exit, make the script executable and execute it as below:

$ chmod +x second.awk
$ ./second.awk /etc/passwd
Sample Output
Username : aaronkilik User ID : 1000 User GID : 1000

In the last example below, we shall use the do-while statement to print out the numbers 0-10:

Below is the content of our script called do.awk.

#! /usr/bin/awk -f 

#printing from 0-10 using a do while statement 
#do while statement 
BEGIN {
#initialize a counter
x=0

do {
    print x;
    x+=1;
}
while(x<=10)
}

After saving the file, make the script executable as we have done before. Afterwards, run it:

$ chmod +x do.awk
$ ./do.awk
Sample Output
0
1
2
3
4
5
6
7
8
9
10

Summary

We have come to the end of this interesting Awk series. I hope you have learned a lot from all 13 parts, as an introduction to the Awk programming language.

As I mentioned from the beginning, Awk is a complete text processing language; for that reason, you can learn about other aspects of the Awk programming language, such as environment variables, arrays, functions (built-in and user-defined) and beyond.

There are additional parts of Awk programming to learn and master, so below I have provided links to important online resources that you can use to expand your Awk programming skills. These are not necessarily all you need; you can also look out for useful Awk programming books.

Reference Links: The GNU Awk User’s Guide and AWK Language Programming

For any thoughts you wish to share or questions, use the comment form below.

Source

BEGINNER’S GUIDE FOR LINUX – Start Learning Linux in Minutes

Welcome to this exclusive edition of the “BEGINNER’S GUIDE FOR LINUX” by TecMint. This course module is specially designed and compiled for beginners who want to make their way into the Linux learning process and excel in today’s IT organizations. This courseware is created as per the requirements of an industrial environment, with a complete introduction to Linux, which will help you build great success with Linux.

We have given special priority to Linux commands and switches, scripting, services and applications, access control, process control, user management, database management, web services, etc. Even though the Linux command line provides thousands of commands, you only need to learn a few basic ones to perform day-to-day Linux tasks.

Prerequisites:

All students must have a little understanding of computers and passion to learn new technology.

Distributions:

This courseware is presently supported on the latest releases of Linux distributions like Red Hat Enterprise Linux, CentOS, Debian, Ubuntu, etc.

Course Objectives

Section 1: Introduction To Linux and OS Installations

  1. Linux Boot Process
  2. Linux File System Hierarchy
  3. Installation of CentOS 7
  4. Installation of Various Linux Distributions including Debian, RHEL, Ubuntu, Fedora, etc
  5. Installation of CentOS on VirtualBox
  6. Dual Boot Installation of Windows and Linux

Section 2: Essentials of Basic Linux Commands

  1. List Files and Directories Using ‘ls’ Command
  2. Switch Between Linux Directories and Paths with ‘cd’ Command
  3. How to Use ‘dir’ Command with Different Options in Linux
  4. Find Out Present Working Directory Using ‘pwd’ Command
  5. Create Files using ‘touch’ Command
  6. Copy Files and Directories using ‘cp’ Command
  7. View File Content with ‘cat’ Command
  8. Check File System Disk Space Usage with ‘df’ Command
  9. Check Files and Directories Disk Usage with ‘du’ Command
  10. Find Files and Directories using find Command
  11. Find File Pattern Searches using grep Command

Section 3: Essentials of Advance Linux Commands

  1. Quirky ‘ls’ Commands Every Linux User Must Know
  2. Manage Files Effectively using head, tail and cat Commands in Linux
  3. Count Number of Lines, Words, Characters in File using ‘wc’ Command
  4. Basic ‘sort’ Commands to Sort Files in Linux
  5. Advance ‘sort’ Commands to Sort Files in Linux
  6. Pydf an Alternative “df” Command to Check Disk Usage
  7. Check Linux Ram Usage with ‘free’ Command
  8. Advance ‘rename’ Command to Rename Files and Directories
  9. Print Text/String in Terminal using ‘echo’ Command

Section 4: Some More Advance Linux Commands

  1. Switching From Windows to Nix – 20 Useful Commands for Newbies – Part 1
  2. 20 Advanced Commands for Middle Level Linux Users – Part 2
  3. 20 Advanced Commands for Linux Experts – Part 3
  4. 20 Funny Commands of Linux or Linux is Fun in Terminal – Part 1
  5. 6 Interesting Funny Commands of Linux (Fun in Terminal) – Part 2
  6. 51 Useful Lesser Known Commands for Linux Users
  7. 10 Most Dangerous Commands – You Should Never Execute on Linux

Section 5: User, Group and File Permissions Management

  1. How to Add or Create New Users using ‘useradd’ Command
  2. How to Modify or Change Users Attributes using ‘usermod’ Command
  3. Managing Users & Groups, File Permissions & Attributes – Advance Level
  4. Difference Between su and sudo – How to Configure sudo – Advance Level
  5. How to Monitor User Activity with psacct or acct Tools

Section 6: Linux Package Management

  1. Yum Package Management – CentOS, RHEL and Fedora
  2. RPM Package Management – CentOS, RHEL and Fedora
  3. APT-GET and APT-CACHE Package Management – Debian, Ubuntu
  4. DPKG Package Management – Debian, Ubuntu
  5. Zypper Package Management – Suse and OpenSuse
  6. Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper – Advance Level
  7. 27 ‘DNF’ (Fork of Yum) Commands for RPM Package Management – New Update

Section 7: System Monitoring & Cron Scheduling

  1. Linux Process Monitoring with top Command
  2. Linux Process Management with Kill, Pkill and Killall Commands
  3. Linux File Process Management with lsof Commands
  4. Linux Job Scheduling with Cron
  5. 20 Command Line Tools to Monitor Linux Performance – Part 1
  6. 13 Linux Performance Monitoring Tools – Part 2
  7. Nagios Monitoring Tool for Linux – Advance Level
  8. Zabbix Monitoring Tool for Linux – Advance Level
  9. Shell Script to Monitor Network, Disk Usage, Uptime, Load Average and RAM – New Update

Section 8: Linux Archiving/Compression, Backup/Sync and Recovery

Archiving/Compression Files
  1. How to Archive/Compress Linux Files and Directories using ‘tar’ Command
  2. How to Open, Extract and Create RAR Files in Linux
  3. 5 Tools to Archive/Compress Files in Linux
  4. How to Archive/Compress Files and Setting File Attributes – Advance Level
Backup/Sync Files and Directories in Linux
  1. How to Copy/Synchronize Files and Directories Locally/Remotely with rsync
  2. How to Transfer Files/Folders in Linux using scp
  3. Rsnapshot (Rsync Based) – A Local/Remote File System Backup Tool
  4. Sync Two Apache Web Servers/Websites Using Rsync – Advance Level
Backup/Recovery Linux Filesystems
  1. Backup and Restore Linux Systems using Redo Backup Tool
  2. How to Clone/Backup Linux Systems Using – Mondo Rescue Disaster Recovery Tool
  3. How to Recover Deleted Files/Folders using ‘Scalpel’ Tool
  4. 8 “Disk Cloning/Backup” Softwares for Linux Servers

Section 9: Linux File System / Network Storage Management

  1. What is Ext2, Ext3 & Ext4 and How to Create and Convert Linux File Systems
  2. Understanding Linux File System Types
  3. Linux File System Creation and Configurations – Advance Level
  4. Setting Up Standard Linux File Systems and Configuring NFSv4 Server – Advance Level
  5. How to Mount/Unmount Local and Network (Samba & NFS) Filesystems – Advance Level
  6. How to Create and Manage Btrfs File System in Linux – Advance Level
  7. Introduction to GlusterFS (File System) and Installation – Advance Level

Section 10: Linux LVM Management

  1. Setup Flexible Disk Storage with Logical Volume Management
  2. How to Extend/Reduce LVM’s (Logical Volume Management)
  3. How to Take Snapshot/Restore LVM’s
  4. Setup Thin Provisioning Volumes in LVM
  5. Manage Multiple LVM Disks using Striping I/O
  6. Migrating LVM Partitions to New Logical Volume

Section 11: Linux RAID Management

  1. Introduction to RAID, Concepts of RAID and RAID Levels
  2. Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’
  3. Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux
  4. Creating RAID 5 (Striping with Distributed Parity) in Linux
  5. Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux
  6. Setting Up RAID 10 or 1+0 (Nested) in Linux
  7. Growing an Existing RAID Array and Removing Failed Disks in Linux
  8. Assembling Partitions as RAID Devices – Creating & Managing System Backups

Section 12: Manage Services in Linux

  1. Configure Linux Services to Start and Stop Automatically
  2. How to Stop and Disable Unwanted Services in Linux
  3. How to Manage ‘Systemd’ Services Using Systemctl in Linux
  4. Managing System Startup Process and Services in Linux

Section 13: Linux System Security and Firewall

Linux Security and Tools
  1. 25 Hardening Security Tips for Linux Servers
  2. 5 Best Practices to Secure and Protect SSH Server
  3. How to Password Protect Grub in Linux
  4. Protect SSH Logins with SSH & MOTD Banner Messages
  5. How to Audit Linux Systems using Lynis Tool
  6. Secure Files/Directories using ACLs (Access Control Lists) in Linux
  7. How to Audit Network Performance, Security, and Troubleshooting in Linux
  8. Mandatory Access Control Essentials with SELinux – New Update
Linux Firewall and Tools
  1. Basic Guide on IPTables (Linux Firewall) Tips / Commands
  2. How To Setup an Iptables Firewall in Linux
  3. How to Configure ‘FirewallD’ in Linux
  4. Useful ‘FirewallD’ Rules to Configure and Manage Firewall in Linux
  5. How to Install and Configure UFW – An Un-complicated FireWall
  6. Shorewall – A High-Level Firewall for Configuring Linux Servers
  7. Install ConfigServer Security & Firewall (CSF) in Linux
  8. How to Install ‘IPFire’ Free Firewall Linux Distribution
  9. How to Install and Configure pfSense 2.1.5 (Firewall/Router) in Linux
  10. 10 Useful Open Source Security Firewalls for Linux Systems

Section 14: LAMP (Linux, Apache, MySQL/MariaDB and PHP) Setup’s

  1. Installing LAMP in RHEL/CentOS 6.0
  2. Installing LAMP in RHEL/CentOS 7.0
  3. Ubuntu 14.04 Server Installation Guide and Setup LAMP
  4. Installing LAMP in Arch Linux
  5. Setting Up LAMP in Ubuntu Server 14.10
  6. Installing LAMP in Gentoo Linux
  7. Creating Your Own Webserver and Hosting A Website from Your Linux Box
  8. Apache Virtual Hosting: IP Based and Name Based Virtual Hosts in Linux
  9. How to Setup Standalone Apache Server with Name-Based Virtual Hosting with SSL Certificate
  10. Creating Apache Virtual Hosts with Enable/Disable Vhosts Options in RHEL/CentOS 7.0
  11. Creating Virtual Hosts, Generate SSL Certificates & Keys and Enable CGI Gateway in Gentoo Linux
  12. Protect Apache Against Brute Force or DDoS Attacks Using Mod_Security and Mod_evasive Modules
  13. 13 Apache Web Server Security and Hardening Tips
  14. How to Sync Two Apache Web Servers/Websites Using Rsync
  15. How to Install ‘Varnish’ (HTTP Accelerator) and Perform Load Testing Using Apache Benchmark
  16. Installing and Configuring LAMP/LEMP Stack on Debian 8 Jessie – New Update

Section 15: LEMP (Linux, Nginx, MySQL/MariaDB and PHP) Setup’s

  1. Install LEMP in Linux
  2. Installing FcgiWrap and Enabling Perl, Ruby and Bash Dynamic Languages on Gentoo LEMP
  3. Installing LEMP in Gentoo Linux
  4. Installing LEMP in Arch Linux

Section 16: MySQL/MariaDB Administration

  1. MySQL Basic Database Administration Commands
  2. 20 MySQL (Mysqladmin) Commands for Database Administration in Linux
  3. MySQL Backup and Restore Commands for Database Administration
  4. How to Setup MySQL (Master-Slave) Replication
  5. Mytop (MySQL Database Monitoring) in Linux
  6. Install Mtop (MySQL Database Server Monitoring) in Linux
  7. https://www.tecmint.com/mysql-performance-monitoring/

Section 17: Basic Shell Scripting

  1. Understand Linux Shell and Basic Shell Scripting Language Tips – Part I
  2. 5 Shell Scripts for Linux Newbies to Learn Shell Programming – Part II
  3. Sailing Through The World of Linux BASH Scripting – Part III
  4. Mathematical Aspect of Linux Shell Programming – Part IV
  5. Calculating Mathematical Expressions in Shell Scripting Language – Part V
  6. Understanding and Writing functions in Shell Scripts – Part VI
  7. Deeper into Function Complexities with Shell Scripting – Part VII
  8. Working with Arrays in Linux Shell Scripting – Part 8
  9. An Insight of Linux “Variables” in Shell Scripting Language – Part 9
  10. Understanding and Writing ‘Linux Variables’ in Shell Scripting – Part 10
  11. Nested Variable Substitution and Predefined BASH Variables in Linux – Part 11

Section 18: Linux Interview Questions

  1. 15 Interview Questions on Linux “ls” Command – Part 1
  2. 10 Useful ‘ls’ Command Interview Questions – Part 2
  3. Basic Linux Interview Questions and Answers – Part 1
  4. Basic Linux Interview Questions and Answers – Part 2
  5. Linux Interview Questions and Answers for Linux Beginners – Part 3
  6. Core Linux Interview Questions and Answers
  7. Useful Random Linux Interview Questions and Answers
  8. Interview Questions and Answers on Various Commands in Linux
  9. Useful Interview Questions on Linux Services and Daemons
  10. Basic MySQL Interview Questions for Database Administrators
  11. MySQL Database Interview Questions for Beginners and Intermediates
  12. Advance MySQL Database “Interview Questions and Answers” for Linux Users
  13. Apache Interview Questions for Beginners and Intermediates
  14. VsFTP Interview Questions and Answers – Part 1
  15. Advance VsFTP Interview Questions and Answers – Part 2
  16. Useful SSH (Secure Shell) Interview Questions and Answers
  17. Useful “Squid Proxy Server” Interview Questions and Answers in Linux
  18. Linux Firewall Iptables Interview Questions – New Update
  19. Basic Interview Questions on Linux Networking – Part 1 – New Update

Section 19: Shell Scripting Interview Questions

  1. Useful ‘Interview Questions and Answers’ on Linux Shell Scripting
  2. Practical Interview Questions and Answers on Linux Shell Scripting

Section 20: Free Linux Books for Learning

  1. Complete Linux Command Line Cheat Sheet
  2. The GNU/Linux Advanced Administration Guide
  3. Securing & Optimizing Linux Servers
  4. Linux Patch Management: Keeping Linux Up To Date
  5. Introduction to Linux – A Hands on Guide
  6. Understanding the Linux® Virtual Memory Manager
  7. Linux Bible – Packed with Updates and Exercises
  8. A Newbie’s Getting Started Guide to Linux
  9. Linux from Scratch – Create Your Own Linux OS
  10. Linux Shell Scripting Cookbook, Second Edition
  11. Securing & Optimizing Linux: The Hacking Solution
  12. User Mode Linux – Understanding and Administration
  13. Bash Guide for Linux Beginners – New Update

Section 21: Linux Certifications – Preparation Guides

  1. RHCSA (Red Hat Certified System Administrator) Certification Guide
  2. LFCS (Linux Foundation Certified Sysadmin) Certification Guide
  3. LFCE (Linux Foundation Certified Engineer) Certification Guide

Source
