How to Delete HUGE (100-200GB) Files in Linux

Usually, to delete/remove a file from the Linux terminal, we use the rm command (delete files), the shred command (securely delete a file), the wipe command (securely erase a file) or the secure-deletion toolkit (a collection of secure file deletion tools).

We can use any of the above utilities to deal with relatively small files. What if we want to delete/remove a huge file/directory, say of about 100-200GB? This may not be as easy as it seems, in terms of the time taken to remove the file (I/O scheduling) as well as the amount of RAM consumed while carrying out the operation.

In this tutorial, we will explain how to efficiently and reliably delete huge files/directories in Linux.

Suggested Read: 5 Ways to Empty or Delete a Large File Content in Linux

The main aim here is to use a technique that will not slow down the system while removing a huge file, and that keeps I/O at a reasonable level. We can achieve this using the ionice command.

Deleting HUGE (200GB) Files in Linux Using ionice Command

ionice is a useful program which sets or gets the I/O scheduling class and priority for another program. If no arguments or just -p is given, ionice will query the current I/O scheduling class and priority for that process.

If we give a command name such as the rm command, ionice will run that command with the given arguments. To specify the process ID of a running process for which to get or set the scheduling parameters, run this:

# ionice -p PID

To specify the name or number of the scheduling class to use (0 for none, 1 for real time, 2 for best-effort, 3 for idle), pass the -c option as in the commands below.

This means that rm will belong to the idle I/O class and only use I/O when no other process needs it:

# ionice -c 3 rm /var/log/syslog
# ionice -c 3 rm -rf /var/log/apache

If there won’t be much idle time on the system, then we may want to use the best-effort scheduling class and set a low priority like this:

# ionice -c 2 -n 6 rm /var/log/syslog
# ionice -c 2 -n 6 rm -rf /var/log/apache
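
If you want to confirm which class a long-running deletion is actually using, you can start it in the background and query it with -p (a quick sanity check; $! is just the shell's PID of the last background job, and ionice -p prints the class, e.g. idle):

# ionice -c 3 rm -rf /var/log/apache &
# ionice -p $!
idle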

Note: To delete huge files using a secure method, we may use shred, wipe and the various tools in the secure-deletion toolkit mentioned earlier, instead of the rm command.

Suggested Read: 3 Ways to Permanently and Securely Delete Files/Directories’ in Linux

For more info, look through the ionice man page:

# man ionice 

That’s it for now! What other methods do you have in mind for the above purpose? Use the comment section below to share with us.


Configuration Management Tool Chef Announces Move to 100% Open Source


In case you did not know, Chef is one of the best and most popular automation tools out there.

Recently, it announced some new changes to its business model and software. We know that everyone here believes in the power of open source – and Chef supports that idea too. So, they have now decided to go 100% open source.

They will include all of their software under the Apache 2.0 license. You can use, modify, distribute and monetize their source code as long as you respect the trademark policy.

In addition to this, they’ve also introduced a new service for enterprises, we’ll take a look at that as you read on.

Chef going 100% open source

Why 100% Open Source?

The examples of other commercial open-source business models encouraged Chef to take this decision. In their blog post, they mentioned:

We aren’t making this change lightly. Over the years we have experimented with and learned from a variety of different open source, community and commercial models, in search of the right balance. We believe that this change, and the way we have made it, best aligns the objectives of our communities with our own business objectives.

Barry Crist, CEO of Chef

So, they want people to collaborate on and utilize their source code without any restrictions. This is great news for people who want to experiment with their ideas in a non-commercial application. And, as for the enterprises working with Chef – the open source model will help them get the best out of Chef’s services.

Barry Crist (CEO of Chef) also mentioned:

This means that all of the software that we produce will be created in public repos. It also means that we will open up more of our product development process to the public, including roadmaps, triage and other aspects of our product design and planning process.

New Launch: Chef Enterprise Automation Stack

To streamline the way enterprises deploy and update their software, they have introduced a new ‘Chef Enterprise Automation Stack’. It will be specifically tailored for enterprises relying on Chef.

However, it will also be available for free – for non-commercial usage or experimentation.

To describe it, Barry wrote:

Chef Enterprise Automation Stack is anchored by Chef Workstation, the quickest way to get a development environment up and running, and Chef Automate as the enterprise observability and management console for the system. Also included is Chef Infra (formerly just Chef) for infrastructure automation, Chef InSpec for security and compliance automation and Chef Habitat for application deployment and orchestration automation.

So, you get more perks now if you purchase a Chef subscription.

Wrapping Up

With these major changes, Chef definitely seems to offer more streamlined services keeping in mind the future of their software services and the enterprises relying on it.

What do you think about it? Let us know your thoughts in the comments below.


How to Install Elixir and Phoenix Framework on Ubuntu 16.04

This tutorial will show you how to install Elixir and the Phoenix framework on a Vultr Ubuntu 16.04 server instance for development purposes.

Prerequisites

  • A new Vultr Ubuntu 16.04 server instance
  • Logged in as a non-root sudo user.

Update the system:

sudo apt-get update

Install Erlang

Install Erlang with the following commands:

cd ~
wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb 
sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
sudo apt-get install esl-erlang

You can verify the installation:

erl

This will take you to the Erlang shell with the following output:

Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe]

Eshell V10.1  (abort with ^G)
1>    

Press CTRL + C twice to exit the Erlang shell.

Install Elixir

Install Elixir with apt-get:

sudo apt-get install elixir

Now you can verify the Elixir installation:

elixir -v

This will show the following output:

Erlang/OTP 21 [erts-10.1] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe]

Elixir 1.7.3 (compiled with Erlang/OTP 20)

Now you have Elixir 1.7.3 installed on your system.

Install Phoenix

If we have just installed Elixir for the first time, we will need to install the Hex package manager as well. Hex is necessary to get a Phoenix app running, and to install any extra dependencies we might need along the way.

Type this command to install Hex:

mix local.hex

Now we can proceed to install Phoenix:

mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez
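
If you want to confirm that the Phoenix archive was installed, you can list the locally installed archives (the mix archive task prints them):

mix archive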

Install Node.js

Phoenix uses brunch.io to compile static assets (JavaScript, CSS and more), so you will need to install Node.js.

The recommended way to install Node.js is via nvm (node version manager).

To install nvm we run this command:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash

To find out the versions of Node.js that are available for installation, you can type the following:

nvm ls-remote

This will output:

Output
...
     v8.8.1
     v8.9.0   (LTS: Carbon)
     v8.9.1   (LTS: Carbon)
     v8.9.2   (LTS: Carbon)
     v8.9.3   (LTS: Carbon)
     v8.9.4   (LTS: Carbon)
    v8.10.0   (LTS: Carbon)
    v8.11.0   (LTS: Carbon)
    v8.11.1   (LTS: Carbon)
    v8.11.2   (LTS: Carbon)
    v8.11.3   (LTS: Carbon)
    v8.11.4   (LTS: Carbon)
->  v8.12.0   (Latest LTS: Carbon)      
...

Install the version you would like with the following command:

nvm install 8.12.0

Note: If you would like to use a different version, replace 8.12.0 with the version you would like.

Tell nvm to use the version we just downloaded:

nvm use 8.12.0

Verify that Node.js was successfully installed:

node -v

Install PostgreSQL

You can install PostgreSQL easily using the apt packaging system.

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib

Open the PostgreSQL shell:

sudo -u postgres psql

Change the postgres password to a secure password:

\password postgres    

After successfully changing the password, you can exit the PostgreSQL shell:

\q

Restart the PostgreSQL service:

sudo systemctl restart postgresql.service
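
To confirm that the server is accepting connections again, you can run a quick one-off query (just a sanity check):

sudo -u postgres psql -c "SELECT version();"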

Install inotify-tools

This is a Linux-only filesystem watcher that Phoenix uses for live code reloading:

sudo apt-get install inotify-tools

Create a Phoenix application

Create a new application:

mix phx.new ~/phoenix_project_test

If the command returns the following error:

** (Mix) The task "phx.new" could not be found

You can fix it with the following command:

mix archive.install https://raw.githubusercontent.com/phoenixframework/archives/master/phx_new.ez

Now rerun the command to create a test Phoenix app:

mix phx.new ~/phoenix_project_test

Change the PostgreSQL password in the config file to the password you set in the previous step:

nano ~/phoenix_project_test/config/dev.exs

The application is now created and configured. Move to the application folder, create the database and start the server:

cd ~/phoenix_project_test
mix ecto.create
mix phx.server

Now the Phoenix application is up and running at port 4000.
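
To verify from the server itself, you can request the page headers with curl (assuming curl is installed; an HTTP 200 response means Phoenix is serving):

curl -I http://localhost:4000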


AWK (Alfred V. Aho – Peter J. Weinberger – Brian W. Kernighan)

Ebook: Introducing the Awk Getting Started Guide for Beginners

As a Linux system administrator, you will often get into situations where you need to manipulate and reformat the output of different commands, or simply display part of an output by filtering out a few lines. This process can be referred to as text filtering, using a collection of Linux programs known as filters.

There are several Linux utilities for text filtering, and some of the well-known filters include head, tail, grep, tr, fmt, sort, uniq, pr and more advanced and powerful tools such as Awk and Sed.

Introducing the Awk Getting Started Guide for Beginners

Unlike Sed, Awk is more than just a text filtering tool; it is a comprehensive and flexible text pattern scanning and processing language.

Awk is a strongly recommended text filtering tool for Linux; it can be utilized directly from the command line together with several other commands, within shell scripts or in independent Awk scripts. It searches through input data or one or more files for user-defined patterns and modifies the input or file(s) based on certain conditions.

Since Awk is a sophisticated programming language, learning it requires a lot of time and dedication, just as with any other programming language out there. However, mastering a few basic concepts of this powerful text filtering language can enable you to understand how it actually works and set you on track to learn more advanced Awk programming techniques.

After carefully and critically revising our 13 articles in the Awk programming series, with high consideration of the vital feedback from our followers and readers over the last 5 months, we have managed to organize the Introduction to Awk programming language eBook.

Therefore, if you are ready to start learning Awk programming language from the basic concepts, with simple and easy-to-understand, well explained examples, then you may consider reading this concise and precise eBook.

What’s Inside this eBook?

This book contains 13 chapters with a total of 41 pages, covering basic and advanced Awk usage with practical examples:

  1. Chapter 1: Awk Regular Expressions to Filter Text in Files
  2. Chapter 2: Use Awk to Print Fields and Columns in File
  3. Chapter 3: Use Awk to Filter Text Using Pattern Specific Actions
  4. Chapter 4: Learn Comparison Operators with Awk
  5. Chapter 5: Learn Compound Expressions with Awk
  6. Chapter 6: Learn ‘next’ Command with Awk
  7. Chapter 7: Read Awk Input from STDIN in Linux
  8. Chapter 8: Learn Awk Variables, Numeric Expressions and Assignment Operators
  9. Chapter 9: Learn Awk Special Patterns ‘BEGIN and END’
  10. Chapter 10: Learn Awk Built-in Variables
  11. Chapter 11: Learn Awk to Use Shell Variables
  12. Chapter 12: Learn Flow Control Statements in Awk
  13. Chapter 13: Write Scripts Using Awk Programming Language

How to Use Awk and Regular Expressions to Filter Text or String in Files

When we run certain commands in Unix/Linux to read or edit text from a string or file, we often try to filter the output down to a given section of interest. This is where using regular expressions comes in handy.

Read Also: 10 Useful Linux Chaining Operators with Practical Examples

What are Regular Expressions?

A regular expression can be defined as a string that represents several sequences of characters. One of the most important things about regular expressions is that they allow you to filter the output of a command or file, edit a section of a text or configuration file, and so on.

Features of Regular Expression

Regular expressions are made of:

  1. Ordinary characters such as space, underscore (_), A-Z, a-z, 0-9.
  2. Meta characters, which carry special meanings; they include:
    1. (.) it matches any single character except a newline.
    2. (*) it matches zero or more occurrences of the character immediately preceding it.
    3. [ character(s) ] it matches any one of the characters specified in character(s); one can also use a hyphen (-) to mean a range of characters, such as [a-f] or [1-5], and so on.
    4. ^ it matches the beginning of a line in a file.
    5. $ it matches the end of a line in a file.
    6. \ it is an escape character.

In order to filter text, one has to use a text filtering tool such as awk. You can think of awk as a programming language of its own. But for the scope of this guide to using awk, we shall cover it as a simple command line filtering tool.

The general syntax of awk is:

# awk 'script' filename

Where 'script' is a set of commands that are understood by awk and executed on the file, filename.

Awk works by reading a given line in the file, making a copy of the line and then executing the script on that line. This is repeated for all the lines in the file.

The 'script' is in the form '/pattern/ action' where pattern is a regular expression and the action is what awk will do when it finds the given pattern in a line.

How to Use Awk Filtering Tool in Linux

In the following examples, we shall focus on the meta characters that we discussed above under the features of regular expressions.

A simple example of using awk:

The example below prints all the lines in the file /etc/hosts since no pattern is given.

# awk '//{print}' /etc/hosts

Awk Prints all Lines in a File

Use Awk with Pattern:

In the example below, the pattern localhost has been given, so awk will match lines containing localhost in the /etc/hosts file.

# awk '/localhost/{print}' /etc/hosts 

Awk Print Given Matching Line in a File

Using Awk with (.) wild card in a Pattern

The (.) will match strings containing loc, localhost, localnet in the example below.

That is to say, l followed by any single character followed by c.

# awk '/l.c/{print}' /etc/hosts

Use Awk to Print Matching Strings in a File

Using Awk with (*) Character in a Pattern

It will match strings containing localhost, localnet, lines, capable, as in the example below:

# awk '/l*c/{print}' /etc/hosts

Use Awk to Match Strings in File

You will also realize that (*) tries to get you the longest match it can detect. Since (*) applies to the character immediately preceding it, the combination (.*) is what means "any run of characters", and it is greedy.

Let us look at a case that demonstrates this. Take the regular expression t.*t, which means match strings that start with the letter t and end with t in the line below:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint. 

You will get the following possibilities when you use the pattern /t.*t/:

this is t
this is tecmint
this is tecmint, where you get t
this is tecmint, where you get the best good t
this is tecmint, where you get the best good tutorials, how t
this is tecmint, where you get the best good tutorials, how tos, guides, t
this is tecmint, where you get the best good tutorials, how tos, guides, tecmint

And the greedy (.*) in /t.*t/ makes awk choose the last option:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint
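
You can confirm this greedy behavior yourself with awk's built-in match() function, which records where a match starts (RSTART) and how long it is (RLENGTH); this is just a sketch, piping the sample line in via echo:

# echo "this is tecmint, where you get the best good tutorials, how to's, guides, tecmint." | awk 'match($0, /t.*t/) { print substr($0, RSTART, RLENGTH) }'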

Using Awk with set [ character(s) ]

Take for example the set [al1], here awk will match all strings containing character a or l or 1 in a line in the file /etc/hosts.

# awk '/[al1]/{print}' /etc/hosts

Use Awk to Print Matching Character in File

The next example matches strings starting with either K or k followed by T:

# awk '/[Kk]T/{print}' /etc/hosts 

Use Awk to Print Matched String in File

Specifying Characters in a Range

Understanding character ranges with awk:

  1. [0-9] means a single number
  2. [a-z] means match a single lower case letter
  3. [A-Z] means match a single upper case letter
  4. [a-zA-Z] means match a single letter
  5. [a-zA-Z 0-9] means match a single letter or number

Let's look at an example below:

# awk '/[0-9]/{print}' /etc/hosts 

Use Awk To Print Matching Numbers in File

All the lines from the file /etc/hosts contain at least one number [0-9] in the above example.

Use Awk with (^) Meta Character

It matches all the lines that start with the pattern provided as in the example below:

# awk '/^fe/{print}' /etc/hosts
# awk '/^ff/{print}' /etc/hosts

Use Awk to Print All Matching Lines with Pattern

Use Awk with ($) Meta Character

It matches all the lines that end with the pattern provided:

# awk '/ab$/{print}' /etc/hosts
# awk '/ost$/{print}' /etc/hosts
# awk '/rs$/{print}' /etc/hosts

Use Awk to Print Given Pattern String

Use Awk with (\) Escape Character

It allows you to take the character following it as a literal, that is to say, consider it just as it is.

In the example below, the first command prints out all lines in the file; the second command prints nothing, because we want to match a line that contains $25.00 but no escape character is used.

The third command is correct, since an escape character has been used to read $ as it is.

# awk '//{print}' deals.txt
# awk '/$25.00/{print}' deals.txt
# awk '/\$25.00/{print}' deals.txt

Use Awk with Escape Character

Summary

That is not all for the awk command-line filtering tool; the examples above are the basic operations of awk. In the next parts, we shall advance to the more complex features of awk. Thanks for reading through, and for any additions or clarifications, post a comment in the comments section.

How to Use Awk to Print Fields and Columns in File

In this part of our Linux Awk command series, we shall have a look at one of the most important features of Awk, which is field editing.

It is good to know that Awk automatically divides input lines provided to it into fields, and a field can be defined as a set of characters that are separated from other fields by an internal field separator.

Awk Print Fields and Columns

If you are familiar with Unix/Linux or bash shell programming, then you should know what the internal field separator (IFS) variable is. The default field separators in Awk are the tab and the space.

This is how the idea of field separation works in Awk: when it encounters an input line, according to the IFS defined, the first set of characters is field one, which is accessed using $1, the second set of characters is field two, which is accessed using $2, the third set of characters is field three, which is accessed using $3 and so forth till the last set of character(s).

To understand this Awk field editing better, let us take a look at the examples below:

Example 1: I have created a text file called tecmintinfo.txt.

# vi tecmintinfo.txt
# cat tecmintinfo.txt

Create File in Linux

Then from the command line, I try to print the first, second and third fields from the file tecmintinfo.txt using the command below:

$ awk '//{print $1 $2 $3 }' tecmintinfo.txt

TecMint.comisthe

From the output above, you can see that the characters from the first three fields are printed based on the IFS defined, which is space:

  1. Field one which is “TecMint.com” is accessed using $1.
  2. Field two which is “is” is accessed using $2.
  3. Field three which is “the” is accessed using $3.

As you may have noticed in the printed output, the field values are not separated; this is how print behaves by default.

To view the output clearly with a space between the field values, you need to add the (,) operator as follows:

$ awk '//{print $1, $2, $3; }' tecmintinfo.txt

TecMint.com is the

One important thing to note and always remember is that the use of ($) in Awk is different from its use in shell scripting.

Under shell scripting, ($) is used to access the value of variables, while in Awk, ($) is used only when accessing the contents of a field, not when accessing the value of a variable.

Example 2: Let us take a look at another example using a file which contains multiple lines, called my_shopping.txt.

No	Item_Name		Unit_Price	Quantity	Price
1	Mouse			#20,000		   1		#20,000
2 	Monitor			#500,000	   1		#500,000
3	RAM_Chips		#150,000	   2		#300,000
4	Ethernet_Cables	        #30,000		   4		#120,000		

Say you wanted to print only the Item_Name and Unit_Price of each item on the shopping list; you would need to run the command below:

$ awk '//{print $2, $3 }' my_shopping.txt 

Item_Name Unit_Price
Mouse #20,000
Monitor #500,000
RAM_Chips #150,000
Ethernet_Cables #30,000

Awk also has a printf command that helps you format your output in a nice way, as you can see that the above output is not clear enough.

Using printf to format output of the Item_Name and Unit_Price:

$ awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt 

Item_Name  Unit_Price
Mouse      #20,000
Monitor    #500,000
RAM_Chips  #150,000
Ethernet_Cables #30,000
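
Because Ethernet_Cables is longer than the 10-character width used above, it still breaks the alignment; widening the field (to 17 characters here, purely as an illustration) lines every row up:

$ awk '//{printf "%-17s %s\n",$2, $3 }' my_shopping.txt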

Summary

Field editing is very important when using Awk to filter text or strings, it helps you get particular data in columns in a list. And always remember that the use of ($) operator in Awk is different from that in shell scripting.

I hope the article was helpful to you and for any additional information required or questions, you can post a comment in the comment section.

How to Use Awk to Filter Text or Strings Using Pattern Specific Actions

In the third part of the Awk command series, we shall take a look at filtering text or strings based on specific patterns that a user can define.

Sometimes, when filtering text, you want to flag certain lines from an input file or lines of strings based on a given condition, or using a specific pattern that can be matched. Doing this with Awk is very easy; it is one of the great features of Awk that you will find helpful.

Let us take a look at the example below; say you have a shopping list of food items that you want to buy, called food_prices.list. It has the following list of food items and their prices.

$ cat food_prices.list 
No	Item_Name		Quantity	Price
1	Mangoes			   10		$2.45
2	Apples			   20		$1.50
3	Bananas			   5		$0.90
4	Pineapples		   10		$3.46
5	Oranges			   10		$0.78
6	Tomatoes		   5		$0.55
7	Onions			   5            $0.45

And then, you want to put a (*) sign on food items whose price is greater than $2; this can be done by running the following command:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list

Print Items Whose Price is Greater Than $2

From the output above, you can see that there is a (*) sign at the end of the lines for the food items mangoes and pineapples; if you check their prices, they are above $2.

In this example, we have used two patterns:

  1. the first: / *\$[2-9]\.[0-9][0-9] */ gets the lines that have a food item price greater than $2, and
  2. the second: / *\$[0-1]\.[0-9][0-9] */ looks for lines with a food item price less than $2.

This is what happens: there are four fields in the file; when pattern one encounters a line with a food item priced greater than $2, it prints all four fields and a (*) sign at the end of the line as a flag.

The second pattern simply prints the other lines with food price less than $2 as they appear in the input file, food_prices.list.

This way you can use pattern-specific actions to flag food items that are priced above $2, though there is a problem with the output: the lines that have the (*) sign are not formatted like the rest of the lines, making the output unclear.

We saw the same problem in Part 2 of the awk series, but we can solve it in two ways:

1. Using the printf command, which is a long and tedious way, as in the command below:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' food_prices.list 

Filter and Print Items Using Awk and Printf

2. Using the $0 field. Awk uses $0 to store the whole input line. This is handy for solving the problem above, and it is simple and fast, as follows:

$ awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list 

Filter and Print Items Using Awk and Variable

Conclusion

That’s it for now; these are simple ways of filtering text using pattern-specific actions that can help flag lines of text or strings in a file using the Awk command.

Hope you find this article helpful, and remember to read the next part of the series, which will focus on using comparison operators with the awk tool.

How to Use Comparison Operators with Awk in Linux – Part 4

When dealing with numerical or string values in a line of text, filtering text or strings using comparison operators comes in handy for Awk command users.

In this part of the Awk series, we shall take a look at how you can filter text or strings using comparison operators. If you are a programmer, then you must already be familiar with comparison operators; but for those who are not, let me explain in the section below.

What are Comparison operators in Awk?

Comparison operators in Awk are used to compare the value of numbers or strings and they include the following:

  1. > – greater than
  2. < – less than
  3. >= – greater than or equal to
  4. <= – less than or equal to
  5. == – equal to
  6. != – not equal to
  7. some_value ~ /pattern/ – true if some_value matches pattern
  8. some_value !~ /pattern/ – true if some_value does not match pattern

Now that we have looked at the various comparison operators in Awk, let us understand them better using an example.

In this example, we have a file named food_list.txt, which is a shopping list for different food items, and I would like to flag food items whose quantity is less than or equal to 30 by adding (**) at the end of each line.

File – food_list.txt
No      Item_Name               Quantity        Price
1       Mangoes                    45           $3.45
2       Apples                     25           $2.45
3       Pineapples                 5            $4.45
4       Tomatoes                   25           $3.45
5       Onions                     15           $1.45
6       Bananas                    30           $3.45

The general syntax for using comparison operators in Awk is:

# expression { actions; }

To achieve the above goal, I will have to run the command below:

# awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' food_list.txt

No	Item_Name		Quantity	Price
1	Mangoes	      		   45		$3.45
2	Apples			   25		$2.45	**
3	Pineapples		   5		$4.45	**
4	Tomatoes		   25		$3.45	**
5	Onions			   15           $1.45	**
6	Bananas			   30           $3.45	**

In the above example, there are two important things that happen:

  1. The first expression { action ; } combination, $3 <= 30 { printf "%s\t%s\n", $0,"**" ; }, prints out lines with quantity less than or equal to 30 and adds a (**) at the end of each line. The value of the quantity is accessed using the $3 field variable.
  2. The second expression { action ; } combination, $3 > 30 { print $0 ;}, prints out lines unchanged, since their quantity is greater than 30.

One more example:

# awk '$3 <= 20 { printf "%s\t%s\n", $0,"TRUE" ; } $3 > 20  { print $0 ;} ' food_list.txt 

No	Item_Name		Quantity	Price
1	Mangoes			   45		$3.45
2	Apples			   25		$2.45
3	Pineapples		   5		$4.45	TRUE
4	Tomatoes		   25		$3.45
5	Onions			   15           $1.45	TRUE
6       Bananas	                   30           $3.45

In this example, we want to flag lines with quantity less than or equal to 20 with the word (TRUE) at the end.

Summary

This is an introductory tutorial to comparison operators in Awk; therefore, you need to try out many other options and discover more.

In case of any problems you face or any additions that you have in mind, then drop a comment in the comment section below. Remember to read the next part of the Awk series where I will take you through compound expressions.

How to Use Compound Expressions with Awk in Linux – Part 5

All along, we have been looking at simple expressions when checking whether a condition has been met or not. What if you want to use more than one expression to check for a particular condition?

In this article, we shall take a look at how you can combine multiple expressions, referred to as compound expressions, to check for a condition when filtering text or strings.

In Awk, compound expressions are built using the && (referred to as “and”) and the || (referred to as “or”) compound operators.

The general syntax for compound expressions is:

( first_expression ) && ( second_expression )

Here, first_expression and second_expression must be true to make the whole expression true.

( first_expression ) || ( second_expression) 

Here, at least one of the expressions, either first_expression or second_expression, must be true for the whole expression to be true.

Caution: Remember to always include the parentheses.

The expressions can be built using the comparison operators that we looked at in Part 4 of the awk series.

Let us now get a clear understanding using an example below:

In this example, we have a text file named tecmint_deals.txt, which contains a list of some amazing random Tecmint deals; it includes the name of the deal, the price and the type.

TecMint Deal List
No      Name                                    Price           Type
1       Mac_OS_X_Cleanup_Suite                  $9.99           Software
2       Basics_Notebook                         $14.99          Lifestyle
3       Tactical_Pen                            $25.99          Lifestyle
4       Scapple                                 $19.00          Unknown
5       Nano_Tool_Pack                          $11.99          Unknown
6       Ditto_Bluetooth_Altering_Device         $33.00          Tech
7       Nano_Prowler_Mini_Drone                 $36.99          Tech 

Say that we want to print and flag only deals that are above $20 and of type “Tech”, using a (*) sign at the end of each line.

We shall need to run the command below.

# awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' tecmint_deals.txt 

6	Ditto_Bluetooth_Altering_Device		$33.00		Tech	*
7	Nano_Prowler_Mini_Drone			$36.99          Tech	 *

In this example, we have used two expressions in a compound expression:

  1. The first expression, ($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/), checks for lines with deals priced above $20; it is only true if the value of $3, which is the price, matches the pattern /^\$[2-9][0-9]*\.[0-9][0-9]$/.
  2. The second expression, ($4 == "Tech"), checks whether the deal is of type “Tech”; it is only true if the value of $4 equals "Tech".

Remember, a line will only be flagged with the (*) if both the first and second expressions are true, as the && operator requires.
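
The || operator works the same way; as a hedged variant of the command above, this flags deals that are of type "Tech" or "Lifestyle", whatever their price:

# awk '($4 == "Tech") || ($4 == "Lifestyle") { printf "%s\t%s\n", $0, "*" ; }' tecmint_deals.txt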

Summary

Some conditions always require building compound expressions for you to match exactly what you want. When you understand the use of comparison and compound expression operators, then filtering text or strings based on difficult conditions will become easy.

Hope you find this guide useful, and for any questions or additions, always remember to leave a comment; your concern will be addressed accordingly.

How to Use ‘next’ Command with Awk in Linux – Part 6

In this sixth part of the Awk series, we shall look at using the next command, which tells Awk to skip all remaining patterns and expressions you have provided and instead read the next input line.

The next command helps you to prevent executing what I would refer to as time-wasting steps in a command execution.

To understand how it works, let us consider a file called food_list.txt that looks like this:

Food List Items
No      Item_Name               Price           Quantity
1       Mangoes                 $3.45              5
2       Apples                  $2.45              25
3       Pineapples              $4.45              55
4       Tomatoes                $3.45              25
5       Onions                  $1.45              15
6       Bananas                 $3.45              30

Consider running the following command, which will flag food items whose quantity is less than or equal to 20 with a (*) sign at the end of each line:

# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; } $4 > 20 { print $0 ;} ' food_list.txt 

No	Item_Name		Price		Quantity
1	Mangoes			$3.45		   5	*
2	Apples			$2.45              25
3	Pineapples		$4.45              55
4	Tomatoes		$3.45              25 
5	Onions			$1.45              15	*
6	Bananas	                $3.45              30

The command above actually works as follows:

  1. First, it checks whether the quantity, the fourth field of each input line, is less than or equal to 20; if a value meets that condition, the line is printed and flagged with the (*) sign at the end, using expression one: $4 <= 20
  2. Secondly, it checks whether the fourth field of each input line is greater than 20, and if a line meets the condition, it gets printed using expression two: $4 > 20

But there is one problem here: when the first expression is executed, a line that we want to flag is printed using { printf "%s\t%s\n", $0,"*" ; }, and then in the same step the second expression is also checked, which becomes a time-wasting factor.

So there is no need to execute the second expression, $4 > 20, again for lines that have already been flagged and printed by the first expression.

To deal with this problem, you have to use the next command as follows:

# awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } $4 > 20 { print $0 ;} ' food_list.txt

No	Item_Name		Price		Quantity
1	Mangoes			$3.45		   5	*
2	Apples			$2.45              25
3	Pineapples		$4.45              55
4	Tomatoes		$3.45              25 
5	Onions			$1.45              15	*
6	Bananas	                $3.45              30

After a single input line is printed using $4 <= 20 { printf "%s\t%s\n", $0,"*" ; next ; }, the next command included will help skip the second expression $4 > 20 { print $0 ;}, so execution goes to the next input line without having to waste time on checking whether the quantity is greater than 20.

The next command is very important in writing efficient commands, and where necessary, you can always use it to speed up the execution of a script. Prepare for the next part of the series, where we shall look at using standard input (STDIN) as input for Awk.

Hope you find this how-to guide helpful, and as always, you can put your thoughts in writing by leaving a comment in the comment section below.

How to Read Awk Input from STDIN in Linux – Part 7

In the previous parts of the Awk tool series, we looked at reading input mostly from a file (or files), but what if you want to read input from STDIN?

In this Part 7 of the Awk series, we shall look at a few examples where you can filter the output of other commands instead of reading input from a file.

We shall start with the dir utility, which works similarly to the ls command. In the first example below, we use the output of the dir -l command as input for Awk to print the owner’s username, group name and the files they own in the current directory:

# dir -l | awk '{print $3, $4, $9;}'

List Files Owned By User in Directory

Take a look at another example where we employ awk expressions; here, we want to print files owned by the root user, using an expression to filter strings, as in the awk command below:

# dir -l | awk '$3=="root" {print $1,$3,$4, $9;} '

List Files Owned by Root User

The command above includes the (==) comparison operator to help us filter out files in the current directory which are owned by the root user. This is achieved using the expression $3=="root".

Let us look at another example where we use an awk comparison operator to match a certain string.

Here, we have used the cat utility to view the contents of a file named tecmint_deals.txt and we want to view the deals of type Tech only, so we shall run the following commands:

# cat tecmint_deals.txt
# cat tecmint_deals.txt | awk '$4 ~ /tech/{print}'
# cat tecmint_deals.txt | awk '$4 ~ /Tech/{print}'

Use Awk Comparison Operator to Match String

In the example above, we have used the value ~ /pattern/ comparison operator; there are two commands here to bring out something very important.

When you run the command with the pattern tech, nothing is printed out because there is no deal of that type, but with Tech, you get deals of type Tech.

So always be careful when using this comparison operator; it is case-sensitive, as we have seen above.

You can always use the output of another command as input for awk instead of reading input from a file; this is very simple, as we have seen in the examples above.
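
As one more hedged sketch, you can feed the output of ps to awk in the same way; the command below prints the PID and command of every process owned by root (the field positions assume the standard ps -ef column layout):

# ps -ef | awk '$1 == "root" { print $2, $8 ; }'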

Hope the examples were clear enough for you to understand; if you have any concerns, you can express them through the comment section below, and remember to check the next part of the series, where we shall look at awk features such as variables, numeric expressions and assignment operators.

Learn How to Use Awk Variables, Numeric Expressions and Assignment Operators – Part 8

The Awk command series is getting exciting, I believe; in the previous seven parts, we walked through some fundamentals of Awk that you need to master to enable you to perform some basic text or string filtering in Linux.

Starting with this part, we shall dive into advanced areas of Awk to handle more complex text or string filtering operations. Therefore, we are going to cover Awk features such as variables, numeric expressions and assignment operators.

Learn Awk Variables, Numeric Expressions and Assignment Operators

These concepts are not comprehensively distinct from the ones you may have encountered in many programming languages before, such as shell, C, Python and many others, so there is no need to worry much about this topic; we are simply revising the common ideas of using these features.

This will probably be one of the easiest Awk command sections to understand, so sit back and let’s get going.

1. Awk Variables

In any programming language, a variable is a placeholder which stores a value; when you create a variable in a program file, as the file is executed, some space is created in memory that will store the value you specify for the variable.

You can define Awk variables in the same way you define shell variables as follows:

variable_name=value 

In the syntax above:

  1. variable_name: is the name you give a variable
  2. value: the value stored in the variable

Let’s look at some examples below:

computer_name="tecmint.com"
port_no="22"
email="admin@tecmint.com"
server=computer_name

Take a look at the simple examples above: in the first variable definition, the value tecmint.com is assigned to the variable computer_name.

Furthermore, the value 22 is assigned to the variable port_no; it is also possible to assign the value of one variable to another variable, as in the last example, where we assigned the value of computer_name to the variable server.

If you can recall, right from part 2 of this Awk series where we covered field editing, we talked about how Awk divides input lines into fields and uses the standard field access operator, $, to read the different fields that have been parsed. We can also use variables to store the values of specific fields as follows:

first_name=$2
second_name=$3

In the examples above, the value of first_name is set to the second field and second_name is set to the third field.

As an illustration, consider a file named names.txt which contains a list of an application’s users indicating their first and last names plus gender. Using the cat command, we can view the contents of the file as follows:

$ cat names.txt

List File Content Using cat Command

Then, we can also use the variables first_name and second_name to store the first and second names of the first user on the list by running the Awk command below:

$ awk '/Aaron/{ first_name=$2 ; second_name=$3 ; print first_name, second_name ; }' names.txt

Store Variables Using Awk Command

Let us also take a look at another case: when you issue the command uname -a on your terminal, it prints out all your system information.

The second field contains your hostname; therefore, we can store the hostname in a variable called hostname and print it using Awk as follows:

$ uname -a
$ uname -a | awk '{hostname=$2 ; print hostname ; }' 

Store Command Output to Variable Using Awk

2. Numeric Expressions

In Awk, numeric expressions are built using the following numeric operators:

  1. * : multiplication operator
  2. + : addition operator
  3. / : division operator
  4. - : subtraction operator
  5. % : modulus operator
  6. ^ : exponentiation operator

The syntax for a numeric expression is:

$ operand1 operator operand2

In the form above, operand1 and operand2 can be numbers or variable names, and operator is any of the operators above.

Below are some examples to demonstrate how to build numeric expressions:

counter=0
num1=5
num2=10
num3=num2-num1
counter=counter+1
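
To see one of these expressions evaluated, you can hand Awk a single input line with echo (a minimal sketch; the empty string is there only so that Awk has a line to act on):

$ echo "" | awk '{ num1=5 ; num2=10 ; num3=num2-num1 ; print num3 ; }'
5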

To understand the use of numeric expressions in Awk, we shall consider the example below, with the file domains.txt, which contains all the domains owned by Tecmint.

news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com

To view the contents of the file, use the command below:

$ cat domains.txt

View Contents of File

If we want to count the number of times the domain tecmint.com appears in the file, we can write a simple script to do that as follows:

#!/bin/bash
for file in $@; do
        if [ -f $file ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk  '/^tecmint.com/ { counter=counter+1 ; printf "%s\n", counter ; }'   $file
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Shell Script to Count a String or Text in File

After creating the script, save it and make it executable; when we run it with the file domains.txt as our input, we get the following output:

$ ./script.sh  ~/domains.txt

Script to Count String or Text

From the output of the script, there are 6 lines in the file domains.txt which contain tecmint.com; to confirm that, you can manually count them.

3. Assignment Operators

The last Awk feature we shall cover is assignment operators; there are several assignment operators in Awk, and these include the following:

  1. *= : multiplication assignment operator
  2. += : addition assignment operator
  3. /= : division assignment operator
  4. -= : subtraction assignment operator
  5. %= : modulus assignment operator
  6. ^= : exponentiation assignment operator

The simplest syntax of an assignment operation in Awk is as follows:

$ variable_name=variable_name operator operand

Examples:

counter=0
counter=counter+1

num=20
num=num-1

You can use the assignment operators above to shorten assignment operations in Awk; considering the previous examples, we could perform the assignments in the following form:

variable_name operator=operand
counter=0
counter+=1

num=20
num-=1
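
Again, a quick sketch shows the shortened operators in action (here += followed by *= applied to the same variable, again using echo just to give Awk one line of input):

$ echo "" | awk '{ counter=0 ; counter+=2 ; counter*=3 ; print counter ; }'
6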

Therefore, we can alter the Awk command in the shell script we just wrote above, using the += assignment operator, as follows:

#!/bin/bash
for file in $@; do
        if [ -f $file ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk  '/^tecmint.com/ { counter+=1 ; printf  "%s\n",  counter ; }'   $file
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Alter Shell Script

In this segment of the Awk series, we covered some powerful Awk features, that is, variables, building numeric expressions and using assignment operators, plus a few illustrations of how we can actually use them.

These concepts are not any different from the ones in other programming languages, but there may be some significant distinctions in Awk programming.

In part 9, we shall look at more Awk features, that is, the special patterns BEGIN and END.

Learn How to Use Awk Special Patterns ‘BEGIN and END’ – Part 9

In Part 8 of this Awk series, we introduced some powerful Awk command features, that is variables, numeric expressions and assignment operators.

As we advance, in this segment, we shall cover more Awk features, and that is the special patterns: BEGIN and END.

Learn Awk Patterns BEGIN and END

These special features will prove helpful as we try to expand on and explore more methods of building complex Awk operations.

To get started, let us drive our thoughts back to the introduction of the Awk series; remember, when we started this series, I pointed out that the general syntax for running an Awk command is:

# awk 'script' filenames  

And in the syntax above, the Awk script has the form:

/pattern/ { actions } 

When you consider the pattern in the script, it is normally a regular expression; additionally, you can also think of the pattern as the special patterns BEGIN and END. Therefore, we can also write an Awk command in the form below:

awk '
 	BEGIN { actions } 
 	/pattern/ { actions }
 	/pattern/ { actions }
            ...
	 END { actions } 
' filenames  

In the event that you use the special patterns: BEGIN and END in an Awk script, this is what each of them means:

  1. BEGIN pattern: means that Awk will execute the action(s) specified in BEGIN once, before any input lines are read.
  2. END pattern: means that Awk will execute the action(s) specified in END once, after all input lines have been read and just before it actually exits.

And the flow of execution of an Awk command script which contains these special patterns is as follows:

  1. When the BEGIN pattern is used in a script, all the actions for BEGIN are executed once before any input line is read.
  2. Then an input line is read and parsed into the different fields.
  3. Next, each of the non-special patterns specified is compared with the input line for a match; when a match is found, the action(s) for that pattern are then executed. This stage is repeated for all the patterns you have specified.
  4. Next, stage 2 and 3 are repeated for all input lines.
  5. When all input lines have been read and dealt with, in case you specify the END pattern, the action(s) will be executed.

You should always remember this sequence of execution when working with the special patterns to achieve the best results in an Awk operation.

To understand it all, let us illustrate using the example from part 8, about the list of domains owned by Tecmint, as stored in a file named domains.txt.

news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
tecmint.com
news.tecmint.com
tecmint.com
linuxsay.com
windows.tecmint.com
tecmint.com
$ cat ~/domains.txt

View Contents of File

In this example, we want to count the number of times the domain tecmint.com is listed in the file domains.txt. So we wrote a small shell script to help us do that using the idea of variables, numeric expressions and assignment operators which has the following content:

#!/bin/bash
for file in $@; do
        if [ -f $file ] ; then
                #print out filename
                echo "File is: $file"
                #print a number incrementally for every line containing tecmint.com 
                awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Let us now employ the two special patterns, BEGIN and END, in the Awk command in the script above, as follows:

We shall alter the script:

awk '/^tecmint.com/ { counter+=1 ; printf "%s\n", counter ; }' $file

To:

awk ' BEGIN {  print "The number of times tecmint.com appears in the file is:" ; }
                      /^tecmint.com/ {  counter+=1  ;  }
                      END {  printf "%s\n",  counter  ; } 
                    '  $file

After making the changes to the Awk command, the complete shell script now looks like this:

#!/bin/bash
for file in $@; do
        if [ -f $file ] ; then
                #print out filename
                echo "File is: $file"
                #print the total number of times tecmint.com appears in the file
                awk ' BEGIN {  print "The number of times tecmint.com appears in the file is:" ; }
                      /^tecmint.com/ {  counter+=1  ;  }
                      END {  printf "%s\n",  counter  ; } 
                    '  $file
        else
                #print error info in case input is not a file
                echo "$file is not a file, please specify a file." >&2 && exit 1
        fi
done
#terminate script with exit code 0 in case of successful execution 
exit 0

Awk BEGIN and END Patterns

When we run the script above, it will first of all print the name of the file domains.txt; then the Awk command script is executed, where the BEGIN special pattern helps us print out the message “The number of times tecmint.com appears in the file is:” before any input lines are read from the file.

Then our pattern, /^tecmint.com/, is compared against every input line, and the action { counter+=1 ; } is executed for each line that matches, counting the number of times tecmint.com appears in the file.

Finally, the END pattern will print the total number of times the domain tecmint.com appears in the file.

$ ./script.sh ~/domains.txt 

Script to Count Number of Times String Appears

To conclude, we walked through more Awk features, exploring the concepts of the special patterns BEGIN and END.

As I pointed out before, these Awk features will help us build more complex text filtering operations; there is more to cover under Awk features, and in part 10, we shall approach the idea of Awk built-in variables, so stay connected.

Learn How to Use Awk Built-in Variables – Part 10

As we continue to uncover Awk’s features, in this part of the series we shall walk through the concept of built-in variables in Awk. There are two types of variables you can use in Awk: user-defined variables, which we covered in Part 8, and built-in variables.

Awk Built in Variables Examples

Built-in variables have values already defined in Awk, but we can also carefully alter those values. The built-in variables include:

  1. FILENAME : current input file name (do not change the variable name)
  2. NR : number of the current input line (that is, input line 1, 2, 3 and so on; do not change the variable name)
  3. NF : number of fields in the current input line (do not change the variable name)
  4. OFS : output field separator
  5. FS : input field separator
  6. ORS : output record separator
  7. RS : input record separator

Let us proceed to illustrate the use of some of the Awk built-in variables above:

To read the filename of the current input file, you can use the FILENAME built-in variable as follows:

$ awk ' { print FILENAME } ' ~/domains.txt 

Awk FILENAME Variable

You will realize that the filename is printed out for each input line; that is the default behavior of Awk when you use the FILENAME built-in variable.

Use NR to count the number of lines (records) in an input file; remember that it also counts the empty lines, as we shall see in the example below.

When we view the file domains.txt using the cat command, it contains 14 lines of text and 2 empty lines:

$ cat ~/domains.txt

Print Contents of File

$ awk ' END { print "Number of records in file is: ", NR } ' ~/domains.txt 

Awk Count Number of Lines

To count the number of fields in a record or line, we use the NF built-in variable as follows:

$ cat ~/names.txt

List File Contents

$ awk '{ print "Record:",NR,"has",NF,"fields" ; }' ~/names.txt

Awk Count Number of Fields in File

Next, you can also specify an input field separator using the FS built-in variable, it defines how Awk divides input lines into fields.

The default values for FS are the space and the tab, but we can change the value of FS to any character that will instruct Awk to divide input lines accordingly.

There are two methods to do this:

  1. one method is to use the FS built-in variable
  2. and the second is to invoke the -F Awk option

Consider the file /etc/passwd on a Linux system; the fields in this file are divided using the : character, so we can specify it as the new input field separator when we want to filter out certain fields, as in the following examples.

We can use the -F option as follows:

$ awk -F':' '{ print $1, $4 ;}' /etc/passwd

Awk Filter Fields in Password File

Optionally, we can also take advantage of the FS built-in variable as below:

$ awk ' BEGIN { FS=":" ; } { print $1, $4 ; } ' /etc/passwd

Filter Fields in File Using Awk

To specify an output field separator, use the OFS built-in variable; it defines how the output fields will be separated, using the character we specify, as in the example below:

$ awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd

Add Separator to Field in File
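
The record separators work the same way; as a hedged illustration, setting ORS makes Awk end each output record with the string of your choice, here printing every username followed by a blank line:

$ awk -F':' ' BEGIN { ORS="\n\n" ; } { print $1 ; } ' /etc/passwd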

In this Part 10, we have explored the idea of using Awk built-in variables, which come with predefined values. We can also change these values, though it is not recommended to do so unless you know what you are doing. After this, we shall progress to cover how we can use shell variables in Awk command operations.

How to Allow Awk to Use Shell Variables – Part 11

When we write shell scripts, we normally include other smaller programs or commands such as Awk operations in our scripts. In the case of Awk, we have to find ways of passing some values from the shell to Awk operations.

This can be done by using shell variables within Awk commands, and in this part of the series, we shall learn how to allow Awk to use shell variables that may contain values we want to pass to Awk commands.

There are two possible ways you can enable Awk to use shell variables:

1. Using Shell Quoting

Let us take a look at an example to illustrate how you can actually use shell quoting to substitute the value of a shell variable in an Awk command. In this example, we want to search for a username in the file /etc/passwd, filter and print the user’s account information.

Therefore, we can write a test.sh script with the following content:

#!/bin/bash

#read user input
read -p "Please enter username:" username

#search for username in /etc/passwd file and print details on the screen
cat /etc/passwd | awk "/$username/ "' { print $0 }'

Thereafter, save the file and exit.

Interpretation of the Awk command in the test.sh script above:

cat /etc/passwd | awk "/$username/ "' { print $0 }'

"/$username/ " – shell quoting used to substitute value of shell variable username in Awk command. The value of username is the pattern to be searched in the file /etc/passwd.

Note that the double quote is outside the Awk script, ‘{ print $0 }’.

Then make the script executable and run it as follows:

$ chmod  +x  test.sh
$ ./test.sh 

After running the script, you will be prompted to enter a username, type a valid username and hit Enter. You will view the user’s account details from the /etc/passwd file as below:

Shell Script to Find Username in Password File

2. Using Awk’s Variable Assignment

This method is much simpler and better in comparison to method one above. Considering the example above, we can run a simple command to accomplish the job. Under this method, we use the -v option to assign a shell variable to an Awk variable.

Firstly, create a shell variable, username, and assign it the name that we want to search for in the /etc/passwd file:

username="aaronkilik"

Then type the command below and hit Enter:

# cat /etc/passwd | awk -v name="$username" ' $0 ~ name {print $0}'

Find Username in Password File Using Awk

Explanation of the above command:

  1. -v – Awk option to declare a variable
  2. username – is the shell variable
  3. name – is the Awk variable

Let us take a careful look at $0 ~ name inside the Awk script, ' $0 ~ name {print $0}'. Remember, when we covered Awk comparison operators in Part 4 of this series, one of the comparison operators was value ~ pattern, which means: true if the value matches the pattern.

The output ($0) of the cat command piped to Awk matches the pattern (aaronkilik), which is the name we are searching for in /etc/passwd; as a result, the comparison operation is true. The line containing the user’s account information is then printed on the screen.
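As a side note, you can let Awk read /etc/passwd directly instead of piping it through cat; the command below is equivalent:

$ awk -v name="$username" ' $0 ~ name { print $0 } ' /etc/passwd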

Conclusion

We have covered an important section of Awk features that can help us use shell variables within Awk commands. Many times, you will write small Awk programs or commands within shell scripts, and therefore you need to have a clear understanding of how to use shell variables within Awk commands.

In the next part of the Awk series, we shall dive into yet another critical section of Awk features, that is, flow control statements. So stay tuned, and let’s keep learning and sharing.

How to Use Flow Control Statements in Awk – Part 12

When you review all the Awk examples we have covered so far, right from the start of the Awk series, you will notice that all the commands in the various examples are executed sequentially, that is, one after the other. But in certain situations, we may want to run some text filtering operations based on certain conditions; that is where flow control statements come in.

Use Flow Control Statements in Awk

There are various flow control statements in Awk programming and these include:

  1. if-else statement
  2. for statement
  3. while statement
  4. do-while statement
  5. break statement
  6. continue statement
  7. next statement
  8. nextfile statement
  9. exit statement

However, for the scope of this series, we shall expound on the if-else, for, while and do-while statements. Remember that we already walked through how to use the next statement in Part 6 of this Awk series.

1. The if-else Statement

The expected syntax of the if statement is similar to that of the shell if statement:

if  (condition1) {
     actions1
}
else {
      actions2
}

In the above syntax, condition1 is an Awk expression, while actions1 and actions2 are Awk commands.

When condition1 is satisfied, meaning it is true, then actions1 is executed and the if statement exits; otherwise actions2 is executed.

The if statement can also be expanded to an if-else-if-else statement as below:

if (condition1){
     actions1
}
else if (condition2){
      actions2
}
else{
     actions3
}

For the form above, if condition1 is true, then actions1 is executed and the if statement exits; otherwise condition2 is evaluated, and if it is true, then actions2 is executed and the if statement exits. However, when condition2 is also false, actions3 is executed and the if statement exits.

Here is a case in point of using if statements: we have a list of users and their ages stored in the file users.txt.

We want to print a statement indicating a user’s name and whether the user’s age is less or more than 25 years old.

aaronkilik@tecMint ~ $ cat users.txt
Sarah L			35    	F
Aaron Kili		40    	M
John  Doo		20    	M
Kili  Seth		49    	M    

We can write a short shell script, test.sh, to carry out the job above; here is the content of the script:

#!/bin/bash
awk ' { 
        if ( $3 <= 25 ){
           print "User",$1,$2,"is less than 25 years old." ;
        }
        else {
           print "User",$1,$2,"is more than 25 years old" ; 
        }
}' ~/users.txt

Then save the file and exit, make the script executable and run it as follows:

$ chmod +x test.sh
$ ./test.sh
Sample Output
User Sarah L is more than 25 years old
User Aaron Kili is more than 25 years old
User John Doo is less than 25 years old.
User Kili Seth is more than 25 years old

2. The for Statement

In case you want to execute some Awk commands in a loop, then the for statement offers you a suitable way to do that, with the syntax below:

for ( counter-initialization; test-condition; counter-increment ){
      actions
}

Here, the approach is simply defined by the use of a counter to control the loop execution: first you initialize the counter, then run it against a test condition; if it is true, the actions are executed, and finally the counter is incremented. The loop terminates when the counter no longer satisfies the condition.

The following Awk command shows how the for statement works, where we want to print the numbers 0-10:

$ awk 'BEGIN{ for(counter=0;counter<=10;counter++){ print counter} }'
Sample Output
0
1
2
3
4
5
6
7
8
9
10
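The for statement is not limited to the BEGIN block; for instance, the sketch below (reusing the names.txt file from earlier) loops over the fields of each input line and prints each field on its own line:

$ awk ' { for(i=1; i<=NF; i++) print "Field",i,"is",$i ; } ' ~/names.txt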

3. The while Statement

The conventional syntax of the while statement is as follows:

while ( condition ) {
          actions
}

The condition is an Awk expression and actions are lines of Awk commands executed when the condition is true.

Below is a script to illustrate the use of the while statement to print the numbers 0-10:

#!/bin/bash
awk ' BEGIN{ counter=0 ;

        while(counter<=10){
              print counter;
              counter+=1 ;
        }
} '

Save the file and make the script executable, then run it:

$ chmod +x test.sh
$ ./test.sh
Sample Output
0
1
2
3
4
5
6
7
8
9
10

4. The do while Statement

It is a modification of the while statement above, with the following underlying syntax:

do {
     actions
}
 while (condition) 

The slight difference is that, under do while, the Awk commands are executed before the condition is evaluated. Using the same example as under the while statement above, we can illustrate the use of do while by altering the Awk command in the test.sh script as follows:

#!/bin/bash

awk ' BEGIN{ counter=0 ;  
        do{
            print counter;  
            counter+=1 ;    
        }
        while (counter<=10)   
} 
'

After modifying the script, save the file and exit. Then make the script executable and execute it as follows:

$ chmod +x test.sh
$ ./test.sh
Sample Output
0
1
2
3
4
5
6
7
8
9
10

Conclusion

This is not a comprehensive guide regarding Awk flow control statements; as I mentioned earlier on, there are several other flow control statements in Awk.

Nonetheless, this part of the Awk series should give you a clear fundamental idea of how execution of Awk commands can be controlled based on certain conditions.

You can as well expound more on the rest of the flow control statements to gain more understanding on the subject matter. Finally, in the next section of the Awk series, we shall move into writing Awk scripts.

How to Write Scripts Using Awk Programming Language – Part 13

All along from the beginning of the Awk series up to Part 12, we have been writing small Awk commands and programs on the command line and in shell scripts respectively.

However, Awk, just like the shell, is an interpreted language; therefore, with all that we have walked through from the start of this series, you can now write Awk executable scripts.

Similar to how we write a shell script, Awk scripts start with the line:

#! /path/to/awk/utility -f 

For example, on my system the Awk utility is located in /usr/bin/awk; therefore, I would start an Awk script as follows:

#! /usr/bin/awk -f 

Explaining the line above:

  1. #! – referred to as Shebang, which specifies an interpreter for the instructions in a script
  2. /usr/bin/awk – is the interpreter
  3. -f – interpreter option, used to read a program file

That said, let us now dive into looking at some examples of Awk executable scripts. We can start with the simple script below; use your favorite editor to open a new file as follows:

$ vi script.awk

And paste the code below in the file:

#!/usr/bin/awk -f 
BEGIN { printf "%s\n","Writing my first Awk executable script!" }

Save the file and exit, then make the script executable by issuing the command below:

$ chmod +x script.awk

Thereafter, run it:

$ ./script.awk
Sample Output
Writing my first Awk executable script!

A critical programmer out there may be asking, “where are the comments?” Yes, you can also include comments in your Awk script; writing comments in your code is always a good programming practice.

It helps other programmers looking through your code to understand what you are trying to achieve in each section of a script or program file.

Therefore, you can include comments in the script above as follows.

#!/usr/bin/awk -f 

#This is how to write a comment in Awk
#using the BEGIN special pattern to print a sentence 

BEGIN { printf "%s\n","Writing my first Awk executable script!" }

Next, we shall look at an example where we read input from a file. We want to search for a system user named aaronkilik in the account file, /etc/passwd, then print the username, user ID and user GID as follows:

Below is the content of our script called second.awk.

#! /usr/bin/awk -f 

#use the BEGIN special pattern to set the FS built-in variable
BEGIN { FS=":" }

#search for username: aaronkilik and print account details 
/aaronkilik/ { print "Username :",$1,"User ID :",$3,"User GID :",$4 }

Save the file and exit, make the script executable and execute it as below:

$ chmod +x second.awk
$ ./second.awk /etc/passwd
Sample Output
Username : aaronkilik User ID : 1000 User GID : 1000

In the last example below, we shall use the do-while statement to print out the numbers 0-10:

Below is the content of our script called do.awk.

#! /usr/bin/awk -f 

#printing from 0-10 using a do while statement 
#do while statement 
BEGIN {
#initialize a counter
x=0

do {
    print x;
    x+=1;
}
while(x<=10)
}

After saving the file, make the script executable as we have done before. Afterwards, run it:

$ chmod +x do.awk
$ ./do.awk
Sample Output
0
1
2
3
4
5
6
7
8
9
10

Summary

We have come to the end of this interesting Awk series. I hope you have learned a lot from all the 13 parts, as an introduction to the Awk programming language.

As I mentioned from the beginning, Awk is a complete text processing language; for that reason, you can learn many other aspects of the Awk programming language, such as environmental variables, arrays, functions (built-in & user defined) and beyond.

There are additional parts of Awk programming to learn and master, so below I have provided some links to important online resources that you can use to expand your Awk programming skills. These are not necessarily all that you need; you can also look out for useful Awk programming books.

Reference Links: The GNU Awk User’s Guide and AWK Language Programming

For any thoughts you wish to share or questions, use the comment form below.

Source

BEGINNER’S GUIDE FOR LINUX – Start Learning Linux in Minutes

Welcome to this exclusive edition “BEGINNER’S GUIDE FOR LINUX” by TecMint; this course module is specially designed and compiled for those beginners who want to make their way into the Linux learning process and do their best in today’s IT organizations. This courseware is created as per the requirements of the industrial environment with a complete introduction to Linux, which will help you build great success in Linux.

We have given special priority to Linux commands and switches, scripting, services and applications, access control, process control, user management, database management, web services, etc. Even though the Linux command line provides thousands of commands, you only need to learn a few basic commands to perform day-to-day Linux tasks.

Prerequisites:

All students must have a little understanding of computers and passion to learn new technology.

Distributions:

This courseware is presently supported on the latest releases of Linux distributions like Red Hat Enterprise Linux, CentOS, Debian, Ubuntu, etc.

Course Objectives

Section 1: Introduction To Linux and OS Installations

  1. Linux Boot Process
  2. Linux File System Hierarchy
  3. Installation of CentOS 7
  4. Installation of Various Linux Distributions including Debian, RHEL, Ubuntu, Fedora, etc
  5. Installation of CentOS on VirtualBox
  6. Dual Boot Installation of Windows and Linux

Section 2: Essentials of Basic Linux Commands

  1. List Files and Directories Using ‘ls’ Command
  2. Switch Between Linux Directories and Paths with ‘cd’ Command
  3. How to Use ‘dir’ Command with Different Options in Linux
  4. Find Out Present Working Directory Using ‘pwd’ Command
  5. Create Files using ‘touch’ Command
  6. Copy Files and Directories using ‘cp’ Command
  7. View File Content with ‘cat’ Command
  8. Check File System Disk Space Usage with ‘df’ Command
  9. Check Files and Directories Disk Usage with ‘du’ Command
  10. Find Files and Directories using find Command
  11. Find File Pattern Searches using grep Command

Section 3: Essentials of Advance Linux Commands

  1. Quirky ‘ls’ Commands Every Linux User Must Know
  2. Manage Files Effectively using head, tail and cat Commands in Linux
  3. Count Number of Lines, Words, Characters in File using ‘wc’ Command
  4. Basic ‘sort’ Commands to Sort Files in Linux
  5. Advance ‘sort’ Commands to Sort Files in Linux
  6. Pydf an Alternative “df” Command to Check Disk Usage
  7. Check Linux Ram Usage with ‘free’ Command
  8. Advance ‘rename’ Command to Rename Files and Directories
  9. Print Text/String in Terminal using ‘echo’ Command

Section 4: Some More Advance Linux Commands

  1. Switching From Windows to Nix – 20 Useful Commands for Newbies – Part 1
  2. 20 Advanced Commands for Middle Level Linux Users – Part 2
  3. 20 Advanced Commands for Linux Experts – Part 3
  4. 20 Funny Commands of Linux or Linux is Fun in Terminal – Part 1
  5. 6 Interesting Funny Commands of Linux (Fun in Terminal) – Part 2
  6. 51 Useful Lesser Known Commands for Linux Users
  7. 10 Most Dangerous Commands – You Should Never Execute on Linux

Section 5: User, Group and File Permissions Management

  1. How to Add or Create New Users using ‘useradd’ Command
  2. How to Modify or Change Users Attributes using ‘usermod’ Command
  3. Managing Users & Groups, File Permissions & Attributes – Advance Level
  4. Difference Between su and sudo – How to Configure sudo – Advance Level
  5. How to Monitor User Activity with psacct or acct Tools

Section 6: Linux Package Management

  1. Yum Package Management – CentOS, RHEL and Fedora
  2. RPM Package Management – CentOS, RHEL and Fedora
  3. APT-GET and APT-CACHE Package Management – Debian, Ubuntu
  4. DPKG Package Management – Debian, Ubuntu
  5. Zypper Package Management – Suse and OpenSuse
  6. Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper – Advance Level
  7. 27 ‘DNF’ (Fork of Yum) Commands for RPM Package Management – New Update

Section 7: System Monitoring & Cron Scheduling

  1. Linux Process Monitoring with top Command
  2. Linux Process Management with Kill, Pkill and Killall Commands
  3. Linux File Process Management with lsof Commands
  4. Linux Job Scheduling with Cron
  5. 20 Command Line Tools to Monitor Linux Performance – Part 1
  6. 13 Linux Performance Monitoring Tools – Part 2
  7. Nagios Monitoring Tool for Linux – Advance Level
  8. Zabbix Monitoring Tool for Linux – Advance Level
  9. Shell Script to Monitor Network, Disk Usage, Uptime, Load Average and RAM – New Update

Section 8: Linux Archiving/Compression, Backup/Sync and Recovery

Archiving/Compression Files
  1. How to Archive/Compress Linux Files and Directories using ‘tar’ Command
  2. How to Open, Extract and Create RAR Files in Linux
  3. 5 Tools to Archive/Compress Files in Linux
  4. How to Archive/Compress Files and Setting File Attributes – Advance Level
Backup/Sync Files and Directories in Linux
  1. How to Copy/Synchronize Files and Directories Locally/Remotely with rsync
  2. How to Transfer Files/Folders in Linux using scp
  3. Rsnapshot (Rsync Based) – A Local/Remote File System Backup Tool
  4. Sync Two Apache Web Servers/Websites Using Rsync – Advance Level
Backup/Recovery Linux Filesystems
  1. Backup and Restore Linux Systems using Redo Backup Tool
  2. How to Clone/Backup Linux Systems Using – Mondo Rescue Disaster Recovery Tool
  3. How to Recover Deleted Files/Folders using ‘Scalpel’ Tool
  4. 8 “Disk Cloning/Backup” Softwares for Linux Servers

Section 9: Linux File System / Network Storage Management

  1. What is Ext2, Ext3 & Ext4 and How to Create and Convert Linux File Systems
  2. Understanding Linux File System Types
  3. Linux File System Creation and Configurations – Advance Level
  4. Setting Up Standard Linux File Systems and Configuring NFSv4 Server – Advance Level
  5. How to Mount/Unmount Local and Network (Samba & NFS) Filesystems – Advance Level
  6. How to Create and Manage Btrfs File System in Linux – Advance Level
  7. Introduction to GlusterFS (File System) and Installation – Advance Level

Section 10: Linux LVM Management

  1. Setup Flexible Disk Storage with Logical Volume Management
  2. How to Extend/Reduce LVM’s (Logical Volume Management)
  3. How to Take Snapshot/Restore LVM’s
  4. Setup Thin Provisioning Volumes in LVM
  5. Manage Multiple LVM Disks using Striping I/O
  6. Migrating LVM Partitions to New Logical Volume

Section 11: Linux RAID Management

  1. Introduction to RAID, Concepts of RAID and RAID Levels
  2. Creating Software RAID0 (Stripe) on ‘Two Devices’ Using ‘mdadm’
  3. Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux
  4. Creating RAID 5 (Striping with Distributed Parity) in Linux
  5. Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux
  6. Setting Up RAID 10 or 1+0 (Nested) in Linux
  7. Growing an Existing RAID Array and Removing Failed Disks in Linux
  8. Assembling Partitions as RAID Devices – Creating & Managing System Backups

Section 12: Manage Services in Linux

  1. Configure Linux Services to Start and Stop Automatically
  2. How to Stop and Disable Unwanted Services in Linux
  3. How to Manage ‘Systemd’ Services Using Systemctl in Linux
  4. Managing System Startup Process and Services in Linux

Section 13: Linux System Security and Firewall

Linux Security and Tools
  1. 25 Hardening Security Tips for Linux Servers
  2. 5 Best Practices to Secure and Protect SSH Server
  3. How to Password Protect Grub in Linux
  4. Protect SSH Logins with SSH & MOTD Banner Messages
  5. How to Audit Linux Systems using Lynis Tool
  6. Secure Files/Directories using ACLs (Access Control Lists) in Linux
  7. How to Audit Network Performance, Security, and Troubleshooting in Linux
  8. Mandatory Access Control Essentials with SELinux – New Update
Linux Firewall and Tools
  1. Basic Guide on IPTables (Linux Firewall) Tips / Commands
  2. How To Setup an Iptables Firewall in Linux
  3. How to Configure ‘FirewallD’ in Linux
  4. Useful ‘FirewallD’ Rules to Configure and Manage Firewall in Linux
  5. How to Install and Configure UFW – An Un-complicated FireWall
  6. Shorewall – A High-Level Firewall for Configuring Linux Servers
  7. Install ConfigServer Security & Firewall (CSF) in Linux
  8. How to Install ‘IPFire’ Free Firewall Linux Distribution
  9. How to Install and Configure pfSense 2.1.5 (Firewall/Router) in Linux
  10. 10 Useful Open Source Security Firewalls for Linux Systems

Section 14: LAMP (Linux, Apache, MySQL/MariaDB and PHP) Setup’s

  1. Installing LAMP in RHEL/CentOS 6.0
  2. Installing LAMP in RHEL/CentOS 7.0
  3. Ubuntu 14.04 Server Installation Guide and Setup LAMP
  4. Installing LAMP in Arch Linux
  5. Setting Up LAMP in Ubuntu Server 14.10
  6. Installing LAMP in Gentoo Linux
  7. Creating Your Own Webserver and Hosting A Website from Your Linux Box
  8. Apache Virtual Hosting: IP Based and Name Based Virtual Hosts in Linux
  9. How to Setup Standalone Apache Server with Name-Based Virtual Hosting with SSL Certificate
  10. Creating Apache Virtual Hosts with Enable/Disable Vhosts Options in RHEL/CentOS 7.0
  11. Creating Virtual Hosts, Generate SSL Certificates & Keys and Enable CGI Gateway in Gentoo Linux
  12. Protect Apache Against Brute Force or DDoS Attacks Using Mod_Security and Mod_evasive Modules
  13. 13 Apache Web Server Security and Hardening Tips
  14. How to Sync Two Apache Web Servers/Websites Using Rsync
  15. How to Install ‘Varnish’ (HTTP Accelerator) and Perform Load Testing Using Apache Benchmark
  16. Installing and Configuring LAMP/LEMP Stack on Debian 8 Jessie – New Update

Section 15: LEMP (Linux, Nginx, MySQL/MariaDB and PHP) Setup’s

  1. Install LEMP in Linux
  2. Installing FcgiWrap and Enabling Perl, Ruby and Bash Dynamic Languages on Gentoo LEMP
  3. Installing LEMP in Gentoo Linux
  4. Installing LEMP in Arch Linux

Section 16: MySQL/MariaDB Administration

  1. MySQL Basic Database Administration Commands
  2. 20 MySQL (Mysqladmin) Commands for Database Administration in Linux
  3. MySQL Backup and Restore Commands for Database Administration
  4. How to Setup MySQL (Master-Slave) Replication
  5. Mytop (MySQL Database Monitoring) in Linux
  6. Install Mtop (MySQL Database Server Monitoring) in Linux
  7. https://www.tecmint.com/mysql-performance-monitoring/

Section 17: Basic Shell Scripting

  1. Understand Linux Shell and Basic Shell Scripting Language Tips – Part I
  2. 5 Shell Scripts for Linux Newbies to Learn Shell Programming – Part II
  3. Sailing Through The World of Linux BASH Scripting – Part III
  4. Mathematical Aspect of Linux Shell Programming – Part IV
  5. Calculating Mathematical Expressions in Shell Scripting Language – Part V
  6. Understanding and Writing functions in Shell Scripts – Part VI
  7. Deeper into Function Complexities with Shell Scripting – Part VII
  8. Working with Arrays in Linux Shell Scripting – Part 8
  9. An Insight of Linux “Variables” in Shell Scripting Language – Part 9
  10. Understanding and Writing ‘Linux Variables’ in Shell Scripting – Part 10
  11. Nested Variable Substitution and Predefined BASH Variables in Linux – Part 11

Section 18: Linux Interview Questions

  1. 15 Interview Questions on Linux “ls” Command – Part 1
  2. 10 Useful ‘ls’ Command Interview Questions – Part 2
  3. Basic Linux Interview Questions and Answers – Part 1
  4. Basic Linux Interview Questions and Answers – Part 2
  5. Linux Interview Questions and Answers for Linux Beginners – Part 3
  6. Core Linux Interview Questions and Answers
  7. Useful Random Linux Interview Questions and Answers
  8. Interview Questions and Answers on Various Commands in Linux
  9. Useful Interview Questions on Linux Services and Daemons
  10. Basic MySQL Interview Questions for Database Administrators
  11. MySQL Database Interview Questions for Beginners and Intermediates
  12. Advance MySQL Database “Interview Questions and Answers” for Linux Users
  13. Apache Interview Questions for Beginners and Intermediates
  14. VsFTP Interview Questions and Answers – Part 1
  15. Advance VsFTP Interview Questions and Answers – Part 2
  16. Useful SSH (Secure Shell) Interview Questions and Answers
  17. Useful “Squid Proxy Server” Interview Questions and Answers in Linux
  18. Linux Firewall Iptables Interview Questions – New Update
  19. Basic Interview Questions on Linux Networking – Part 1 – New Update

Section 19: Shell Scripting Interview Questions

  1. Useful ‘Interview Questions and Answers’ on Linux Shell Scripting
  2. Practical Interview Questions and Answers on Linux Shell Scripting

Section 20: Free Linux Books for Learning

  1. Complete Linux Command Line Cheat Sheet
  2. The GNU/Linux Advanced Administration Guide
  3. Securing & Optimizing Linux Servers
  4. Linux Patch Management: Keeping Linux Up To Date
  5. Introduction to Linux – A Hands on Guide
  6. Understanding the Linux® Virtual Memory Manager
  7. Linux Bible – Packed with Updates and Exercises
  8. A Newbie’s Getting Started Guide to Linux
  9. Linux from Scratch – Create Your Own Linux OS
  10. Linux Shell Scripting Cookbook, Second Edition
  11. Securing & Optimizing Linux: The Hacking Solution
  12. User Mode Linux – Understanding and Administration
  13. Bash Guide for Linux Beginners – New Update

Section 21: Linux Certifications – Preparation Guides

  1. RHCSA (Red Hat Certified System Administrator) Certification Guide
  2. LFCS (Linux Foundation Certified Sysadmin) Certification Guide
  3. LFCE (Linux Foundation Certified Engineer) Certification Guide

Source

Learn How to Generate and Verify Files with MD5 Checksum in Linux

A checksum is a value computed from a block of data which can be used later to detect errors introduced into that data during storage or transmission. MD5 (Message Digest 5) sums can be used as a checksum to verify files or strings in a Linux file system.

MD5 sums are 128-bit values (displayed as 32 hexadecimal characters) resulting from running the MD5 algorithm against a specific file. The MD5 algorithm is a popular hash function that generates a 128-bit message digest referred to as a hash value; when you generate one for a particular file, it is precisely the same on any machine, no matter the number of times it is generated.

It is normally very difficult to find two distinct files that result in the same hash. Therefore, you can use md5sum to check digital data integrity by determining that a file or ISO you downloaded is a bit-for-bit copy of the remote file or ISO.

Suggested Read: Progress – Monitor Progress for (cp, mv, dd, tar, etc.) Commands in Linux

In Linux, the md5sum program computes and checks MD5 hash values of a file. It is part of the GNU Core Utilities package and therefore comes pre-installed on most, if not all, Linux distributions.

Take a look at the contents of /etc/group saved as groups.csv below.

root:x:0:
daemon:x:1:
bin:x:2:
sys:x:3:
adm:x:4:syslog,aaronkilik
tty:x:5:
disk:x:6:
lp:x:7:
mail:x:8:
news:x:9:
uucp:x:10:
man:x:12:
proxy:x:13:
kmem:x:15:
dialout:x:20:
fax:x:21:
voice:x:22:
cdrom:x:24:aaronkilik
floppy:x:25:
tape:x:26:
sudo:x:27:aaronkilik
audio:x:29:pulse
dip:x:30:aaronkilik

The md5sum command below will generate a hash value for the file as follows:

$ md5sum groups.csv

bc527343c7ffc103111f3a694b004e2f  groups.csv

Now alter the contents of the file by removing the first line, root:x:0:, and run the command a second time, observing the hash value:

$ md5sum groups.csv

46798b5cfca45c46a84b7419f8b74735  groups.csv

You will notice that the hash value has now changed, indicating that the contents of the file were altered.

Now, put back the first line of the file, root:x:0:, rename it to groups_list.txt, and run the command below to generate its hash value again:

$ md5sum groups_list.txt

bc527343c7ffc103111f3a694b004e2f  groups_list.txt

From the output above, the hash value is still the same even though the file has been renamed, since its original content is unchanged.

Important: md5sum only works with the file content, rather than the file name.

The file groups_list.txt is a duplicate of groups.csv, so try to generate the hash values of both files at the same time, as follows.

You will see that they both have equal hash values; this is because they have the exact same content.

$ md5sum groups_list.txt  groups.csv 

bc527343c7ffc103111f3a694b004e2f  groups_list.txt
bc527343c7ffc103111f3a694b004e2f  groups.csv

You can redirect the hash value(s) of one or more files into a text file to store and share them with others. For the two files above, you can issue the command below to redirect the generated hash values into a text file for later use:

$ md5sum groups_list.txt  groups.csv > myfiles.md5

To check that the files have not been modified since you created the checksum, run the next command. You should be able to view the name of each file along with “OK”.

Suggested Read: Find Top 15 Processes by Memory Usage in Linux

The -c or --check option tells the md5sum command to read MD5 sums from the file and check them.

$ md5sum -c myfiles.md5

groups_list.txt: OK
groups.csv: OK

Remember that after creating the checksum, you cannot rename the files, or else you will get a “No such file or directory” error when you try to verify the files under their new names.

For instance:

$ mv groups_list.txt new.txt
$ mv groups.csv file.txt
$ md5sum -c  myfiles.md5
Error Message
md5sum: groups_list.txt: No such file or directory
groups_list.txt: FAILED open or read
md5sum: groups.csv: No such file or directory
groups.csv: FAILED open or read
md5sum: WARNING: 2 listed files could not be read
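If you do need to rename the files, simply regenerate the checksum file under the new names; for example:

$ md5sum new.txt file.txt > myfiles.md5
$ md5sum -c myfiles.md5

new.txt: OK
file.txt: OK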

The concept also works for strings. In the commands below, -n tells echo not to output the trailing newline:

$ echo -n "Tecmint How-Tos" | md5sum - 

afc7cb02baab440a6e64de1a5b0d0f1b  -
$ echo -n "Tecmint How-To" | md5sum - 

65136cb527bff5ed8615bd1959b0a248  -
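This is handy when a website publishes the expected MD5 sum of a download: you can feed the published hash to md5sum -c via stdin. A sketch using the renamed file from earlier (note the two spaces between the hash and the filename):

$ echo "bc527343c7ffc103111f3a694b004e2f  new.txt" | md5sum -c -

new.txt: OK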

In this guide, I showed you how to generate hash values for files and create a checksum for later verification of file integrity in Linux. Although security vulnerabilities in the MD5 algorithm have been detected, MD5 hashes still remain useful, especially if you trust the party that creates them.

Verifying files is therefore an important aspect of file handling on your systems, to avoid downloading, storing or sharing corrupted files. Last but not least, as usual, reach us via the comment form below to seek any assistance; you can as well make some important suggestions to improve this post.

Source

How to Check MD5 Sums of Installed Packages in Debian/Ubuntu Linux

Have you ever wondered why a given binary or package installed on your system does not work according to your expectations, meaning it does not function correctly as it is supposed to, or perhaps cannot even start at all?

While downloading packages, you may face challenges such as unsteady network connections or unexpected power blackouts; this can result in the installation of a corrupted package.

Considering this an important factor in maintaining uncorrupted packages on your system, it is a vital step to verify the files on the file system against the information stored in the package, as explained in this article.

Suggested Read: Learn How to Generate and Verify Files with MD5 Checksum in Linux

How to Verify Installed Debian Packages Against MD5 Checksums

On Debian/Ubuntu systems, you can use the debsums tool to check the MD5 sums of installed packages. If you want to know information about the debsums package before installing it, you can use apt-cache like so:

$ apt-cache search debsums

Next, install it using apt command as follows:

$ sudo apt install debsums

Now it’s time to learn how to use the debsums tool to verify the MD5 sums of installed packages.

Note: I have used sudo with all the commands below because certain files may not have read permissions for regular users.

In addition, the output from the debsums command shows you the file location on the left and the check results on the right. There are three possible results you can get:

  1. OK – indicates that a file’s MD5 sum is good.
  2. FAILED – shows that a file’s MD5 sum does not match.
  3. REPLACED – means that the specific file has been replaced by a file from another package.

When you run it without any options, debsums checks every file on your system against the stock md5sum files.

$ sudo debsums
Scans File System for MD5 Sums
/usr/bin/a11y-profile-manager-indicator                                       OK
/usr/share/doc/a11y-profile-manager-indicator/copyright                       OK
/usr/share/man/man1/a11y-profile-manager-indicator.1.gz                       OK
/usr/share/accounts/providers/facebook.provider                               OK
/usr/share/accounts/qml-plugins/facebook/Main.qml                             OK
/usr/share/accounts/services/facebook-microblog.service                       OK
/usr/share/accounts/services/facebook-sharing.service                         OK
/usr/share/doc/account-plugin-facebook/copyright                              OK
/usr/share/accounts/providers/flickr.provider                                 OK
/usr/share/accounts/qml-plugins/flickr/Main.qml                               OK
/usr/share/accounts/services/flickr-microblog.service                         OK
/usr/share/accounts/services/flickr-sharing.service                           OK
/usr/share/doc/account-plugin-flickr/copyright                                OK
/usr/share/accounts/providers/google.provider                                 OK
/usr/share/accounts/qml-plugins/google/Main.qml                               OK
/usr/share/accounts/services/google-drive.service                             OK
/usr/share/accounts/services/google-im.service                                OK
/usr/share/accounts/services/picasa.service                                   OK
/usr/share/doc/account-plugin-google/copyright                                OK
/lib/systemd/system/accounts-daemon.service                                   OK
/usr/lib/accountsservice/accounts-daemon                                      OK
/usr/share/dbus-1/interfaces/org.freedesktop.Accounts.User.xml                OK
/usr/share/dbus-1/interfaces/org.freedesktop.Accounts.xml                     OK
/usr/share/dbus-1/system-services/org.freedesktop.Accounts.service            OK
/usr/share/doc/accountsservice/README                                         OK
/usr/share/doc/accountsservice/TODO                                           OK
....

To enable checking of every file and all configuration files of each package for any changes, include the -a or --all option:

$ sudo debsums --all
Check MD5 Sums of All Configuration Files
/usr/bin/a11y-profile-manager-indicator                                       OK
/usr/share/doc/a11y-profile-manager-indicator/copyright                       OK
/usr/share/man/man1/a11y-profile-manager-indicator.1.gz                       OK
/etc/xdg/autostart/a11y-profile-manager-indicator-autostart.desktop           OK
/usr/share/accounts/providers/facebook.provider                               OK
/usr/share/accounts/qml-plugins/facebook/Main.qml                             OK
/usr/share/accounts/services/facebook-microblog.service                       OK
/usr/share/accounts/services/facebook-sharing.service                         OK
/usr/share/doc/account-plugin-facebook/copyright                              OK
/etc/signon-ui/webkit-options.d/www.facebook.com.conf                         OK
/usr/share/accounts/providers/flickr.provider                                 OK
/usr/share/accounts/qml-plugins/flickr/Main.qml                               OK
/usr/share/accounts/services/flickr-microblog.service                         OK
/usr/share/accounts/services/flickr-sharing.service                           OK
/usr/share/doc/account-plugin-flickr/copyright                                OK
/etc/signon-ui/webkit-options.d/login.yahoo.com.conf                          OK
/usr/share/accounts/providers/google.provider                                 OK
/usr/share/accounts/qml-plugins/google/Main.qml                               OK
/usr/share/accounts/services/google-drive.service                             OK
/usr/share/accounts/services/google-im.service                                OK
/usr/share/accounts/services/picasa.service                                   OK
/usr/share/doc/account-plugin-google/copyright                                OK
...

It is also possible to check only the configuration files, excluding all other package files, by using the -e or --config option:

$ sudo debsums --config
Only Check MD5 Sums of Configuration Files
/etc/xdg/autostart/a11y-profile-manager-indicator-autostart.desktop           OK
/etc/signon-ui/webkit-options.d/www.facebook.com.conf                         OK
/etc/signon-ui/webkit-options.d/login.yahoo.com.conf                          OK
/etc/signon-ui/webkit-options.d/accounts.google.com.conf                      OK
/etc/dbus-1/system.d/org.freedesktop.Accounts.conf                            OK
/etc/acpi/asus-keyboard-backlight.sh                                          OK
/etc/acpi/events/asus-keyboard-backlight-down                                 OK
/etc/acpi/ibm-wireless.sh                                                     OK
/etc/acpi/events/tosh-wireless                                                OK
/etc/acpi/asus-wireless.sh                                                    OK
/etc/acpi/events/lenovo-undock                                                OK
/etc/default/acpi-support                                                     OK
/etc/acpi/events/ibm-wireless                                                 OK
/etc/acpi/events/asus-wireless-on                                             OK
/etc/acpi/events/asus-wireless-off                                            OK
/etc/acpi/tosh-wireless.sh                                                    OK
/etc/acpi/events/asus-keyboard-backlight-up                                   OK
/etc/acpi/events/thinkpad-cmos                                                OK
/etc/acpi/undock.sh                                                           OK
/etc/acpi/events/powerbtn                                                     OK
/etc/acpi/powerbtn.sh                                                         OK
/etc/init.d/acpid                                                             OK
/etc/init/acpid.conf                                                          OK
/etc/default/acpid                                                            OK
...
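All the results above read OK. To see a FAILED result in practice, you could deliberately append a line to one of the configuration files above on a disposable test system and re-check just that package; with the acpid package, the output would be along these lines (the appended comment is purely illustrative, so remember to remove it afterwards):

$ echo "# test change" | sudo tee -a /etc/default/acpid
$ sudo debsums --config acpid
/etc/init.d/acpid                                                             OK
/etc/init/acpid.conf                                                          OK
/etc/default/acpid                                                            FAILED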

Next, to only display changed files in the output of debsums, use the -c or --changed option. I didn’t find any changed files on my system.

$ sudo debsums --changed

The next command prints out files that do not have md5sum info; here we use the -l or --list-missing option. On my system, the command does not show any files.

$ sudo debsums --list-missing

Now it’s time to verify the md5 sum of a single package by specifying its name:

$ sudo debsums apache2 
Check MD5 Sum of Installed Package
/lib/systemd/system/apache2.service.d/apache2-systemd.conf                    OK
/usr/sbin/a2enmod                                                             OK
/usr/sbin/a2query                                                             OK
/usr/sbin/apache2ctl                                                          OK
/usr/share/apache2/apache2-maintscript-helper                                 OK
/usr/share/apache2/ask-for-passphrase                                         OK
/usr/share/bash-completion/completions/a2enmod                                OK
/usr/share/doc/apache2/NEWS.Debian.gz                                         OK
/usr/share/doc/apache2/PACKAGING.gz                                           OK
/usr/share/doc/apache2/README.Debian.gz                                       OK
/usr/share/doc/apache2/README.backtrace                                       OK
/usr/share/doc/apache2/README.multiple-instances                              OK
/usr/share/doc/apache2/copyright                                              OK
/usr/share/doc/apache2/examples/apache2.monit                                 OK
/usr/share/doc/apache2/examples/secondary-init-script                         OK
/usr/share/doc/apache2/examples/setup-instance                                OK
/usr/share/lintian/overrides/apache2                                          OK
/usr/share/man/man1/a2query.1.gz                                              OK
/usr/share/man/man8/a2enconf.8.gz                                             OK
/usr/share/man/man8/a2enmod.8.gz                                              OK
/usr/share/man/man8/a2ensite.8.gz                                             OK
/usr/share/man/man8/apache2ctl.8.gz                                           OK

Assuming that you are running debsums as a regular user without sudo, you can treat permission errors as warnings by employing the --ignore-permissions option:

$ debsums --ignore-permissions 

How To Generate MD5 Sums from .Deb Files

The -g option tells debsums to generate MD5 sums from deb contents, where:

  1. missing – instructs debsums to generate MD5 sums from the deb for packages which don’t provide one.
  2. all – directs debsums to ignore the on-disk sums and use the ones present in the deb file, or generated from it if none exist.
  3. keep – tells debsums to write the extracted/generated sums to the /var/lib/dpkg/info/package.md5sums file.
  4. nocheck – means the extracted/generated sums are not checked against the installed package.
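According to the debsums man page, these arguments can be combined with commas. For instance, the sketch below generates sums for packages that are missing them and writes the results under /var/lib/dpkg/info/ (verify the exact syntax against your debsums version):

$ sudo debsums --generate=missing,keep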

When you look at the contents of the directory /var/lib/dpkg/info/, you will see md5sums files for various installed packages, as shown below:

$ cd /var/lib/dpkg/info
$ ls *.md5sums
List All MD5 Sums for Packages
a11y-profile-manager-indicator.md5sums
account-plugin-facebook.md5sums
account-plugin-flickr.md5sums
account-plugin-google.md5sums
accountsservice.md5sums
acl.md5sums
acpid.md5sums
acpi-support.md5sums
activity-log-manager.md5sums
adduser.md5sums
adium-theme-ubuntu.md5sums
adwaita-icon-theme.md5sums
aisleriot.md5sums
alsa-base.md5sums
alsa-utils.md5sums
anacron.md5sums
apache2-bin.md5sums
apache2-data.md5sums
apache2.md5sums
apache2-utils.md5sums
apg.md5sums
apparmor.md5sums
app-install-data.md5sums
app-install-data-partner.md5sums
...

Remember that using the -g option alone is the same as --generate=missing; you can try to generate md5 sums for the apache2 package by running the following command.

$ sudo debsums --generate=missing apache2 

Since apache2 package on my system already has md5 sums, it will show the output below, which is the same as running:

$ sudo debsums apache2

For more interesting options and usage info, look through the debsums man page.

$ man debsums

In this article, we shared how to verify installed Debian/Ubuntu packages against MD5 checksums. This can be useful in order to avoid installing and executing corrupted binaries or package files on your system, by checking the files on the file system against the information stored in the package.

For any questions or feedback, take advantage of the comment form below. You can as well offer one or two suggestions to make this post better.

Source

fkill – Interactively Kill Processes in Linux

Fkill-cli is a free, open source, simple and cross-platform command line tool designed to interactively kill processes in Linux, developed using Nodejs. It also runs on the Windows and Mac OS X operating systems. It requires a process ID (PID) or process name in order to kill a process.

fkill - Kill Linux Processes

Requirements:

  1. Install Nodejs 8 and NPM in Linux

In this article, we will explain how to install and use fkill to interactively kill processes in Linux systems.

How to Install fkill-cli in Linux Systems

To install the fkill-cli tool, first you need to install the required Nodejs and NPM packages on your Linux distribution using the following commands.

Install Nodejs and NPM in Debian/Ubuntu

--------------- Install Node.js 8 --------------- 
$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
$ sudo apt install -y nodejs

--------------- or Install Node.js 10 ---------------
$ curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
$ sudo apt install -y nodejs

Install Nodejs and NPM in CentOS/RHEL & Fedora

--------------- Install Node.js 8 --------------- 
$ curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
$ sudo yum -y install nodejs

--------------- or Install Node.js 10 ---------------
$ curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
$ sudo yum -y install nodejs

Once the Nodejs and NPM packages are installed, you can install the fkill-cli package using the npm command with the -g option, which installs it globally.

$ sudo npm install -g fkill-cli

Once you have installed fkill-cli on your system, use the fkill command to launch it in interactive mode by running it without any arguments. Once you have selected the process you want to kill, press Enter.

$ fkill  

Run fkill Interactively

You can also provide a PID or process name on the command line; the process name is case insensitive. Here are some examples.

$ fkill 1337
$ fkill firefox

To kill a port, prefix it with a colon, for example: :19999.

$ fkill :19999
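The project’s documentation also indicates that several targets (PIDs, names and ports) can be passed in a single invocation; the targets below are purely illustrative, so verify with fkill --help if in doubt:

$ fkill 1337 firefox :8080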

You can use the -f flag to force an operation, while -v displays process arguments.

$ fkill -f 1337
$ fkill -v firefox

To view the fkill help message, use the following command.

$ fkill --help

Also check out examples of how to kill processes using traditional Linux tools such as kill, pkill and killall:

  1. A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux
  2. How to Find and Kill Running Processes in Linux
  3. How to Kill Linux Processes/Unresponsive Applications Using ‘xkill’ Command

Fkill-cli Github repository: https://github.com/sindresorhus/fkill-cli

That’s it! In this article, we have explained how to install and use fkill-cli tool in Linux with examples. Use the comment form below to ask any questions, or share your thoughts about it.

Source

How to Auto Execute Commands/Scripts During Reboot or Startup

I am always fascinated by the things going on behind the scenes when I boot a Linux system and log on. By pressing the power button on a bare metal machine or starting a virtual machine, you put in motion a series of events that lead to a fully-functional system – sometimes in less than a minute. The same is true when you log off and / or shut down the system.

What makes this more interesting and fun is the fact that you can have the operating system execute certain actions when it boots and when you logon or logout.

In this distro-agnostic article we will discuss the traditional methods for accomplishing these goals in Linux.

Note: We will assume the use of Bash as the main shell for logon and logout events. If you happen to use a different one, some of these methods may or may not work. If in doubt, refer to the documentation of your shell.

Executing Linux Scripts During Reboot or Startup

There are two traditional methods to execute a command or run scripts during startup:

Method #1 – Use a cron Job

Besides the usual format (minute / hour / day of month / month / day of week) that is widely used to indicate a schedule, cron scheduler also allows the use of @reboot. This directive, followed by the absolute path to the script, will cause it to run when the machine boots.

However, there are two caveats to this approach:

  1. the cron daemon must be running (which is the case under normal circumstances), and
  2. the script or the crontab file must include the environment variables (if any) that will be needed (refer to this StackOverflow thread for more details).
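For example, to run one of the sample scripts shown further below at boot time, you would open your crontab with crontab -e and add an @reboot line such as the following (the script path is simply the one used in this article’s examples):

@reboot /home/gacanepa/script1.sh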

Method #2 – Use /etc/rc.d/rc.local

This method is valid even for systemd-based distributions. In order for this method to work, you must grant execute permissions to /etc/rc.d/rc.local as follows:

# chmod +x /etc/rc.d/rc.local

and add your script at the bottom of the file.

The following image shows how to run two sample scripts (/home/gacanepa/script1.sh and /home/gacanepa/script2.sh) using a cron job and rc.local, respectively, and their respective results.

script1.sh:
#!/bin/bash
DATE=$(date +'%F %H:%M:%S')
DIR=/home/gacanepa
echo "Current date and time: $DATE" > $DIR/file1.txt
script2.sh:
#!/bin/bash
SITE="Tecmint.com"
DIR=/home/gacanepa
echo "$SITE rocks... add us to your bookmarks." > $DIR/file2.txt

Run Linux Scripts at Startup

Keep in mind that both scripts must be granted execute permissions previously:

$ chmod +x /home/gacanepa/script1.sh
$ chmod +x /home/gacanepa/script2.sh

Executing Linux Scripts at Logon and Logout

To execute a script at logon or logout, use ~/.bash_profile and ~/.bash_logout, respectively. Most likely, you will need to create the latter file manually. Just drop a line invoking your script at the bottom of each file in the same fashion as before and you are ready to go.
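For instance, to run the sample scripts from the previous section at logon and logout respectively, the appended lines could look like this:

$ echo "/home/gacanepa/script1.sh" >> ~/.bash_profile
$ echo "/home/gacanepa/script2.sh" >> ~/.bash_logout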

Summary

In this article we have explained how to run scripts at reboot, logon, and logout. If you can think of other methods we could have included here, feel free to use the comment form below to point them out. We look forward to hearing from you!

Source

How to Find Out File Types in Linux

The easiest way to determine the type of a file on any operating system is usually to look at its extension (for instance .xml, .sh, .c, .tar, etc.). What if a file doesn’t have an extension? How can you determine its type?

Read Also: 7 Ways to Find Out File System Types in Linux

Linux has a useful utility called file, which carries out some tests on a specified file and prints the file type once a test is successful. In this short article, we will explain useful file command examples to determine a file type in Linux.

Note: To have all the options described in this article, you should be running file version 5.25 (available in Ubuntu repositories) or newer. CentOS repositories have an older version of file command (file-5.11) which lacks some options.

You can run the following command to verify the version of the file utility, as shown.

$ file -v

file-5.33
magic file from /etc/magic:/usr/share/misc/magic

Linux file Command Examples

1. The simplest file command is as follows, where you just provide a file whose type you want to find out.

$ file etc

Find File Type in Linux
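Note that file accepts several arguments at once and prints one result per line, so you can inspect multiple files in a single run; the output typically looks like this:

$ file /etc/passwd /etc/hosts
/etc/passwd: ASCII text
/etc/hosts:  ASCII text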

2. You can also pass the names of the files to be examined from a file (one per line), which you can specify using the -f flag as shown.

$ file -f files.list

Find Files Type in Filename List

3. To make file work faster, you can exclude a test (valid tests include apptype, ascii, encoding, tokens, cdf, compress, elf, soft and tar) from the list of tests made to determine the file type; use the -e flag as shown.

$ file -e ascii -e compress -e elf etc

4. The -s option causes file to also read block or character special files, for example.

$ file -s /dev/sda

/dev/sda: DOS/MBR boot sector, extended partition table (last)

5. Adding the -z option instructs file to look inside compressed files.

$ file -z backup

Determine Compressed Files

6. If you want to report information about the contents only, not the compression, of a compressed file, use the -Z flag.

$ file -Z backup

7. You can tell the file command to output MIME type strings instead of the more traditional human-readable ones, using the -i option.

$ file -i -s /dev/sda

/dev/sda: application/octet-stream; charset=binary

8. In addition, you can get a slash-separated list of valid extensions for the file type found by adding the --extension switch.

$ file --extension /dev/sda

For more information and usage options, consult the file command man page.

$ man file

That’s all! The file command is a useful Linux utility for determining the type of a file without an extension. In this article, we shared some useful file command examples. If you have any questions or thoughts to share, use the feedback form below to reach us.

Source
