Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux

Sysstat is really a handy tool that comes with a number of utilities to monitor system resources, their performance and usage activity. A number of the utilities that we use on a daily basis come with the sysstat package. It also provides a tool that can be scheduled using cron to collect all performance and activity data.

Install Sysstat in Linux

Following is the list of tools included in the sysstat package.

Sysstat Features

  1. iostat: Reports CPU statistics and I/O statistics for I/O devices.
  2. mpstat: Details about CPUs (individual or combined).
  3. pidstat: Statistics about running processes/tasks, CPU, memory, etc.
  4. sar: Saves and reports details about different resources (CPU, memory, I/O, network, kernel, etc.).
  5. sadc: System activity data collector, used for collecting data in the backend for sar.
  6. sa1: Fetches and stores binary data in the sadc data file. This is used with sadc.
  7. sa2: Writes a summarized daily report, to be used with sar.
  8. sadf: Used for displaying data generated by sar in different formats (CSV or XML).
  9. sysstat: Man page for the sysstat utilities.
  10. nfsiostat-sysstat: I/O statistics for NFS.
  11. cifsiostat: Statistics for CIFS.

Recently, on 17th of June 2014, Sysstat 11.0.0 (stable version) was released with some interesting new features, as follows.

The pidstat command has been enhanced with some new options: the first is “-R”, which provides information about the scheduling policy and task scheduling priority, and the second is “-G”, with which we can search processes by name and get the list of all matching threads.
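
For example, assuming sysstat 11.0.0 or later is installed, the new options can be used as follows (the interval/count values and the process name here are only illustrative):

# pidstat -R 2 5        [Report scheduling policy and realtime priority, 5 reports at 2-second intervals]
# pidstat -t -G sshd    [Statistics for all processes (and their threads) whose command name matches "sshd"]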

Some new enhancements have also been made to sar, sadc and sadf with regard to the data files: data files can now be named “saYYYYMMDD” instead of “saDD” using the -D option, and they can be located in a directory other than “/var/log/sa”. The new directory is defined by setting the variable “SA_DIR”, which is used by sa1 and sa2.
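
For reference, data collection is normally scheduled through cron. A minimal sketch of such a crontab is shown below; the exact path to sa1/sa2 depends on your distribution and install prefix, and the SA_DIR variable is typically defined in the sysstat configuration file (for example /etc/sysconfig/sysstat on Red Hat based systems):

# Collect system activity data every 10 minutes (adjust the sa1 path for your install)
*/10 * * * * root /usr/local/lib/sa/sa1 1 1
# Generate a summarized daily report just before midnight
53 23 * * * root /usr/local/lib/sa/sa2 -A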

Installation of Sysstat in Linux

The ‘Sysstat‘ package is also available from the default repositories of all major Linux distributions. However, the version available from the repositories is usually a little outdated. For that reason, we are going to download and install the latest version of sysstat (i.e. version 11.0.0) from the source package.

First, download the latest version of the sysstat package from the following link, or use the wget command to download it directly in the terminal.

  1. http://sebastien.godard.pagesperso-orange.fr/download.html
# wget http://pagesperso-orange.fr/sebastien.godard/sysstat-11.0.0.tar.gz

Download Sysstat Package

Next, extract the downloaded package and move into that directory to begin the compilation process.

# tar -xvf sysstat-11.0.0.tar.gz 
# cd sysstat-11.0.0/

Here you have two options for compilation:

a). First, you can use iconfig (which gives you the flexibility to choose/enter customized values for each parameter).

# ./iconfig

Sysstat iconfig Command

b). Second, you can use the standard configure command to define options on a single line. You can run the ./configure --help command to get the list of supported options.

# ./configure --help

Sysstat Configure Help

Here, we move ahead with the standard option, i.e. the ./configure command, to compile the sysstat package.

# ./configure
# make
# make install		

Configure Sysstat in Linux

After the compilation process completes, you will see output similar to the above. Now, verify the sysstat version by running the following command.

# mpstat -V

sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)

Updating Sysstat in Linux

By default, sysstat uses “/usr/local” as its prefix directory. So, all binaries/utilities will be installed in the “/usr/local/bin” directory. If you have an existing sysstat package installed, its binaries will be in “/usr/bin”.

Because of the existing sysstat package, the updated version may not be reflected, since “/usr/local/bin” may not be in your “$PATH”. So, make sure that “/usr/local/bin” is in your “$PATH”, or set the --prefix option to “/usr” during compilation and remove the existing version before updating.
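
To quickly confirm which binary will actually be picked up, you can check your PATH and the resolved location of one of the tools, for example:

# echo $PATH | tr ':' '\n' | grep -x /usr/local/bin
# type -a mpstat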

# yum remove sysstat			[On RedHat based System]
# apt-get remove sysstat		[On Debian based System]
# ./configure --prefix=/usr
# make
# make install

Now again, verify the updated version of sysstat using the same ‘mpstat’ command with the ‘-V’ option.

# mpstat -V

sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)

Reference: For more information please go through Sysstat Documentation

That’s it for now. In my upcoming article, I will show some practical examples and usages of the sysstat commands; till then, stay tuned for updates and don’t forget to add your valuable thoughts about the article in the comment section below.


10 Tips On How to Use Wireshark to Analyze Packets in Your Network

In any packet-switched network, packets represent units of data that are transmitted between computers. It is the responsibility of network engineers and system administrators alike to monitor and inspect the packets for security and troubleshooting purposes.

To do this, they rely on software programs called network packet analyzers, with Wireshark perhaps being the most popular and widely used due to its versatility and ease of use. On top of this, Wireshark allows you not only to monitor traffic in real time, but also to save it to a file for later inspection.

In this article we will share 10 tips on how to use Wireshark to analyze packets in your network, and hope that when you reach the Summary section you will feel inclined to add it to your bookmarks.

Installing Wireshark in Linux

To install Wireshark, select the right installer for your operating system / architecture from https://www.wireshark.org/download.html.

In particular, if you are using Linux, Wireshark should be available directly from your distribution’s repositories for an easier install at your convenience. Although versions may differ, the options and menus should be similar – if not identical – in each one.

------------ On Debian/Ubuntu based Distros ------------ 
$ sudo apt-get install wireshark

------------ On CentOS/RHEL based Distros ------------
$ sudo yum install wireshark

------------ On Fedora 22+ Releases ------------
$ sudo dnf install wireshark

There is a known bug in Debian and derivatives that may prevent listing the network interfaces unless you use sudo to launch Wireshark. To fix this, follow the accepted answer in this post.

Once Wireshark is running, you can select the network interface that you want to monitor under Capture:

Wireshark Network Analyzer

In this article we will use eth0, but you can choose another one if you wish. Don’t click on the interface yet – we will do so later once we have reviewed a few capture options.

Setting Capture Options

The most useful capture options we will consider are:

  1. Network interface – As we explained before, we will only analyze packets coming through eth0, either incoming or outgoing.
  2. Capture filter – This option allows us to indicate what kind of traffic we want to monitor by port, protocol, or type.

Before we proceed with the tips, it is important to note that some organizations forbid the use of Wireshark in their networks. That said, if you are not utilizing Wireshark for personal purposes make sure your organization allows its use.

For the time being, just select eth0 from the dropdown list and click Start at the bottom. You will start seeing all traffic passing through that interface. Not really useful for monitoring purposes due to the high number of packets inspected, but it’s a start.
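
If you prefer the terminal, the same live capture can be reproduced with tshark, Wireshark’s command-line companion, assuming it is installed (the -Y option applies a display filter such as the ones used in the tips below):

$ sudo tshark -i eth0               [Print a live summary of all packets on eth0]
$ sudo tshark -i eth0 -Y 'http'     [Show only packets matching the http display filter]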

Monitor Network Interface Traffic

In the above image we can also see the icons to list the available interfaces, to stop the current capture, and to restart it (red box on the left) and to configure and edit a filter (red box on the right). When you hover over one of these icons, a tooltip will be displayed to indicate what it does.

We will begin by illustrating capture options, whereas tips #7 through #10 will discuss how to actually do something useful with a capture.

TIP #1 – Inspect HTTP Traffic

Type http in the filter box and click Apply. Launch your browser and go to any site you wish:

Inspect HTTP Network Traffic

To begin every subsequent tip, stop the live capture and edit the capture filter.

TIP #2 – Inspect HTTP Traffic from a Given IP Address

In this particular tip, we will prepend ip.src==192.168.0.10&& to the filter stanza to monitor HTTP traffic coming from 192.168.0.10:

Inspect HTTP Traffic on IP Address

TIP #3 – Inspect HTTP Traffic to a Given IP Address

Closely related to #2, in this case we will use ip.dst as part of the capture filter as follows:

ip.dst==192.168.0.10&&http

Monitor HTTP Network Traffic to IP Address

To combine tips #2 and #3, you can use ip.addr in the filter rule instead of ip.src or ip.dst.

TIP #4 – Monitor Apache and MySQL Network Traffic

Sometimes you will be interested in inspecting traffic that matches either (or both) of two conditions. For example, to monitor traffic on TCP ports 80 (web server) and 3306 (MySQL / MariaDB database server), you can use an OR condition in the capture filter:

tcp.port==80||tcp.port==3306

Monitor Apache and MySQL Traffic

As in tips #2 and #3, || and the word or produce the same results. The same goes for && and the word and.

TIP #5 – Reject Packets to Given IP Address

To exclude packets not matching the filter rule, use ! and enclose the rule within parentheses. For example, to exclude packets originating from or directed to a given IP address, you can use:

!(ip.addr == 192.168.0.10)

TIP #6 – Monitor Local Network Traffic (192.168.0.0/24)

The following filter rule will display only local traffic and exclude packets going to and coming from the Internet:

ip.src==192.168.0.0/24 and ip.dst==192.168.0.0/24

Monitor Local Network Traffic

TIP #7 – Monitor the Contents of a TCP Conversation

To inspect the contents of a TCP conversation (data exchange), right-click on a given packet and choose Follow TCP stream. A window will pop up with the content of the conversation.

This will include HTTP headers if we are inspecting web traffic, and also any plain text credentials transmitted during the process, if any.

Monitor TCP Conversation

TIP #8 – Edit Coloring Rules

By now I am sure you have already noticed that each row in the capture window is colored. By default, HTTP traffic appears with a green background and black text, whereas checksum errors are shown in red text on a black background.

If you wish to change these settings, click the Edit coloring rules icon, choose a given filter and click Edit.

Customize Wireshark Output in Colors

TIP #9 – Save the Capture to a File

Saving the contents of a capture allows us to inspect it in greater detail later. To do this, go to File → Export and choose an export format from the list:

Save Wireshark Capture to File

TIP #10 – Practice with Capture Samples

If you think your network is “boring”, Wireshark provides a series of sample capture files that you can use to practice and learn. You can download these SampleCaptures and import them via the File → Import menu.

Summary

Wireshark is free and open source software, as you can see in the FAQs section of the official website. You can configure a capture filter either before or after starting an inspection.

In case you didn’t notice, the filter has an autocomplete feature that allows you to easily search for the most used options that you can customize later. With that, the sky is the limit!


VnStat PHP: A Web Based Interface for Monitoring Network Bandwidth Usage

VnStat PHP is a graphical interface application for the most famous console-mode network logger utility called “vnstat“. VnStat PHP is a graphical frontend to VnStat, used to view and monitor network traffic bandwidth usage reports in a nicely graphical format. It displays IN and OUT network traffic statistics hourly, daily, monthly, or as a full summary.

This article shows you how to install VnStat and VnStat PHP on Linux systems.

VnStat PHP Prerequisites

You need to install the following software packages on your system.

  1. VnStat : A command-line network bandwidth monitoring tool; it must be installed and configured, and should be collecting network bandwidth statistics.
  2. Apache : A web server to serve web pages.
  3. PHP 5 : A server-side scripting language for executing PHP scripts on the server.
  4. php-gd extension : A GD extension for serving graphic images.

Step 1: Installing and Configuring VnStat Command Line Tool

VnStat is a command-line network bandwidth monitoring utility which counts bandwidth (transmitted and received) on network devices and keeps the data in its own database.

VnStat is a third-party tool and can be installed by enabling the EPEL repository on Red Hat based systems. Once you’ve enabled it, you can install it using the yum command as shown below.

On RHEL/CentOS and Fedora
# yum install vnstat
On Debian/Ubuntu and Linux Mint

Debian users can simply use apt-get to install it.

$ sudo apt-get install vnstat

As I said, VnStat maintains its own database to keep all network information. To create a new database for the network interface called “eth0“, issue the following command. Make sure to replace the interface name as per your requirements.

# vnstat -i eth0

Error: Unable to read database "/var/lib/vnstat/eth0".
Info: -> A new database has been created.

If you get the above error, don’t worry about it – you are executing the command for the first time, so it creates a new database for eth0.

Now run the following command to update all enabled databases, or only a specific interface with the -i parameter as shown. It will generate IN and OUT traffic statistics for the eth0 interface.

# vnstat -u -i eth0

Next, add a crontab entry that runs every 5 minutes and updates the eth0 database to generate traffic statistics.

*/5 * * * * /usr/bin/vnstat -u >/dev/null 2>&1
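
Once some statistics have been collected, you can already query them from the command line – these are the same numbers the PHP frontend will graph:

# vnstat -h -i eth0		[Hourly statistics]
# vnstat -d -i eth0		[Daily statistics]
# vnstat -m -i eth0		[Monthly statistics]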

Step 2: Installing Apache, Php and Php-gd Extension

Install the following software packages with the help of package manager tool called “yum” for Red Hat based systems and “apt-get” for Debian based systems.

On RHEL/CentOS and Fedora
# yum install httpd php php-gd

Turn on Apache at system start-up and start the service.

# chkconfig httpd on
# service httpd start

Run the following “iptables” command to open Apache port “80” on firewall and then restart the service.

# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
# service iptables restart
On Debian/Ubuntu and Linux Mint
$ sudo apt-get install apache2 php5 php5-gd
$ sudo /etc/init.d/apache2 start

Open port 80 for Apache.

$ sudo ufw allow 80

Step 3: Downloading VnStat PHP Frontend

Download the latest VnStat PHP source tarball using the “wget command” as shown below, or visit THIS PAGE to grab the latest version.

# cd /tmp
# wget http://www.sqweek.com/sqweek/files/vnstat_php_frontend-1.5.1.tar.gz

Extract the source tarball using the “tar command” as shown.

# tar xvf vnstat_php_frontend-1.5.1.tar.gz

Step 4: Installing VnStat PHP Frontend

Once extracted, you will see a directory called “vnstat_php_frontend-1.5.1“. Copy the contents of this directory to the web server root location as a directory named vnstat, as shown below.

On RHEL/CentOS and Fedora
# cp -fr vnstat_php_frontend-1.5.1/ /var/www/html/vnstat

If SELinux is enabled on your system, run the “restorecon” command to restore the files’ default SELinux security contexts.

# restorecon -Rv /var/www/html/vnstat/
On Debian/Ubuntu and Linux Mint
# cp -fr vnstat_php_frontend-1.5.1/ /var/www/vnstat

Step 5: Configuring VnStat PHP Frontend

Configure it to match your setup. To do so, open the following file with the vi editor and change the parameters as shown below.

On RHEL/CentOS and Fedora
# vi /var/www/html/vnstat/config.php
On Debian/Ubuntu and Linux Mint
# vi /var/www/vnstat/config.php

Set your default locale and language.

// edit these to reflect your particular situation
$locale = 'en_US.UTF-8';
$language = 'en';

Define your network interfaces to be monitored.

// list of network interfaces monitored by vnStat
$iface_list = array('eth0', 'eth1');

You can set custom names for your network interfaces.

// optional names for interfaces
// if there's no name set for an interface then the interface identifier
// will be displayed instead
$iface_title['eth0'] = 'Internal';
$iface_title['eth1'] = 'External';

Save and close the file.

Step 6: Access VnStat PHP and View Graphs

Open your favourite browser and navigate to either of the following links. You will see fancy network graphs showing a summary of network bandwidth usage by hours, days and months.

http://localhost/vnstat/
http://your-ip-address/vnstat/
Sample Output

Install Vnstat PHP in Linux

VnStat PHP Network Summary

Reference Link

VnStat PHP Homepage


How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 7

If you are a person who is, or has been in the past, in charge of inspecting and analyzing system logs in Linux, you know what a nightmare that task can become if multiple services are being monitored simultaneously.

In days past, that task had to be done mostly manually, with each log type being handled separately. Fortunately, the combination of Elasticsearch, Logstash, and Kibana on the server side, along with Filebeat on the client side, makes that once difficult task look like a walk in the park today.

The first three components form what is called an ELK stack, whose main purpose is to collect logs from multiple servers at the same time (also known as centralized logging).

Suggested Read: 4 Good Open Source Log Monitoring and Management Tools for Linux

A built-in java-based web interface allows you to inspect logs quickly at a glance for easier comparison and troubleshooting. These client logs are sent to a central server by Filebeat, which can be described as a log shipping agent.

Let’s see how all of these pieces fit together. Our test environment will consist of the following machines:

Central Server: CentOS 7 (IP address: 192.168.0.29). 2 GB of RAM.
Client #1: CentOS 7 (IP address: 192.168.0.100). 1 GB of RAM.
Client #2: Debian 8 (IP address: 192.168.0.101). 1 GB of RAM.

Please note that the RAM values provided here are not strict prerequisites, but recommended values for a successful implementation of the ELK stack on the central server. Less RAM on the clients will not make much difference, if any at all.

Installing ELK Stack on the Server

Let’s begin by installing the ELK stack on the server, along with a brief explanation on what each component does:

  1. Elasticsearch stores the logs that are sent by the clients.
  2. Logstash processes those logs.
  3. Kibana provides the web interface that will help us to inspect and analyze the logs.

Install the following packages on the central server. First off, we will install Java JDK version 8 (update 102, the latest one at the time of this writing), which is a dependency of the ELK components.

You may want to check first in the Java downloads page here to see if there is a newer update available.

# yum update
# cd /opt
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jre-8u102-linux-x64.rpm"
# rpm -Uvh jre-8u102-linux-x64.rpm

Time to check whether the installation completed successfully:

# java -version
Check Java Version from Commandline

To install the latest versions of Elasticsearch, Logstash, and Kibana, we will have to create repositories for yum manually as follows:

Enable Elasticsearch Repository

1. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

2. Insert the following lines to the repository configuration file elasticsearch.repo:

/etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3. Install the Elasticsearch package.

# yum install elasticsearch

When the installation is complete, you will be prompted to start and enable elasticsearch:

Install Elasticsearch in Linux

4. Start and enable the service.

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch

5. Allow traffic through TCP port 9200 in your firewall:

# firewall-cmd --add-port=9200/tcp
# firewall-cmd --add-port=9200/tcp --permanent

6. Check if Elasticsearch responds to simple requests over HTTP:

# curl -X GET http://localhost:9200

The output of the above command should be similar to:

Verify Elasticsearch Installation

Make sure you complete the above steps and then proceed with Logstash. Since both Logstash and Kibana share the Elasticsearch GPG key, there is no need to re-import it before installing the packages.

Suggested Read: Manage System Logs (Configure, Rotate and Import Into Database) in CentOS 7

Enable Logstash Repository

7. Insert the following lines to the repository configuration file logstash.repo:

/etc/yum.repos.d/logstash.repo
[logstash]
name=Logstash
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

8. Install the Logstash package:

# yum install logstash

9. Add an SSL certificate based on the IP address of the ELK server by adding the following line below the [ v3_ca ] section in /etc/pki/tls/openssl.cnf:

[ v3_ca ]
subjectAltName = IP: 192.168.0.29
Add Elasticsearch Server IP Address

10. Generate a self-signed certificate valid for 3650 days:

# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

11. Configure Logstash input, output, and filter files:

Input: Create /etc/logstash/conf.d/input.conf and insert the following lines into it. This is necessary for Logstash to “learn” how to process beats coming from clients. Make sure the path to the certificate and key match the right paths as outlined in the previous step:

/etc/logstash/conf.d/input.conf
input {
  beats {
	port => 5044
	ssl => true
	ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
	ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Output (/etc/logstash/conf.d/output.conf) file:

/etc/logstash/conf.d/output.conf
output {
  elasticsearch {
	hosts => ["localhost:9200"]
	sniffing => true
	manage_template => false
	index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
	document_type => "%{[@metadata][type]}"
  }
}

Filter (/etc/logstash/conf.d/filter.conf) file. We will log syslog messages for simplicity:

/etc/logstash/conf.d/filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

12. Verify the Logstash configuration files.

# service logstash configtest
Verify Logstash Configuration

13. Start and enable logstash:

# systemctl daemon-reload
# systemctl start logstash
# systemctl enable logstash

14. Configure the firewall to allow Logstash to get the logs from the clients (TCP port 5044):

# firewall-cmd --add-port=5044/tcp
# firewall-cmd --add-port=5044/tcp --permanent

Enable Kibana Repository

15. Insert the following lines to the repository configuration file kibana.repo:

/etc/yum.repos.d/kibana.repo
[kibana]
name=Kibana repository
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

16. Install the Kibana package:

# yum install kibana

17. Start and enable Kibana.

# systemctl daemon-reload
# systemctl start kibana
# systemctl enable kibana

18. Make sure you can access Kibana’s web interface from another computer (allow traffic on TCP port 5601):

# firewall-cmd --add-port=5601/tcp
# firewall-cmd --add-port=5601/tcp --permanent

19. Launch Kibana (http://192.168.0.29:5601) to verify that you can access the web interface:

Access Kibana Web Interface

We will return here after we have installed and configured Filebeat on the clients.

Suggested Read: Monitor Server Logs in Real-Time with “Log.io” Tool in Linux

Install Filebeat on the Client Servers

We will show you how to do this for Client #1 (repeat for Client #2 afterwards, changing paths if applicable to your distribution).

1. Copy the SSL certificate from the server to the clients:

# scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.0.100:/etc/pki/tls/certs/

2. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

3. Create a repository for Filebeat (/etc/yum.repos.d/filebeat.repo) in CentOS based distributions:

/etc/yum.repos.d/filebeat.repo
[filebeat]
name=Filebeat for ELK clients
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1

4. Configure the source to install Filebeat on Debian and its derivatives:

# aptitude install apt-transport-https
# echo "deb https://packages.elastic.co/beats/apt stable main" > /etc/apt/sources.list.d/filebeat.list
# aptitude update

5. Install the Filebeat package:

# yum install filebeat        [On CentOS and based Distros]
# aptitude install filebeat   [On Debian and its derivatives]

6. Start and enable Filebeat:

# systemctl start filebeat
# systemctl enable filebeat

Configure Filebeat

A word of caution here. Filebeat configuration is stored in a YAML file, which requires strict indentation. Be careful with this as you edit /etc/filebeat/filebeat.yml as follows:

  1. Under paths, indicate which log files should be “shipped” to the ELK server.
  2. Under prospectors, set:
input_type: log
document_type: syslog
  3. Under output:
    1. Uncomment the line that begins with logstash.
    2. Indicate the IP address of your ELK server and the port where Logstash is listening in hosts.
    3. Make sure the path to the certificate points to the actual file you created in step 10 (Logstash section) above.

The above steps are illustrated in the following image:

Configure Filebeat in Client Servers
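
For reference, a minimal sketch of the relevant parts of /etc/filebeat/filebeat.yml is shown below. It assumes Filebeat 1.x (the version matching the repositories used in this article) and our example server IP; newer Filebeat releases use a slightly different layout (for example output.logstash with ssl.certificate_authorities), so adapt it to your version:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/log/secure
      input_type: log
      document_type: syslog
output:
  logstash:
    hosts: ["192.168.0.29:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]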

Save changes, and then restart Filebeat on the clients:

# systemctl restart filebeat

Once we have completed the above steps on the clients, feel free to proceed.

Testing Filebeat

In order to verify that the logs from the clients can be sent and received successfully, run the following command on the ELK server:

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

The output should be similar to (notice how messages from /var/log/messages and /var/log/secure are being received from client1 and client2):

Testing Filebeat

Otherwise, check the Filebeat configuration file for errors. Running:

# journalctl -xe

after attempting to restart Filebeat will point you to the offending line(s).

Testing Kibana

After we have verified that logs are being shipped by the clients and received successfully on the server, the first thing we will have to do in Kibana is configure an index pattern and set it as the default.

You can describe an index as a full database in a relational database context. We will go with filebeat-* (or you can use a more precise search criteria as explained in the official documentation).

Enter filebeat-* in the Index name or pattern field and then click Create:

Testing Kibana

Please note that you will be allowed to enter more fine-grained search criteria later. Next, click the star inside the green rectangle to configure it as the default index pattern:

Configure Default Kibana Index Pattern

Finally, in the Discover menu you will find several fields to add to the log visualization report. Just hover over them and click Add:

Add Log Visualization Report

The results will be shown in the central area of the screen as shown above. Feel free to play around (add and remove fields from the log report) to become familiar with Kibana.

By default, Kibana will display the records that were processed during the last 15 minutes (see upper right corner) but you can change that behavior by selecting another time frame:

Kibana Log Reports

Summary

In this article we have explained how to set up an ELK stack to collect the system logs sent by two clients, a CentOS 7 and a Debian 8 machine.

Now you can refer to the official Elasticsearch documentation and find more details on how to use this setup to inspect and analyze your logs more efficiently.

If you have any questions, don’t hesitate to ask. We look forward to hearing from you.


10 7zip (File Archive) Command Examples in Linux

7-Zip is a free open source, cross-platform, powerful, and fully-featured file archiver with a high compression ratio, for Windows. It has a powerful command line version that has been ported to Linux/POSIX systems.

It has a high compression ratio in 7z format with LZMA and LZMA2 compression, supports many other archive formats such as XZ, BZIP2, GZIP, TAR, ZIP and WIM for both packing and unpacking; AR, RAR, MBR, EXT, NTFS, FAT, GPT, HFS, ISO, RPM, LZMA, UEFI, Z, and many others for extracting only.

It provides strong AES-256 encryption in 7z and ZIP formats, and offers a compression ratio for ZIP and GZIP formats that is 2-10% better than those offered by PKZip and WinZip. It also comes with self-extracting capability for the 7z format and is localized in up to 87 languages.

How to Install 7zip in Linux

The port of 7zip to Linux systems is called p7zip; this package comes pre-installed on many mainstream Linux distributions. You need to install the p7zip-full package to get the 7z, 7za, and 7zr CLI utilities on your system, as follows.

Install 7zip on Debian, Ubuntu or Linux Mint

Debian-based Linux distributions come with three software packages related to 7zip: p7zip, p7zip-full and p7zip-rar. It is suggested to install the p7zip-full package, which supports many archive formats.

$ sudo apt-get install p7zip-full

Install 7zip on Fedora or CentOS/RHEL

Red Hat-based Linux distributions come with two packages related to 7zip: p7zip and p7zip-plugins. It is suggested to install both packages.

To install these two packages, you need to enable the EPEL repository on CentOS/RHEL distributions. On Fedora, there is no need to set up an additional repository.

$ sudo yum install p7zip p7zip-plugins

Once the 7zip packages are installed, you can move on to learn some useful 7zip command examples to pack or unpack various types of archives in the following section.

Learn 7zip Command Examples in Linux

1. To create a .7z archive file, use the "a" option. The supported archive formats for creation are 7z, XZ, GZIP, TAR, ZIP and BZIP2. If the given archive file already exists, it will “add” the files to the existing archive instead of overwriting it.

$ 7z a hyper.7z hyper_1.4.2_i386.deb

Create 7z Archive File in Linux

2. To extract a .7z archive file, use the "e" option, which will extract the archive into the present working directory.

$ 7z e hyper.7z

Extract 7z Archive File in Linux

3. To select an archive format, use the -t (format name) option, which allows you to select the archive format such as zip, gzip, bzip2 or tar (the default is 7z):

$ 7z a -tzip hyper.zip hyper_1.4.2_i386.deb

Create 7z Zip File in Linux

4. To see the list of files in an archive, use the "l" (list) function, which displays the type of archive format, method used, and the files in the archive, among other information, as shown.

$ 7z l hyper.7z

List 7z File Information

5. To test the integrity of an archive file, use "t" (test) function as shown.

$ 7z t hyper.7z

Check 7z File Integrity

6. To back up a directory, you should use the 7za utility, which preserves the owner/group of files, unlike 7z. The -si option enables reading of files from stdin.

$ tar -cf - tecmint_files | 7za a -si tecmint_files.tar.7z

7. To restore a backup, use the -so option, which sends output to stdout.

$ 7za x -so tecmint_files.tar.7z | tar xf -

8. To set a compression level, use the -mx option as shown.

$ tar -cf - tecmint_files | 7za a -si -mx=9 tecmint_files.tar.7z

9. To update an existing archive file or remove file(s) from an archive file, use "u" and "d" options, respectively.

$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>
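
For instance, using the archive created earlier (the file names here are only illustrative):

$ 7z u hyper.7z notes.txt                   [Add or update notes.txt inside hyper.7z]
$ 7z d hyper.7z hyper_1.4.2_i386.deb        [Delete the .deb file from the archive]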

10. To set a password on an archive file, use the -p{password_here} flag as shown.

$ 7za a -p{password_here} tecmint_secrets.tar.7z

For more information refer to the 7z man page, or go to the 7zip Homepage: https://www.7-zip.org/.

That’s all for now! In this article, we have explained 10 7zip (File Archive) command examples in Linux. Use the feedback form below to ask any questions or share your thoughts with us.


How To Parse And Pretty Print JSON With Linux Commandline Tools


JSON is a lightweight and language-independent data storage format, easy to integrate with most programming languages and also easy for humans to understand – when properly formatted, of course. The word JSON stands for JavaScript Object Notation. Though it starts with JavaScript, and is primarily used to exchange data between a server and a browser, it is now used in many fields, including embedded systems. Here we’re going to parse and pretty print JSON with command line tools on Linux. This is extremely useful for handling large JSON data in shell scripts, or for manipulating JSON data in a shell script.

What is pretty printing?

JSON data is structured to be somewhat more human readable. However, in most cases JSON data is stored in a single line, without even a line-ending character.

Obviously that’s not very convenient for reading and editing manually.

That’s when pretty printing is useful. The name is quite self-explanatory: re-formatting the JSON text so it is more legible to humans. This is known as JSON pretty printing.

Parse And Pretty Print JSON With Linux Commandline Tools

JSON data can be parsed with command line text processors like awk, sed and grep. In fact, JSON.awk is an awk script to do exactly that. However, there are some dedicated tools for the same purpose.

  1. jq or jshon, JSON parsers for the shell, both of them quite useful.

  2. Shell scripts like JSON.sh or jsonv.sh to parse JSON in the bash, zsh or dash shell.

  3. JSON.awk, a JSON parser awk script.

  4. Python modules like json.tool.

  5. underscore-cli, Node.js and JavaScript based.

In this tutorial I’m focusing only on jq, which is a quite powerful JSON parser for shells with advanced filtering and scripting capabilities.

JSON pretty printing

JSON data can come in a single line, nearly illegible for humans; to make it somewhat readable, that is where JSON pretty printing comes in.

Example: fetch data from jsonip.com to get your external IP address in JSON format, using curl or wget as below.

$ wget -cq http://jsonip.com/ -O -

The actual data looks like this:

{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}

Now pretty print it with jq:

$ wget -cq http://jsonip.com/ -O - | jq '.'

This should look like below, after filtering the result with jq.

{
   "ip": "111.222.333.444",
   "about": "/about",
   "Pro!": "http://getjsonip.com"
}

The same thing can be done with the Python json.tool module. Here is an example:

$ cat anything.json | python -m json.tool

This Python based solution should be fine for most users, but it’s not that useful where Python is not pre-installed or cannot be installed, like on embedded systems.

However, the json.tool Python module has a distinct advantage: it’s cross-platform. So, you can use it seamlessly on Windows, Linux or macOS.



How to parse JSON with jq

First, you need to install jq. It’s already packaged by most GNU/Linux distributions; install it with the respective package manager command.

On Arch Linux:

$ sudo pacman -S jq

On Debian, Ubuntu, Linux Mint:

$ sudo apt-get install jq

On Fedora:

$ sudo dnf install jq

On openSUSE:

$ sudo zypper install jq

For other OS or platforms, see the official installation instructions.

Basic filters and identifiers of jq

jq can read the JSON data either from stdin or from a file. You will use either depending on the situation.

The single symbol . is the most basic filter. These filters are also called object identifier-indexes. Using a single . along with jq basically pretty prints the input JSON file.

Single quotes – You don’t always have to use single quotes. But if you’re combining several filters in a single line, then you must use them.

Double quotes – You have to enclose any special character like @, #, $ within double quotes, as in this example: jq '.foo."@bar"'

Raw data print – If for any reason you need only the final parsed data, not enclosed within double quotes, use the -r flag with the jq command, like this: jq -r .foo.bar
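
For example, using the jsonip.com data shown earlier (the exact fields the site returns may of course change), note how the "Pro!" key needs double quotes inside the filter and how -r drops the quotes from the output:

$ wget -cq http://jsonip.com/ -O - | jq '."Pro!"'
"http://getjsonip.com"

$ wget -cq http://jsonip.com/ -O - | jq -r '."Pro!"'
http://getjsonip.com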

Parsing specific data

To filter out a specific part of the JSON, you have to look at the pretty printed JSON file’s data hierarchy.

An example of JSON data, from Wikipedia:

{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021"
  },
  "phoneNumber": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "fax",
      "number": "646 555-4567"
    }
  ],
  "gender": {
    "type": "male"
  }
}

I’m going to use this JSON data as an example in this tutorial, saved as sample.json.

Let’s say I want to filter out the address from sample.json file. So the command should be like:

$ jq .address sample.json

Sample output:

{
  "streetAddress": "21 2nd Street",
  "city": "New York",
  "state": "NY",
  "postalCode": "10021"
}

Again, let’s say I want the postal code; then I have to add another object identifier-index, i.e. another filter.

$ cat sample.json | jq .address.postalCode

Also note that filters are case sensitive and you have to use the exact same string to get meaningful output instead of null.

Parsing elements from JSON array

Elements of a JSON array are enclosed within square brackets, which is undoubtedly quite versatile.

To parse elements from an array, you have to use the [] identifier along with the other object identifier-indexes.

In this sample JSON data, the phone numbers are stored inside an array. To get all the contents of this array, you only have to use the brackets, as in this example.

$ jq .phoneNumber[] sample.json

Let’s say you just want the first element of the array; then use the array index numbers starting from 0. For the first item, use [0]; for the next items, increment by one each step.

$ jq .phoneNumber[0] sample.json

Scripting examples

Let’s say I want only the number for home, not the entire JSON array data. Here’s where scripting within the jq command comes in handy.

$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'

Here, first I’m piping the results of one filter to another, then using the select attribute to select a particular type of data, again piping the result to another filter.

Explaining every type of jq filter and scripting is beyond the scope and purpose of this tutorial. It’s highly suggested to read the jq manual for a better understanding.


How to Use Awk and Regular Expressions to Filter Text or String in Files

When we run certain commands in Unix/Linux to read or edit text from a string or file, we often try to filter the output down to a given section of interest. This is where using regular expressions comes in handy.

Read Also: 10 Useful Linux Chaining Operators with Practical Examples

What are Regular Expressions?

A regular expression can be defined as a string that represents several sequences of characters. One of the most important things about regular expressions is that they allow you to filter the output of a command or file, edit a section of a text or configuration file, and so on.

Features of Regular Expression

Regular expressions are made of:

  1. Ordinary characters such as space, underscore (_), A-Z, a-z, 0-9.
  2. Meta characters that are expanded to ordinary characters; they include:
    1. (.) matches any single character except a newline.
    2. (*) matches zero or more occurrences of the character immediately preceding it.
    3. [ character(s) ] matches any one of the characters specified in character(s); one can also use a hyphen (-) to mean a range of characters, such as [a-f], [1-5], and so on.
    4. ^ matches the beginning of a line in a file.
    5. $ matches the end of a line in a file.
    6. \ is an escape character.

In order to filter text, one has to use a text filtering tool such as awk. You can think of awk as a programming language of its own. But for the scope of this guide to using awk, we shall cover it as a simple command line filtering tool.

The general syntax of awk is:

# awk 'script' filename

Where 'script' is a set of commands that are understood by awk and executed on the file, filename.

It works by reading a given line in the file, makes a copy of the line and then executes the script on the line. This is repeated on all the lines in the file.

The 'script' is in the form '/pattern/ action' where pattern is a regular expression and the action is what awk will do when it finds the given pattern in a line.
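
As a quick illustration of the '/pattern/ action' form, the following command prints only the first field (the IP address) of every line in /etc/hosts that contains localhost:

# awk '/localhost/ {print $1}' /etc/hosts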

How to Use Awk Filtering Tool in Linux

In the following examples, we shall focus on the meta characters that we discussed above under the features of awk.

A simple example of using awk:

The example below prints all the lines in the file /etc/hosts since no pattern is given.

# awk '//{print}' /etc/hosts

Awk Prints all Lines in a File

Use Awk with Pattern:

In the example below, the pattern localhost has been given, so awk will match lines having localhost in the /etc/hosts file.

# awk '/localhost/{print}' /etc/hosts 

Awk Print Given Matching Line in a File

Using Awk with (.) wild card in a Pattern

The (.) will match strings containing loc, localhost, localnet in the example below.

That is to say, it matches l followed by any single character followed by c.

# awk '/l.c/{print}' /etc/hosts

Use Awk to Print Matching Strings in a File

Using Awk with (*) Character in a Pattern

It will match strings containing localhost, localnet, lines, capable, as in the example below:

# awk '/l*c/{print}' /etc/hosts

Use Awk to Match Strings in File

You will also realize that (*) tries to get you the longest possible match it can detect.

Let's look at a case that demonstrates this: take the regular expression t*t, which means match strings that start with letter t and end with t, in the line below:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint. 

You will get the following possibilities when you use the pattern /t*t/:

this is t
this is tecmint
this is tecmint, where you get t
this is tecmint, where you get the best good t
this is tecmint, where you get the best good tutorials, how t
this is tecmint, where you get the best good tutorials, how tos, guides, t
this is tecmint, where you get the best good tutorials, how tos, guides, tecmint

And the (*) in the /t*t/ wildcard allows awk to choose the last option:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint

Using Awk with set [ character(s) ]

Take for example the set [al1]; here awk will match all strings containing character a or l or 1 in a line in the file /etc/hosts.

# awk '/[al1]/{print}' /etc/hosts

Use Awk to Print Matching Character in File

The next example matches strings starting with either K or k followed by T:

# awk '/[Kk]T/{print}' /etc/hosts 

Use Awk to Print Matched String in File

Specifying Characters in a Range

Understand characters with awk:

  1. [0-9] means match a single number
  2. [a-z] means match a single lower case letter
  3. [A-Z] means match a single upper case letter
  4. [a-zA-Z] means match a single letter
  5. [a-zA-Z0-9] means match a single letter or number

Lets look at an example below:

# awk '/[0-9]/{print}' /etc/hosts 

Use Awk To Print Matching Numbers in File

All the lines from the file /etc/hosts contain at least a single number [0-9] in the above example.

Use Awk with (^) Meta Character

It matches all the lines that start with the pattern provided as in the example below:

# awk '/^fe/{print}' /etc/hosts
# awk '/^ff/{print}' /etc/hosts

Use Awk to Print All Matching Lines with Pattern

Use Awk with ($) Meta Character

It matches all the lines that end with the pattern provided:

# awk '/ab$/{print}' /etc/hosts
# awk '/ost$/{print}' /etc/hosts
# awk '/rs$/{print}' /etc/hosts

Use Awk to Print Given Pattern String

Use Awk with (\) Escape Character

It allows you to take the character following it as a literal, that is to say, to consider it just as it is.

In the example below, the first command prints out all lines in the file; the second command prints out nothing, because I want to match a line that has $25.00, but no escape character is used.

The third command is correct, since an escape character has been used to read $ as it is.

# awk '//{print}' deals.txt
# awk '/$25.00/{print}' deals.txt
# awk '/\$25.00/{print}' deals.txt

Use Awk with Escape Character

Summary

That is not all there is to the awk command line filtering tool; the examples above are the basic operations of awk. In the next parts we shall advance to the more complex features of awk. Thanks for reading through, and for any additions or clarifications, post a comment in the comments section.


NLTK Tutorial in Python – Linux Hint

The era of data is already here. The rate at which data is generated today is higher than ever, and it is always growing. Most of the time, the people who deal with data every day work mostly with unstructured textual data. Some of this data has associated elements like images, videos, audio, etc. Some of the sources of this data are websites, daily blogs, news websites and many more. Analysing all of this data at a faster rate is necessary and many times crucial too.

For example, a business might run a text analysis engine which processes the tweets about its business mentioning the company name and location, and analyses the emotion related to each tweet. Correct actions can be taken faster if that business gets to know about growing negative tweets in a particular location, to save itself from a blunder or anything else. Another common example is YouTube. The YouTube admins and moderators get to know about the effect of a video depending on the type of comments made on the video or the video chat messages. This helps them find inappropriate content on the website much faster, because they have eradicated the manual work and employed automated smart text analysis bots.

In this lesson, we will study some of the concepts related to text analysis with the help of NLTK library in Python. Some of these concepts will involve:

  • Tokenization, how to break a piece of text into words, sentences
  • Avoiding stop words based on English language
  • Performing stemming and lemmatization on a piece of text
  • Identifying the tokens to be analysed

NLP will be the main area of focus in this lesson as it is applicable to enormous real-life scenarios where it can solve big and crucial problems. If you think this sounds complex, well, it does, but the concepts are equally easy to understand if you try the examples side by side. Let’s jump into installing NLTK on your machine to get started.

Installing NLTK

Just a note before starting: you can use a virtual environment for this lesson, which can be made with the following command:

python -m virtualenv nltk
source nltk/bin/activate

Once the virtual environment is active, you can install the NLTK library within the virtual env so that the examples we create next can be executed:

pip install nltk

We will make use of Anaconda and Jupyter in this lesson. If you want to install it on your machine, look at the lesson which describes “How to Install Anaconda Python on Ubuntu 18.04 LTS” and share your feedback if you face any issues. To install NLTK with Anaconda, use the following command in the terminal from Anaconda:

conda install -c anaconda nltk

We see something like this when we execute the above command:

Once all of the packages needed are installed and done, we can get started with using the NLTK library with the following import statement:

import nltk

Let’s get started with basic NLTK examples now that we have the prerequisites packages installed.

Tokenization

We will start with Tokenization which is the first step in performing text analysis. A token can be any smaller part of a piece of text which can be analysed. There are two types of Tokenization which can be performed with NLTK:

  • Sentence Tokenization
  • Word Tokenization

You can guess what happens on each of the Tokenization so let’s dive into code examples.

Sentence Tokenization

As the name reflects, the sentence tokenizer breaks a piece of text into sentences. Let’s try a simple code snippet for the same, where we make use of a text we picked from an Apache Kafka tutorial. We will perform the necessary imports:

import nltk
from nltk.tokenize import sent_tokenize

Please note that you might face an error due to a missing dependency for nltk called punkt. Add the following line right after the imports in the program to avoid any warnings:

nltk.download('punkt')

For me, it gave the following output:

Next, we make use of the sentence tokenizer we imported:

text = """A Topic in Kafka is something where a message is sent. The consumer
applications which are interested in that topic pulls the message inside that
topic and can do anything with that data. Up to a specific time, any number of
consumer applications can pull this message any number of times."""

sentences = sent_tokenize(text)
print(sentences)

We see something like this when we execute the above script:

As expected, the text was correctly organised into sentences.

Word Tokenization

As the name reflects, the word tokenizer breaks a piece of text into words. Let’s try a simple code snippet for the same, with the same text as in the previous example:

from nltk.tokenize import word_tokenize

words = word_tokenize(text)
print(words)

We see something like this when we execute the above script:

As expected, the text was correctly organised into words.

Frequency Distribution

Now that we have tokenized the text, we can also calculate the frequency of each word in the text we used. It is very simple to do with NLTK; here is the code snippet we use:

from nltk.probability import FreqDist

distribution = FreqDist(words)
print(distribution)

We see something like this when we execute the above script:

Next, we can find most common words in the text with a simple function which accepts the number of words to show:

# Most common words
distribution.most_common(2)

We see something like this when we execute the above script:

Finally, we can make a frequency distribution plot to lay out the words and their counts in the given text and clearly understand the distribution of words:
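
A minimal sketch of that plot call is shown below, assuming the matplotlib package is installed (FreqDist.plot() depends on it); the argument limits the plot to the 30 most common words:

# Plot the 30 most common words and their counts (requires matplotlib)
distribution.plot(30, cumulative=False)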

Stopwords

Just like when we talk to another person over a call, there tends to be some noise on the line which is unwanted information. In the same manner, text from the real world also contains noise, which is termed Stopwords. Stopwords can vary from language to language but they can be easily identified. Some of the Stopwords in the English language are: is, are, a, the, an, etc.

We can look at words which are considered as Stopwords by NLTK for English language with the following code snippet:

from nltk.corpus import stopwords
nltk.download('stopwords')

language = "english"
stop_words = set(stopwords.words(language))
print(stop_words)

As the set of stop words can of course be big, it is stored as a separate dataset which can be downloaded with NLTK as shown above. We see something like this when we execute the above script:

These stop words should be removed from the text if you want to perform a precise text analysis for the piece of text provided. Let’s remove the stop words from our textual tokens:

filtered_words = []

for word in words:
    if word not in stop_words:
        filtered_words.append(word)

filtered_words

We see something like this when we execute the above script:

Word Stemming

A stem of a word is the base of that word. For example, the words connection, connected and connecting all share the base "connect".

We will perform stemming upon the filtered words from which we removed stop words in the last section. Let’s write a simple code snippet where we use NLTK’s stemmer to perform the operation:

from nltk.stem import PorterStemmer
ps = PorterStemmer()

stemmed_words = []
for word in filtered_words:
    stemmed_words.append(ps.stem(word))

print("Stemmed Sentence:", stemmed_words)

We see something like this when we execute the above script:

POS Tagging

The next step in textual analysis, after stemming, is to identify and group each word in terms of its value, i.e. whether each word is a noun or a verb or something else. This is termed Part of Speech tagging. Let’s perform POS tagging now:

tokens=nltk.word_tokenize(sentences[0])
print(tokens)

We see something like this when we execute the above script:

Now, we can perform the tagging, for which we will have to download another dataset to identify the correct tags:

nltk.download('averaged_perceptron_tagger')
nltk.pos_tag(tokens)


Here is the output of the tagging:

Now that we have finally identified the tagged words, this is the dataset on which we can perform sentiment analysis to identify the emotions behind a sentence.
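
As a small preview of that next step (one possible approach, not covered further in this lesson), NLTK ships a pre-trained VADER sentiment analyzer which works on raw sentences; the example sentence below is only illustrative:

nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
# Returns negative/neutral/positive/compound scores for the sentence
print(sia.polarity_scores("This tutorial is really helpful and easy to follow."))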

Conclusion

In this lesson, we looked at an excellent natural language package, NLTK, which allows us to work with unstructured textual data, identify stop words and perform deeper analysis by preparing a sharp data set for text analysis with libraries like sklearn.

Find all of the source code used in this lesson on Github.


10 Useful Tips for Writing Effective Bash Scripts in Linux

Shell scripting is the easiest form of programming you can learn/do in Linux. More so, it is a required skill for system administration: automating tasks and developing new simple utilities/tools, just to mention a few uses.

In this article, we will share 10 useful and practical tips for writing effective and reliable bash scripts and they include:

1. Always Use Comments in Scripts

This is a recommended practice which applies not only to shell scripting but to all other kinds of programming. Writing comments in a script helps you, or someone else going through your script, understand what the different parts of the script do.

For starters, comments are defined using the # sign.

#TecMint is the best site for all kind of Linux articles

2. Make a Script Exit When a Command Fails

Sometimes bash may continue to execute a script even when a certain command fails, thus affecting the rest of the script (which may eventually result in logical errors). Use the line below to exit a script when a command fails:

#let script exit if a command fails
set -o errexit 
OR
set -e

3. Make a Script exit When Bash Uses Undeclared Variable

Bash may also try to use an undeclared variable, which could cause a logical error. Therefore, use the following line to instruct bash to exit a script when it attempts to use an undeclared variable:

#let script exit if an unset variable is used
set -o nounset
OR
set -u
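As a quick sketch (the variable name is made up for illustration), the script below stops with an "unbound variable" error because undefined_var was never assigned:

#!/bin/bash
#let script exit if an unset variable is used
set -o nounset

# undefined_var was never declared, so bash exits here with an error
echo "$undefined_var"
echo "This line is never reached"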

4. Use Double Quotes to Reference Variables

Using double quotes while referencing (using the value of) a variable helps to prevent word splitting (on whitespace) and unnecessary globbing (recognizing and expanding wildcards).

Check out the example below:

#!/bin/bash
#let script exit if a command fails
set -o errexit 

#let script exit if an unset variable is used
set -o nounset

echo "Names without double quotes" 
echo
names="Tecmint FOSSMint Linusay"
for name in $names; do
        echo "$name"
done
echo

echo "Names with double quotes" 
echo
for name in "$names"; do
        echo "$name"
done

exit 0

Save the file and exit, make the script executable (e.g. with chmod +x names.sh), then run it as follows:

$ ./names.sh
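With the sample values above, the output should look roughly like this: the unquoted loop splits the string on whitespace and prints each name on its own line, while the quoted loop treats the whole string as a single item.

Names without double quotes

Tecmint
FOSSMint
Linusay

Names with double quotes

Tecmint FOSSMint Linusay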

Use Double Quotes in Scripts

5. Use functions in Scripts

Except for very small scripts (with a few lines of code), always remember to use functions to modularize your code and make scripts more readable and reusable.

The syntax for writing functions is as follows:

function check_root(){
	command1; 
	command2;
}

OR
check_root(){
	command1; 
	command2;
}

For a single-line function, use command termination characters (semicolons) after each command, like this:

check_root(){ command1; command2; }
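As a concrete sketch of the check_root() idea (assuming the usual convention that the root user has an effective UID of 0, exposed in bash as the EUID variable), the function could look like this:

#!/bin/bash

# exit with an error unless the script is run as root (EUID 0)
check_root(){
	if [ "$EUID" -ne 0 ]; then
		echo "You need to run this script as root" >&2
		exit 1
	fi
}

check_root
echo "Running as root, continuing..."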

6. Use = instead of == for String Comparisons

Note that == is a synonym for =, therefore only use a single = for string comparisons, for instance:

value1="tecmint.com"
value2="fossmint.com"

if [ "$value1" = "$value2" ]; then
	echo "The values match"
fi

7. Use $(command) instead of legacy `command` for Substitution

Command substitution replaces a command with its output. Use $(command) instead of backquotes `command` for command substitution.

This is even recommended by the shellcheck tool (which shows warnings and suggestions for shell scripts). For example:

user=`echo "$UID"`	# legacy form
user=$(echo "$UID")	# recommended form
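As a slightly more realistic sketch (the variable name and the pipeline are just an example), command substitution can capture the output of a whole pipeline:

# store the sorted list of currently logged-in users in a variable
logged_users=$(who | awk '{print $1}' | sort -u)
echo "$logged_users"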

8. Use Read-only to Declare Static Variables

A static variable doesn’t change; its value can not be altered once it’s defined in a script:

readonly passwd_file="/etc/passwd"
readonly group_file="/etc/group"
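If a later line tries to reassign one of these variables, bash refuses the assignment and reports an error, as in this minimal sketch:

readonly passwd_file="/etc/passwd"

# bash rejects this with an error like: passwd_file: readonly variable
passwd_file="/tmp/fake_passwd"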

9. Use Uppercase Names for ENVIRONMENT Variables and Lowercase for Custom Variables

All bash environment variables are named with uppercase letters, therefore use lowercase letters to name your custom variables to avoid variable name conflicts:

#define custom variables using lowercase and use uppercase for env variables
nikto_file="$HOME/Downloads/nikto-master/program/nikto.pl"
perl “$nikto_file” -h  “$1”

10. Always Perform Debugging for Long Scripts

If you are writing bash scripts with thousands of lines of code, finding errors may become a nightmare. To easily fix things before executing a script, perform some debugging. Master this tip by reading through the guides provided below:

  1. How To Enable Shell Script Debugging Mode in Linux
  2. How to Perform Syntax Checking Debugging Mode in Shell Scripts
  3. How to Trace Execution of Commands in Shell Script with Shell Tracing
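In addition to the guides above, two built-in bash options cover the most common debugging needs (script.sh below is just a placeholder name):

# check the script for syntax errors without executing it
$ bash -n script.sh

# trace every command as it is executed (shell tracing)
$ bash -x script.sh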

That’s all! Do you have any other best bash scripting practices to share? If yes, then use the comment form below to do that.

Source

10 Practical Examples Using Wildcards to Match Filenames in Linux

Wildcards (also referred to as meta characters) are symbols or special characters that represent other characters. You can use them with any command such as the ls command or rm command to list or remove files matching given criteria, respectively.

Read Also: 10 Useful Practical Examples on Chaining Operators in Linux

These wildcards are interpreted by the shell and the results are returned to the command you run. There are three main wildcards in Linux:

  • An asterisk (*) – matches zero or more occurrences of any character, i.e. any sequence of characters, including none.
  • Question mark (?) – matches a single occurrence of any character.
  • Bracketed characters ([ ]) – matches any one of the characters enclosed in the square brackets. The set may contain different types of characters: numbers, letters, other special characters etc.

You need to carefully choose which wildcard to use to match correct filenames: it is also possible to combine all of them in one operation as explained in the examples below.

How to Match Filenames Using Wildcards in Linux

For the purpose of this article, we will use the following files to demonstrate each example.

createbackup.sh  list.sh  lspace.sh        speaker.sh
listopen.sh      lost.sh  rename-files.sh  topprocs.sh
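If you want to follow along, you can recreate these empty sample files with the touch command:

$ touch createbackup.sh listopen.sh list.sh lost.sh lspace.sh rename-files.sh speaker.sh topprocs.sh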

1. This command matches all files with names starting with l (the prefix), followed by zero or more occurrences of any character.

$ ls -l l*	

List Files with Character

2. This example shows another use of * to move all files whose names start with users-0, followed by zero or more occurrences of any character, into the users-info directory.

$ mkdir -p users-info
$ ls users-0*
$ mv -v users-0* users-info/	# Option -v flag enables verbose output

List and Copy All Files

3. The following command matches all files with names beginning with l followed by any single character and ending with st.sh (which is the suffix).

$ ls l?st.sh	

Match File with Character Name

4. The command below matches all files with names starting with l, followed by any one of the characters in the square brackets, and ending with st.sh.

$ ls l[abdcio]st.sh 

Matching Files with Names

How to Combine Wildcards to Match Filenames in Linux

You can combine wildcards to build a complex filename matching criteria as described in the following examples.

5. This command will match all filenames starting with any two characters, followed by st, and then zero or more occurrences of any character.

$ ls
$ ls ??st*

Match File Names with Prefix

6. This example matches filenames starting with any one of the characters [clst], followed by zero or more occurrences of any character.

$ ls
$ ls [clst]*

Match Files with Characters

7. In this example, only filenames starting with any one of the characters [clst], followed by one of [io], then any single character, then a t, and lastly zero or more occurrences of any character will be listed.

$ ls
$ ls [clst][io]?t*

List Files with Multiple Characters

8. Here, filenames containing the letters tar anywhere in the name (preceded and followed by zero or more occurrences of any character) will be removed.

$ ls
$ rm *tar*
$ ls

Remove Files with Character Letters

How to Match a Set of Characters in Linux

9. Now let's look at how to specify a set of characters. Consider the filenames below containing system users' information.

$ ls

users-111.list  users-1AA.list  users-22A.list  users-2aB.txt   users-2ba.txt
users-111.txt   users-1AA.txt   users-22A.txt   users-2AB.txt   users-2bA.txt
users-11A.txt   users-1AB.list  users-2aA.txt   users-2ba.list
users-12A.txt   users-1AB.txt   users-2AB.list  users-2bA.list

This command will match all files whose names start with users-, followed by a number, then a lower case letter or a number, then a number, and end with zero or more occurrences of any character.

$ ls users-[0-9][a-z0-9][0-9]*

The next command matches filenames beginning with users-, followed by a number, then a lower or upper case letter or a number, then a number, and ending with zero or more occurrences of any character.

$ ls users-[0-9][a-zA-Z0-9][0-9]*

The following command matches all filenames beginning with users-, followed by a number, then a lower or upper case letter or a number, then a lower or upper case letter, and ending with zero or more occurrences of any character.

$ ls users-[0-9][a-zA-Z0-9][a-zA-Z]*

Match Characters in Filenames

How to Negate a Set of Characters in Linux

10. You can as well negate a set of characters using the ! symbol. The following command lists all filenames starting with users-, followed by a number, then any valid filename character apart from a number, then a lower or upper case letter, and ending with zero or more occurrences of any character.

$ ls users-[0-9][!0-9][a-zA-Z]*

That’s all for now! If you have tried out the above examples, you should now have a good understanding of how wildcards work to match filenames in Linux.

You might also like to read the following articles that show examples of using wildcards in Linux:

  1. How to Extract Tar Files to Specific or Different Directory in Linux
  2. 3 Ways to Delete All Files in a Directory Except One or Few Files with Extensions
  3. 10 Useful Tips for Writing Effective Bash Scripts in Linux
  4. How to Use Awk and Regular Expressions to Filter Text or String in Files

If you have anything to share or questions to ask, use the comment form below.

Source
