HTTPie – A Modern HTTP Client Similar to Curl and Wget Commands

HTTPie (pronounced aitch-tee-tee-pie) is a cURL-like, modern, user-friendly, and cross-platform command line HTTP client written in Python. It is designed to make CLI interaction with web services as easy and human-friendly as possible.

HTTPie – A Command Line HTTP Client

It provides a simple http command that enables users to send arbitrary HTTP requests using a straightforward, natural syntax. It is used primarily for testing, painless debugging, and general interaction with HTTP servers, web services, and RESTful APIs.

  • HTTPie comes with an intuitive UI and supports JSON.
  • Expressive and intuitive command syntax.
  • Syntax highlighting, formatted and colorized terminal output.
  • HTTPS, proxies, and authentication support.
  • Support for forms and file uploads.
  • Support for arbitrary request data and headers.
  • Wget-like downloads and extensions.
  • Supports Python 2.7 and 3.x.

In this article, we will show how to install and use httpie with some basic examples in Linux.

How to Install and Use HTTPie in Linux

Most Linux distributions provide an HTTPie package that can be easily installed using the default system package manager, for example:

# apt-get install httpie  [On Debian/Ubuntu]
# dnf install httpie      [On Fedora]
# yum install httpie      [On CentOS/RHEL]
# pacman -S httpie        [On Arch Linux]

Once installed, the syntax for using httpie is:

$ http [options] [METHOD] URL [ITEM [ITEM]]

The most basic usage of httpie is to provide it a URL as an argument:

$ http example.com

Basic HTTPie Usage

Now let’s see some basic usage of the httpie command with examples.

Send an HTTP Method

You can specify an HTTP method in the request; for example, we will send the GET method, which is used to request data from a specified resource. Note that the name of the HTTP method comes right before the URL argument.

$ http GET tecmint.lan

Send GET HTTP Method

Upload a File

This example shows how to upload a file to transfer.sh using input redirection.

$ http https://transfer.sh < file.txt

Download a File

You can download a file as shown.

$ http https://transfer.sh/Vq3Kg/file.txt > file.txt		#using output redirection
OR
$ http --download https://transfer.sh/Vq3Kg/file.txt  	        #using wget format

Submit a Form

You can also submit data to a form as shown.

$ http --form POST tecmint.lan date='Hello World'
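Without the --form flag, HTTPie serializes key=value items as a JSON body instead, in line with its built-in JSON support. A minimal sketch using the same test host as above, which sends {"date": "Hello World"} as JSON:

$ http POST tecmint.lan date='Hello World'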

View Request Details

To see the request that is being sent, use the -v option, for example:

$ http -v --form POST tecmint.lan date='Hello World'

View HTTP Request Details

Basic HTTP Auth

HTTPie also supports basic HTTP authentication from the CLI in the form:

$ http -a username:password http://tecmint.lan/admin/

Custom HTTP Headers

You can also define custom HTTP headers using the Header:Value notation. We can test this using the following URL, which returns the request headers. Here, we have defined a custom User-Agent called ‘TEST 1.0’:

$ http GET https://httpbin.org/headers User-Agent:'TEST 1.0'

Custom HTTP Headers

See a complete list of usage options by running:

$ http --help
OR
$ man http

You can find more usage examples from the HTTPie Github repository: https://github.com/jakubroztocil/httpie.

HTTPie is a cURL-like, modern, user-friendly command line HTTP client with a simple, natural syntax and colorized output. In this article, we have shown how to install and use httpie in Linux. If you have any questions, reach us via the comment form below.

Source

11 Cron Scheduling Task Examples in Linux

In this article we are going to review how to schedule and run tasks in the background automatically at regular intervals using the crontab command. Dealing with frequent jobs manually is a daunting task for a system administrator; such processes can be scheduled and run automatically in the background, without human intervention, using the cron daemon on Linux or Unix-like operating systems.

For instance, you can automate processes like backups, scheduled updates, synchronization of files, and many more. Cron is a daemon for running scheduled tasks: it wakes up every minute and checks for scheduled tasks in the crontab. Crontab (CRON TABle) is a table where we can schedule such repeated tasks.

Tip: Each user can have their own crontab to create, modify and delete tasks. By default cron is enabled for all users; to restrict a user, add their name to the /etc/cron.deny file.

11 Cron Command Examples in Linux


A crontab file consists of one command per line with six fields, separated by spaces or tabs. The first five fields specify the time to run the task; the last field is the command itself.

  1. Minute (holds values between 0-59)
  2. Hour (holds values between 0-23)
  3. Day of Month (holds values between 1-31)
  4. Month of the year (holds values between 1-12 or Jan-Dec; you can use the first three letters of each month’s name, e.g. Jan or Jun)
  5. Day of week (holds values between 0-6 or Sun-Sat; here too you can use the first three letters of each day’s name, e.g. Sun or Wed)
  6. Command
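For example, the following entry would run a hypothetical backup script at 2:30 am every Monday (the script path is illustrative):

30 2 * * 1 /usr/local/bin/backup.sh

Reading the fields left to right: minute 30, hour 2, any day of the month, any month, day of week 1 (Monday).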

1. List Crontab Entries

List the scheduled tasks of the current user with the crontab command's -l option.

# crontab -l

00 10 * * * /bin/ls >/ls.txt

2. Edit Crontab Entries

To edit a crontab entry, use the -e option as shown below. This will open the scheduled jobs in the VI editor. Make the necessary changes and quit by pressing the :wq keys, which saves the changes automatically.

# crontab -e

3. List Scheduled Cron Jobs

To list the scheduled jobs of a particular user called tecmint, use the -u (user) and -l (list) options.

# crontab -u tecmint -l

no crontab for tecmint

Note: Only the root user has complete privileges to see other users’ crontab entries. Normal users can’t view others’ entries.

4. Remove Crontab Entry

Caution: crontab with the -r parameter removes all scheduled jobs without asking for confirmation. Use the -i option if you want to be prompted before the crontab is deleted.

# crontab -r

5. Prompt Before Deleting Crontab

crontab with the -i option will prompt for confirmation before deleting the user’s crontab.

# crontab -i -r

crontab: really delete root's crontab?

6. Allowed Special Characters (*, -, /, ,)

  1. Asterisk (*) – Matches all values in the field, i.e. any possible value.
  2. Hyphen (-) – Defines a range.
  3. Slash (/) – Specifies an increment of a range; for example, */10 in the first field means every ten minutes.
  4. Comma (,) – Separates items in a list.
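A few example entries combining these characters (the script path is illustrative):

*/10 * * * * /usr/local/bin/task.sh    ## every ten minutes
0 9-17 * * 1-5 /usr/local/bin/task.sh  ## on the hour, from 9 am to 5 pm, Monday to Friday
0 0 1,15 * * /usr/local/bin/task.sh    ## at midnight on the 1st and 15th of each month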

7. System Wide Cron Schedule

System administrators can use the predefined cron directories shown below.

  1. /etc/cron.d
  2. /etc/cron.daily
  3. /etc/cron.hourly
  4. /etc/cron.monthly
  5. /etc/cron.weekly

8. Schedule a Job for a Specific Time

The below job deletes empty files and directories from /tmp at 12:30 am daily. Note that the extra user-name field (root in the example below) only applies to system crontabs such as /etc/crontab and files under /etc/cron.d, where it specifies which user runs the job.

# crontab -e

30 0 * * *   root   find /tmp -type f -empty -delete

9. Special Strings for Common Schedule

Strings      Meaning
@reboot      Run once, when the system reboots.
@daily       Run once per day (you may also use @midnight).
@weekly      Run once per week.
@yearly      Run once per year (you can also use the @annually keyword).

To use one of these, replace the five time fields of the cron entry with the keyword.
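For example, the following hypothetical entry runs a cleanup script once per day, with the single keyword standing in for all five time fields:

@daily /usr/local/bin/cleanup.sh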

10. Multiple Commands with Double Ampersand (&&)

In the below example, command1 and command2 will run daily.

# crontab -e

@daily <command1> && <command2>

11. Disable Email Notifications

By default, cron sends mail to the user account executing the cron job. If you want to disable this, add your cron job similar to the below example: appending >/dev/null 2>&1 at the end of the line redirects all output of the job to /dev/null.

[root@tecmint ~]# crontab -e
* * * * * <command> >/dev/null 2>&1

Conclusion: Automating tasks helps us perform our work in better, error-free and more efficient ways. You may refer to the crontab manual page for more information by typing the ‘man crontab‘ command in your terminal.

Source

How to Setup and Manage Log Rotation Using Logrotate in Linux

One of the most interesting (and perhaps one of the most important as well) directories in a Linux system is /var/log. According to the Filesystem Hierarchy Standard, the activity of most services running in the system is written to a file inside this directory or one of its subdirectories.

Such files are known as logs and are the key to examining how the system is operating (and how it has behaved in the past). Logs are also the first source of information where administrators and engineers look while troubleshooting.

If we look at the contents of /var/log on a CentOS/RHEL/Fedora system and on a Debian/Ubuntu system (for variety), we will see the following log files and subdirectories.

Please note that the result may be somewhat different in your case depending on the services running on your system(s) and the time they have been running.

In RHEL/CentOS and Fedora

# ls /var/log

Log Files and Directories under CentOS 7

In Debian and Ubuntu

# ls /var/log

Log Files and Directories in Debian 8

In both cases, we can observe that some of the log names end, as expected, in “log”, while others are either renamed using a date (for example, maillog-20160822 on CentOS) or compressed (consider auth.log.2.gz and mysql.log.1.gz on Debian).

This behavior is not fixed by the chosen distribution; it can be changed at will using directives in the configuration files, as we will see in this article.

If logs were kept forever, they would eventually end up filling the filesystem where /var/log resides. In order to prevent that, the system administrator can use a nice utility called logrotate to clean up the logs on a periodic basis.

In a few words, logrotate will rename or compress the main log when a condition is met (more about that in a minute) so that the next event is recorded in an empty file.

In addition, it will remove “old” log files and will keep the most recent ones. Of course, we get to decide what “old” means and how often we want logrotate to clean up the logs for us.

Installing Logrotate in Linux

To install logrotate, just use your package manager:

---------- On Debian and Ubuntu ---------- 
# aptitude update && aptitude install logrotate 

---------- On CentOS, RHEL and Fedora ---------- 
# yum update && yum install logrotate

It is worth noting that the main configuration file (/etc/logrotate.conf) may indicate that other, more specific settings are to be placed in individual .conf files inside /etc/logrotate.d.

Suggested Read: Manage System Logs (Configure, Rotate and Import Into Database) Using Logrotate

This will be the case if and only if the following line exists and is not commented out:

include /etc/logrotate.d

We will stick with this approach, as it will help us to keep things in order, and use the Debian box for the following examples.

Options

Being a very versatile tool, logrotate provides plenty of directives to help us configure when and how the logs will be rotated, and what should happen right afterwards.

Let’s insert the following contents in /etc/logrotate.d/apache2.conf (note that most likely you will have to create that file) and examine each line to indicate its purpose:

apache2.conf
/var/log/apache2/* {
    weekly
    rotate 3
    size 10M
    compress
    delaycompress
}

The first line indicates that the directives inside the block apply to all logs inside /var/log/apache2:

  1. weekly means that the tool will attempt to rotate the logs on a weekly basis. Other possible values are daily and monthly.
  2. rotate 3 indicates that only 3 rotated logs should be kept. Thus, the oldest file will be removed on the fourth subsequent run.
  3. size 10M sets the minimum size for the rotation to take place to 10M. In other words, each log will not be rotated until it reaches 10MB.
  4. compress and delaycompress are used to tell that all rotated logs, with the exception of the most recent one, should be compressed.

Let’s execute a dry-run to see what logrotate would do if it was actually executed now. Use the -d option followed by the configuration file (you can actually run logrotate by omitting this option):

# logrotate -d /etc/logrotate.d/apache2.conf

The results are shown below:

Rotate Apache Logs with Logrotate

Instead of compressing the logs, we could rename them after the date when they were rotated. To do that, we will use the dateext directive. If our date format is other than the default yyyymmdd, we can specify it using dateformat.

Suggested Read: Install ‘atop’ to Monitor Logging Activity of Linux System Processes

Note that we can even prevent the rotation from happening if the log is empty with notifempty. In addition, let’s tell logrotate to mail the rotated log to the system administrator (gabriel@mydomain.com in this case) for his / her reference (this will require a mail server to be set up, which is out of the scope of this article).

If you want to get mails from logrotate, you can set up the Postfix mail server as shown here: Install Postfix Mail Server

This time we will use /etc/logrotate.d/squid.conf to only rotate /var/log/squid/access.log:

squid.conf
/var/log/squid/access.log {
    monthly
    create 0644 root root
    rotate 5
    size=1M
    dateext
    dateformat -%d%m%Y
    notifempty
    mail gabriel@mydomain.com
}

As we can see in the image below, this log did not need to be rotated. However, when the size condition is met (size=1M), the rotated log will be renamed access.log-25082016 (if the log was rotated on August 25, 2016) and the main log (access.log) will be re-created with access permissions set to 0644 and with root as owner and group owner.

Finally, when the number of rotated logs exceeds 5, the oldest log will be mailed to gabriel@mydomain.com.

Rotate Squid Logs with Logrotate

Now let’s suppose you want to run a custom command when the rotation takes place. To do that, place the line with that command between the postrotate and endscript directives.

For example, let’s suppose we want to send an email to root when any of the logs inside /var/log/myservice gets rotated. Let’s add a postrotate block to /etc/logrotate.d/squid.conf:

squid.conf
/var/log/myservice/* {
    monthly
    create 0644 root root
    rotate 5
    size=1M
    postrotate
        echo "A rotation just took place." | mail root
    endscript
}
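As before, a dry run can confirm that the new directives parse correctly before a real rotation takes place:

# logrotate -d /etc/logrotate.d/squid.conf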

Last, but not least, it is important to note that options present in /etc/logrotate.d/*.conf override those in the main configuration file in case of conflicts.

Logrotate and Cron

By default, the installation of logrotate creates a crontab file inside /etc/cron.daily named logrotate. As is the case with the other crontab files inside this directory, it will be executed daily, starting at 6:25 am, if anacron is not installed.

Suggested Read: 11 Cron Scheduling Task Examples in Linux

 

Otherwise, the execution will begin around 7:35 am. To verify, watch for the line containing cron.daily in either /etc/crontab or /etc/anacrontab.

Summary

In a system that generates several logs, the administration of such files can be greatly simplified using logrotate. As we have explained in this article, it will automatically rotate, compress, remove, and mail logs on a periodic basis or when the file reaches a given size.

Just make sure it is set to run as a cron job and logrotate will make things much easier for you. For more details, refer to the man page.

Source

ngrep – A Network Packet Analyzer for Linux

Ngrep (network grep) is a simple yet powerful network packet analyzer. It is a grep-like tool applied to the network layer – it matches traffic passing over a network interface. It allows you to specify an extended regular or hexadecimal expression to match against data payloads (the actual information or message in transmitted data, but not auto-generated metadata) of packets.

This tool works with various types of protocols, including IPv4/6, TCP, UDP, ICMPv4/6, IGMP as well as Raw on a number of interfaces. It operates in the same fashion as tcpdump packet sniffing tool.

The ngrep package is available to install from the default system repositories in mainstream Linux distributions using a package management tool, as shown.

$ sudo apt install ngrep
$ sudo yum install ngrep
$ sudo dnf install ngrep

After installing ngrep, you can start analyzing traffic on your Linux network using the following examples.

1. The following command will help you match all ping requests on the default working interface. Open another terminal and try to ping another remote machine. The -q flag tells ngrep to work quietly, outputting nothing other than packet headers and their payloads.

$ sudo ngrep -q '.' 'icmp'

interface: enp0s3 (192.168.0.0/255.255.255.0)
filter: ( icmp ) and ((ip || ip6) || (vlan && (ip || ip6)))
match: .

I 192.168.0.104 -> 192.168.0.103 8:0
  ]...~oG[....j....................... !"#$%&'()*+,-./01234567                                                                                                             

I 192.168.0.103 -> 192.168.0.104 0:0
  ]...~oG[....j....................... !"#$%&'()*+,-./01234567                                                                                                             

I 192.168.0.104 -> 192.168.0.103 8:0
  ]....oG[............................ !"#$%&'()*+,-./01234567                                                                                                             

I 192.168.0.103 -> 192.168.0.104 0:0
  ]....oG[............................ !"#$%&'()*+,-./01234567  

You can press Ctrl + C to terminate it.

2. To match only traffic going to a particular destination site, for instance ‘google.com’, run the following command, then try to access it from a browser.

$ sudo ngrep -q '.' 'host google.com'

interface: enp0s3 (192.168.0.0/255.255.255.0)
filter: ( host google.com ) and ((ip || ip6) || (vlan && (ip || ip6)))
match: .

T 172.217.160.174:443 -> 192.168.0.103:54008 [AP]
  ..................;.(...RZr..$....s=..l.Q+R.U..4..g.j..I,.l..:{y.a,....C{5>......p..@..EV..                                                                       

T 172.217.160.174:443 -> 192.168.0.103:54008 [AP]
  .............l.......!,0hJ....0.%F..!...l|.........PL..X...t..T.2DC..... ..y...~Y;.$@Yv.Q6

3. If you are surfing the web, run the following command to monitor which files your browser is requesting:

$ sudo ngrep -q '^GET .* HTTP/1.[01]'

interface: enp0s3 (192.168.0.0/255.255.255.0)
filter: ((ip || ip6) || (vlan && (ip || ip6)))
match: ^GET .* HTTP/1.[01]

T 192.168.0.104:43040 -> 172.217.160.174:80 [AP]
  GET / HTTP/1.1..Host: google.com..User-Agent: Links (2.13; Linux 4.17.6-1.el7.elrepo.x86_64 x86_64; 
  GNU C 4.8.5; text)..Accept: */*..Accept-Language: en,*;q=0.1..Accept-
  Encoding: gzip, deflate, bzip2..Accept-Charset: us-ascii,ISO-8859-1,ISO-8859-2,ISO-8859-3,ISO-8859-4,
  ISO-8859-5,ISO-8859-6,ISO-8859-7,ISO-8859-8,ISO-8859-9,ISO-8859-10,I
  SO-8859-13,ISO-8859-14,ISO-8859-15,ISO-8859-16,windows-1250,windows-1251,windows-1252,windows-1256,
  windows-1257,cp437,cp737,cp850,cp852,cp866,x-cp866-u,x-mac,x-mac-ce,x-
  kam-cs,koi8-r,koi8-u,koi8-ru,TCVN-5712,VISCII,utf-8..Connection: keep-alive.... 

4. To see all activity crossing source or destination port 25 (SMTP), run the following command.

$ sudo ngrep port 25

5. To monitor any network-based syslog traffic for the occurrence of the word “error”, use the following command.

 
$ sudo ngrep -d any 'error' port 514

Importantly, this tool can convert service port names as stored in “/etc/services” (on Unix-like systems such as Linux) to port numbers. The following command is equivalent to the one above.

$ sudo ngrep -d any 'error' port syslog

6. You can also run ngrep against an HTTP server (port 80); it will match all requests to the destination host, as shown.

$ sudo ngrep port 80

interface: eth0 (64.90.164.72/255.255.255.252)
filter: ip and ( port 80 )
####
T 67.169.59.38:42167 -> 64.90.164.74:80 [AP]
  GET / HTTP/1.1..User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; X11; Linux i
  686) Opera 7.21  [en]..Host: www.darkridge.com..Accept: text/html, applicat
  ion/xml;q=0.9, application/xhtml+xml;q=0.9, image/png, image/jpeg, image/gi
  f, image/x-xbitmap, */*;q=0.1..Accept-Charset: iso-8859-1, utf-8, utf-16, *
  ;q=0.1..Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0..Cookie: SQ
  MSESSID=5272f9ae21c07eca4dfd75f9a3cda22e..Cookie2: $Version=1..Connection:
  Keep-Alive, TE..TE: deflate, gzip, chunked, identity, trailers....
##

As you can see in the above output, all HTTP header transmissions are displayed in gory detail. This is hard to parse, though, so let’s watch what happens when we apply -W byline mode.

$ sudo ngrep -W byline port 80

interface: eth0 (64.90.164.72/255.255.255.252)
filter: ip and ( port 80 )
####
T 67.169.59.38:42177 -> 64.90.164.74:80 [AP]
GET / HTTP/1.1.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; X11; Linux i686) Opera ...
Host: www.darkridge.com.
Accept: text/html, application/xml;q=0.9, application/xhtml+xml;q=0.9 ...
Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1.
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0.
Cookie: SQMSESSID=5272f9ae21c07eca4dfd75f9a3cda22e.
Cookie2: $Version=1.
Cache-Control: no-cache.
Connection: Keep-Alive, TE.
TE: deflate, gzip, chunked, identity, trailers.

7. To print a timestamp in the form of YYYY/MM/DD HH:MM:SS.UUUUUU every time a packet is matched, use the -t flag.

$ sudo ngrep -t -W byline port 80

interface: enp0s3 (192.168.0.0/255.255.255.0)
filter: ( port 80 ) and ((ip || ip6) || (vlan && (ip || ip6)))
####
T 2018/07/12 16:33:19.348084 192.168.0.104:43048 -> 172.217.160.174:80 [AP]
GET / HTTP/1.1.
Host: google.com.
User-Agent: Links (2.13; Linux 4.17.6-1.el7.elrepo.x86_64 x86_64; GNU C 4.8.5; text).
Accept: */*.
Accept-Language: en,*;q=0.1.
Accept-Encoding: gzip, deflate, bzip2.
Accept-Charset: us-ascii,ISO-8859-1,ISO-8859-2,ISO-8859-3,ISO-8859-4,ISO-8859-5,utf-8.
Connection: keep-alive.

8. To avoid putting the interface being monitored into promiscuous mode (where it intercepts and reads each network packet that arrives in its entirety), add the -p flag.

$ sudo ngrep -p -W byline port 80

9. Another important option is -N, which is useful when you are observing raw or unknown protocols. It tells ngrep to display the sub-protocol number along with a single-character identifier.

$ sudo ngrep -N -W byline

For more information, see the ngrep man page.

$ man ngrep

ngrep Github repository: https://github.com/jpr5/ngrep

That’s all! Ngrep (network grep) is a network packet analyzer that understands BPF filter logic in the same fashion as tcpdump. We would like to know your thoughts about ngrep in the comments section.

Source

Amplify – NGINX Monitoring Made Easy

NGINX Amplify is a collection of useful tools for extensively monitoring the open source Nginx web server and NGINX Plus. With NGINX Amplify you can monitor performance, keep track of systems running Nginx, and practically examine and fix problems associated with running and scaling web applications.

It can be used to visualize and identify Nginx web server performance bottlenecks, overloaded servers, or potential DDoS attacks, and to enhance and optimize Nginx performance with intelligent advice and recommendations.

In addition, it can notify you when something is wrong with any part of your application setup, and it also serves as a web application capacity and performance planner.

The Nginx amplify architecture is built on 3 key components, which are described below:

  • NGINX Amplify Backend – the core system component, implemented as a SaaS (Software as a Service). It incorporates scalable metrics collection framework, a database, an analytics engine, and a core API.
  • NGINX Amplify Agent – a Python application which should be installed and run on monitored systems. All communications between the agent and the SaaS backend are done securely over SSL/TLS; all traffic is always initiated by the agent.
  • NGINX Amplify Web UI – a user interface compatible with all major browsers, accessible only via TLS/SSL.

The web UI displays graphs for Nginx and operating system metrics, allows for the creation of user-defined dashboards, and offers a static analyzer to improve Nginx configuration as well as an alert system with automated notifications.

Step 1: Install Amplify Agent on Linux System

1. Open your web browser, type the address below and create an account. A link will be sent to your email; use it to verify the email address and log in to your new account.

https://amplify.nginx.com

2. After that, log into the remote server to be monitored via SSH, and download the nginx amplify agent auto-install script using the curl or wget command.

$ wget https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh
OR
$ curl -L -O https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh 

3. Now run the command below with superuser privileges, using the sudo command, to install the amplify agent package (the API_KEY will be different, unique for every system that you add).

$ sudo API_KEY='e126cf9a5c3b4f89498a4d7e1d7fdccf' sh ./install.sh 

Install Nginx Amplify Agent

Note: You will possibly get an error indicating that stub_status has not been configured; this will be done in the next step.

4. Once the installation is complete, go back to the web UI and after about 1 minute, you will be able to see the new system in the list on the left.

Step 2: Configure stub_status in NGINX

5. Now, you need to set up the stub_status configuration to build key Nginx graphs (Nginx Plus users need to configure either the stub_status module or the extended status module).

Create a new configuration file for stub_status under /etc/nginx/conf.d/.

$ sudo vi /etc/nginx/conf.d/sub_status.conf

Then copy and paste the following stub_status configuration in the file.

server {
    listen 127.0.0.1:80;
    server_name 127.0.0.1;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}

Save and close the file.

6. Next, restart Nginx services to activate the stub_status module configuration, as follows.

$ sudo systemctl restart nginx
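To confirm that the module is responding (assuming the configuration above), you can query the endpoint locally; it should return a short plain-text report beginning with “Active connections:”, with counters that vary per system:

$ curl http://127.0.0.1/nginx_status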

Step 3: Configure Additional NGINX Metrics for Monitoring

7. In this step, you need to set up additional Nginx metrics to keep a close eye on your application's performance. The agent will gather metrics from active and growing access.log and error.log files, whose locations it automatically detects. Importantly, it must be allowed to read these files.

All you have to do is define a specific log_format, like the one below, in your main Nginx configuration file, /etc/nginx/nginx.conf.

log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                                '$status $body_bytes_sent "$http_referer" '
                                '"$http_user_agent" "$http_x_forwarded_for" '
                                '"$host" sn="$server_name" ' 'rt=$request_time '
                                'ua="$upstream_addr" us="$upstream_status" '
                                'ut="$upstream_response_time" ul="$upstream_response_length" '
                                'cs=$upstream_cache_status' ;

Then use the above log format when defining your access_log, and set the error_log log level to warn, as shown.

access_log /var/log/nginx/suasell.com/suasell.com_access_log main_ext;
error_log /var/log/nginx/suasell.com/suasell.com_error_log  warn;
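Before restarting, it is prudent to validate the edited configuration, so that a typo in the log_format does not take the server down:

$ sudo nginx -t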

8. Now restart Nginx services once more, to apply the latest changes.

$ sudo systemctl restart nginx

Step 4: Monitor Nginx Web Server Via Amplify Agent

9. Finally, you can begin monitoring your Nginx web server from the Amplify Web UI.

Nginx Amplify Overview

Nginx Amplify Graph

To add another system to monitor, simply go to Graphs, click on “New System” and follow the steps above.

Nginx Amplify Homepage: https://amplify.nginx.com/signup/

Amplify is a powerful SaaS solution for monitoring your OS, Nginx web server as well as Nginx based applications. It offers a single, unified web UI for keeping an eye on multiple remote systems running Nginx.

Source

Nmon: Analyze and Monitor Linux System Performance

If you are looking for a very easy to use performance monitoring tool for Linux, I highly recommend installing and using the Nmon command-line utility.

Nmon Monitoring Tool

Nmon is a system administrator's tuning and benchmarking tool that can be used to display performance data about the following:

  1. cpu
  2. memory
  3. network
  4. disks
  5. file systems
  6. nfs
  7. top processes
  8. resources
  9. power micro-partition

A very nice thing I really like about this tool is that it is fully interactive and presents the Linux user or system administrator with the keys needed to get the most out of it.

Installing Nmon Monitoring Tool in Linux

If you are using a Debian/Ubuntu based Linux distribution, you can easily install the Nmon command-line utility by grabbing it from the default repositories.

To install it, open a new terminal (CTRL+ALT+T) and use the following command.

$ sudo apt-get install nmon

Are you a Fedora user? To install it on your machine, open a new terminal and run the following command.

# yum install nmon

CentOS/RHEL users can install it by first enabling the EPEL repository, as shown:

# yum install epel-release
# yum install nmon

How to use Nmon to Monitor Linux Performance

Once the installation of Nmon has finished, launch it from the terminal by typing the ‘nmon‘ command and you will be presented with the following output.

# nmon

Nmon Preview

As you can see from the above screenshot, the nmon command-line utility runs completely in interactive mode and presents the user with keys to toggle statistics.

Check CPU by processor

For example, if you would like to collect some statistics on CPU performance, hit the ‘c‘ key on your keyboard. After hitting the ‘c‘ key, I get a very nice output with information on my CPU usage.

CPU by Processor

The following are the keys you can use with the nmon utility to get information on other system resources present in your machine.

  1. m = Memory
  2. j = Filesystems
  3. d = Disks
  4. n = Network
  5. V = Virtual Memory
  6. r = Resource
  7. N = NFS
  8. k = kernel
  9. t = Top-processes
  10. . = only busy disks/procs

Top Process Statistics

To get stats on the top processes running on your Linux system, press the ‘t‘ key on your keyboard and wait for the information to show up.

Top Processes

Those familiar with the top utility will understand and be able to interpret the above information very easily. If you are new to Linux system administration and have never used the top utility before, run the following command in your terminal and try to compare its output with the one above. Do they look similar, or is it the same output?

# top

To me, it looks like I am running the top process monitoring utility when I use the ‘t‘ key with the Nmon tool.

Check Network Statistics

How about some network stats? Just press ‘n‘ on your keyboard.

Network Statistics

Disk I/O Graphs

Use the ‘d‘ key to get information on disks.

Monitor Disk I/O

Check Kernel Information

A very important key to use with this tool is ‘k‘; it is used to display some brief information on the kernel of your system.

Check Linux Kernel Information

Get System Information

A very useful key for me is ‘r‘, which is used to give information on different resources such as machine architecture, operating system version, Linux version and CPU. You can get an idea of the importance of the ‘r‘ key by looking at the following screenshot.

System Information

Check File System Statistics

To get stats on the file systems, press ‘j‘ on your keyboard.

File System Statistics

As you can see from the above screenshot, we get information on the size of the file system, used space, free space, type of the file system and the mount point.

Display NFS Data

The key ‘N‘ can help to collect and display data on NFS.

NFS Data

So far it has been very easy to work with the Nmon utility. There are many other things you need to know about the utility, and one of them is that you can use it in data capture mode. If you don’t want the data displayed on the screen, you can easily capture a small sample to a file with the following command.

# nmon -f -s13 -c 30

After running the above command, you will get a file with the ‘.nmon‘ extension in the directory where you were working with the tool. What does the ‘-f‘ option do? The following is a simple, short explanation of the options used in the above command.

  1. The -f option means you want the data saved to a file and not displayed on the screen.
  2. The -s13 option means you want to capture data every 13 seconds.
  3. The -c 30 option means you want thirty data points or snapshots.

Conclusion

There are many tools that can do the job of the Nmon utility, but none of them is as easy to use and as friendly to a Linux beginner. Unfortunately, the tool does not have as many features as tools such as collectl, and it cannot provide in-depth stats to the user.

In the end, I can say it is a very nice utility for a Linux system administrator, especially for someone who is not yet familiar with command-line options and commands.

Source

Will ‘Htop’ Replace Default ‘Top’ Monitoring Tool in Linux?

top is a traditional command-line tool for monitoring real-time processes on Unix/Linux systems. It comes preinstalled on most if not all Linux distributions and shows a useful summary of system information including uptime, total number of processes (and the number of running, sleeping, stopped and zombie processes), CPU and RAM usage, and a list of processes or threads currently being managed by the kernel.

Linux Process Monitoring with Top

Read Also: Find Top 15 Processes By Memory Usage in Linux

Htop is an interactive, ncurses-based process viewer for Linux systems. It is practically a top-like tool, but it displays colorful text, uses ncurses to implement a text-graphical interface, and allows for output scrolling. It doesn’t come preinstalled on most mainstream Linux distributions.

Linux Process Monitoring with Htop

Why Htop is Better Than Top Monitoring Tool

Htop has become increasingly popular among Linux users due to its modern features and ease of use. In fact, this has sparked a “top vs htop” debate. The following are some htop features not present in top, and the reasons Linux users now prefer htop to its older counterpart:

  • It has a nicer text-graphics interface, with colored output.
  • It is easy to use and highly configurable.
  • Allows for scrolling process list vertically and horizontally to see all processes and complete command lines.
  • It also displays a process tree and comes with mouse support.
  • Allows you to easily perform certain functions related to processes (killing, renicing, etc.) without entering their PIDs.
  • Htop is also much faster than top.

Another important thing to share is that in the recent Ubuntu 18.04 release, the htop package comes preinstalled; it’s in the list of default Bionic packages.
Read Also: 20 Command Line Tools to Monitor Linux Performance

In addition, the htop package has been moved from the Universe repository (which contains community-maintained free and open-source packages) into the main repository (which contains free and open-source packages supported by Canonical), as shown by the publishing history of htop package in Ubuntu, on Launchpad.

Bearing in mind these recent advancements concerning the htop package in the Ubuntu repositories, coupled with its growing popularity among Linux users, the big question is: will htop replace top as the default process monitoring tool on Linux systems? Let’s watch this space!

There are also other tools in the mix, such as glances and atop; the former is cross-platform, the most advanced of them all, and becoming popular as well. Glances is highly configurable and can run in standalone, client/server, or web server mode.

Read Also: Use Glances to Monitor Remote Linux in Web Server Mode

Although htop has modern process monitoring features and is easier to use, top has been around for a long time, and it is proven and tested. What is your take on this issue? Which of these tools would you say is better for Linux process monitoring? Use the feedback form below to share your thoughts with us.

Source

Iotop – Monitor Linux Disk I/O Activity and Usage Per-Process Basis

Iotop is a free, open source utility, similar to the top command, that provides an easy way to monitor Linux disk I/O usage details and prints a table of current I/O utilization by processes or threads on the system.

The iotop tool is written in Python and requires the kernel's I/O accounting support to monitor and display processes. It is a very useful tool for system administrators to trace the specific processes that may be causing high disk I/O reads/writes.

Iotop Pre-requisites

  1. Kernel 2.6.20 or higher
  2. Python 2.7 or higher

This article explains how to install iotop program to monitor and trace Linux device I/O (input/output) on a per-process basis in Linux systems.

Install Iotop Disk I/O Monitoring Tool in Linux

As noted above, iotop requires Kernel 2.6.20 or higher and Python 2.7, so let’s first bring the system up to date with the help of the following commands.

-------------- On RHEL, CentOS and Fedora -------------- 
# yum update     

-------------- On Fedora 22+ Releases -------------- 
# dnf update

-------------- On Debian, Ubuntu and Linux Mint -------------- 
# apt-get update

Next, verify your kernel and python version by running:

# uname -r
# python -V

Important: At the time of this writing, CentOS/RHEL 5.x ships an older version of Python, making it impossible to install iotop there. However, those systems can use the dstat program, which offers similar functionality.

Install iotop using Package Manager

To install iotop from your package manager, select the appropriate command from the following list.

-------------- On RHEL, CentOS and Fedora -------------- 
# yum install iotop

-------------- On Fedora 22+ Releases -------------- 
# dnf install iotop

-------------- On Debian, Ubuntu and Linux Mint -------------- 
# apt-get install iotop

Important: Installing iotop from your default repositories will give you an older version. If you are looking for the most recent version of iotop, consider compiling it from source using the following instructions.

Install iotop from Source

To install the most recent version of iotop, go to the official project page, download the latest source package, and compile it from source using the following series of commands:

# wget http://guichaz.free.fr/iotop/files/iotop-0.6.tar.bz2
# tar -xjvf iotop-0.6.tar.bz2
# cd iotop-0.6/
# ./setup.py install

Important: You can run iotop from within the source directory (by running ./iotop.py), or run the installer command ./setup.py install to install iotop under /usr/bin.

How to Use iotop in Linux

At its simplest, you can execute iotop without any arguments, as shown.

# iotop

You should get a list of running processes along with information about their current disk I/O usage:

Linux Disk I/O Monitor Per Process Basis

Each column heading is self-explanatory, but there are two important things to consider here:

  1. IO – The “IO” column display total I/O (disk and swap) usage for each process.
  2. SWAPIN – The “SwapIn” column displays swap usage for each process.

I recommend starting iotop with the -o (or --only) option to see only those processes or threads actually doing I/O, instead of watching all of them.

# iotop --only

Linux Processes or Threads Disk I/O Monitoring

Get Alerts On Linux Disk I/O Activity

You can use the cron job scheduler to run iotop every minute, track any I/O activity it detects, and send an alert to your email address.

# vi /etc/cron.d/iotop

And add the following lines into file:

MAILTO=username@domain.com
* * * * * root iotop -botqqq --iter=3 >> /var/log/iotop

If you want, you can tweak the above command as per your requirements.

To learn more about iotop usage and options, check the man pages by running the following command.

# man iotop

Some important iotop options and keyboard shortcuts:

  1. Use the left and right arrow keys to change the sort column.
  2. Use the --version option to see the version number and exit.
  3. Use the -h option to see usage information.
  4. Use the -r option to reverse the sorting order.
  5. Use the -o option to see only processes or threads actually doing I/O.
  6. Use the -b option to turn on non-interactive (batch) mode, useful for logging I/O usage.
  7. Use -p PID to monitor only the listed processes/threads.
  8. Use -u USER to monitor only the listed users.
  9. Use the -P option to list only processes; normally iotop displays all threads.
  10. Use the -a option to show accumulated I/O instead of bandwidth.

All of the above iotop options are fairly straightforward, and the interface looks and functions much like the Linux top command.
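For instance, here is a non-interactive sketch combining several of these flags (the log path is illustrative): it records accumulated I/O of only the active processes, sampling every 5 seconds for 12 iterations.

# iotop -b -o -P -a -d 5 -n 12 >> /var/log/iotop-sample.log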

Iotop can be extremely handy in tracking down Linux processes that are swapping heavily or causing a high amount of disk I/O activity.

Source

A Shell Script to Send Email Alert When Memory Gets Low

A powerful aspect of Unix/Linux shell programs such as bash is their amazing support for common programming constructs that enable you to make decisions, execute commands repeatedly, create new functions, and so much more. You can write commands in a file known as a shell script and execute them collectively.

This offers you a reliable and effective means of system administration. You can write scripts to automate tasks such as daily backups and system updates, create new custom commands/utilities/tools, and beyond. You can also write scripts to help you keep up with what’s unfolding on a server.

One of the critical components of a server is memory (RAM); it greatly impacts the overall performance of a system.

In this article, we will share a small but useful shell script to send an alert email to one or more system administrator(s), if server memory is running low.

This script is particularly useful for keeping an eye on Linux VPS (Virtual Private Servers) with a small amount of memory, say about 1GB (approximately 990MB).

Testing Environment Setup

  1. A CentOS/RHEL 7 production server with the mailx utility installed and a working postfix mail server.

This is how the alertmemory.sh script works: first it checks the free memory size, then determines whether the amount of free memory is less than or equal to a specified size (100 MB for the purposes of this guide), used as a benchmark for the least acceptable free memory size.

If this condition is true, it will generate a list of the top 10 processes consuming server RAM and send an alert email to the specified email addresses.

Note: You will have to make a few changes to the script (especially the mail sender utility; use the appropriate flags) to meet your Linux distribution’s requirements.

Shell Script to Check Server Memory
#!/bin/bash 
#######################################################################################
#Script Name    :alertmemory.sh
#Description    :send alert mail when server memory is running low
#Args           :       
#Author         :Aaron Kili Kisinga
#Email          :aaronkilik@gmail.com
#License       : GNU GPL-3	
#######################################################################################
## declare mail variables
##email subject 
subject="Server Memory Status Alert"
##sending mail as
from="server.monitor@example.com"
## sending mail to
to="admin1@example.com"
## send carbon copy to
also_to="admin2@example.com"

## get total free memory size in megabytes(MB) 
free=$(free -mt | grep Total | awk '{print $4}')

## check if free memory is less than or equal to 100MB
if [[ "$free" -le 100  ]]; then
        ## get top processes consuming system memory and save them to a temporary file
        ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head >/tmp/top_processes_consuming_memory.txt

        file=/tmp/top_processes_consuming_memory.txt
        ## send email (with the process list attached) if system memory is running low
        echo -e "Warning, server memory is running low!\n\nFree memory: $free MB" | mailx -a "$file" -s "$subject" -r "$from" -c "$also_to" "$to"
fi
fi

exit 0

After creating your script /etc/scripts/alertmemory.sh, make it executable and symlink it into cron.hourly.

# chmod +x /etc/scripts/alertmemory.sh
# ln -s /etc/scripts/alertmemory.sh /etc/cron.hourly/alertmemory.sh

This means that the above script will run every hour for as long as the server is running.

Tip: To test that it is working as intended, set the benchmark value a little higher so an email is easily triggered, and specify a small interval of about 5 minutes.

Then keep checking from the command line using the free command provided in the script. Once you confirm that it is working, define the actual values you would like to use.
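For instance, you can preview the exact value the script compares against the threshold by running its pipeline by hand:

# free -mt | grep Total | awk '{print $4}'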

Below is a screenshot showing a sample alert email.

Linux Memory Email Alert

That’s all! In this article, we explained how to use a shell script to send alert emails to system administrators when server memory (RAM) runs low. You can share any thoughts on this topic with us via the feedback form below.

Source

How to Install and Setup Monit (Linux Process and Services Monitoring) Program

Monit is a free, open source and very useful tool that automatically monitors and manages server processes, files, directories, checksums, permissions, filesystems and services like Apache, Nginx, MySQL, FTP, SSH, Sendmail and so on in UNIX/Linux based systems, and provides excellent and helpful monitoring functionality to system administrators.

Monit has a user-friendly web interface, served by its own native HTTP(S) web server, where you can directly view system status and manage processes; it can also be used from the command line interface. This means no separate web server such as Apache or Nginx is required to access the monit web interface.

Read Also : 10 Linux Performance Monitoring Tools

What Monit can do

Monit has the ability to start a process if it is not running, restart a process if it is not responding, and stop a process if it uses too many resources. Additionally, you can use Monit to monitor files, directories and filesystems for changes, such as checksum, file size or timestamp changes. With Monit you can also monitor remote hosts’ TCP/IP ports, server protocols and ping. Monit keeps its own log file and alerts about any critical error conditions and recovery statuses.

This article is a simple guide to Monit installation and configuration on RHEL, CentOS, Fedora, Ubuntu, Linux Mint and Debian Linux operating systems; it should be easily compatible with Scientific Linux as well.

Step 1: Installing Monit

By default, the Monit tool is not available from the base system repositories; you need to add and enable the third-party EPEL repository to install the monit package on RHEL/CentOS systems. Once you’ve added the EPEL repository, install the package by running the following yum command. Ubuntu/Debian/Linux Mint users can easily install it using the apt-get command, as shown.

On RedHat/CentOS/Fedora/
# yum install monit
On Ubuntu/Debian/Linux Mint
$ sudo apt-get install monit

Step 2: Configuring Monit

Monit is very easy to configure; in fact, the configuration files are written to be very readable, making them easy for users to understand. Monit is designed to check the monitored services every 2 minutes and keeps its logs in “/var/log/monit“.

Monit has a web interface that runs on port 2812 using its built-in web server. To enable the web interface, you need to make changes in the monit configuration file. The main configuration file of monit is located at /etc/monit.conf (RedHat/CentOS/Fedora) or /etc/monit/monitrc (Ubuntu/Debian/Linux Mint). Open this file using the editor of your choice.

# vi /etc/monit.conf
$ sudo vi /etc/monit/monitrc

Next, uncomment the following section, add the IP address or domain name of your server, specify who is allowed to connect, and change the monit user and password, or keep the default ones.

 set httpd port 2812 and
     use address localhost  # only accept connection from localhost
     allow localhost        # allow localhost to connect to the server and
     allow admin:monit      # require user 'admin' with password 'monit'
     allow @monit           # allow users of group 'monit' to connect (rw)
     allow @users readonly  # allow users of group 'users' to connect readonly

Once you’ve configured it, you need to start the monit service to reload the new configuration settings.

# /etc/init.d/monit start
$ sudo /etc/init.d/monit start

Now, you will be able to access the monit web interface by navigating to “http://localhost:2812” or “http://example.com:2812“. Then enter the user name “admin” and password “monit“. You should get a screen similar to the one below.

Monit Web Interface

Step 3: Adding Monitoring Services

Once the monit web interface is correctly set up, start adding the programs that you want to monitor at the bottom of /etc/monit.conf (RedHat/CentOS/Fedora) or /etc/monit/monitrc (Ubuntu/Debian/Linux Mint).

Following are some useful monit configuration examples that can be very helpful to see how a service is checked, where it keeps its pidfile, and how to start and stop it.

Apache
check process httpd with pidfile /var/run/httpd.pid
group apache
start program = "/etc/init.d/httpd start"
stop program = "/etc/init.d/httpd stop"
if failed host 127.0.0.1 port 80
protocol http then restart
if 5 restarts within 5 cycles then timeout
Apache2
check process apache with pidfile /run/apache2.pid
start program = "/etc/init.d/apache2 start" with timeout 60 seconds
stop program  = "/etc/init.d/apache2 stop"
Nginx
check process nginx with pidfile /var/run/nginx.pid
start program = "/etc/init.d/nginx start"
stop program = "/etc/init.d/nginx stop"
MySQL
check process mysqld with pidfile /var/run/mysqld/mysqld.pid
group mysql
start program = "/etc/init.d/mysqld start"
stop program = "/etc/init.d/mysqld stop"
if failed host 127.0.0.1 port 3306 then restart
if 5 restarts within 5 cycles then timeout
SSHD
check process sshd with pidfile /var/run/sshd.pid
start program = "/etc/init.d/sshd start"
stop program = "/etc/init.d/sshd stop"
if failed host 127.0.0.1 port 22 protocol ssh then restart
if 5 restarts within 5 cycles then timeout
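Monit is not limited to processes; as mentioned earlier, it can also watch filesystems. A minimal sketch (the service name, mount point and threshold are illustrative):

Filesystem
check filesystem rootfs with path /
if space usage > 80% then alert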

Once you’ve configured all the programs to monitor, check the monit syntax for errors. If any are found, fix them; it’s not hard to figure out what went wrong. When you get a message like “Control file syntax OK“, or see no errors, you can proceed.

# monit -t
$ sudo monit -t

After fixing all possible errors, type the following command to restart the monit service.

# /etc/init.d/monit restart
$ sudo /etc/init.d/monit restart

You can verify that the monit service has started by checking its log file.

# tail -f /var/log/monit
$ sudo tail -f /var/log/monit.log
Sample Output
[BDT Apr  3 03:06:04] info     : Starting monit HTTP server at [localhost:2812]
[BDT Apr  3 03:06:04] info     : monit HTTP server started
[BDT Apr  3 03:06:04] info     : 'tecmint.com' Monit started
[BDT Apr  3 03:06:04] error    : 'nginx' process is not running
[BDT Apr  3 03:06:04] info     : 'nginx' trying to restart
[BDT Apr  3 03:06:04] info     : 'nginx' start: /etc/init.d/nginx
Monit Screenshot

This is how monit looks after adding all the processes to be monitored.

Monit Monitoring All Processes

Reference Links

  1. Monit Home Page
  2. Monit Documentation
  3. Monit Configuration Examples

Source
