SARG – Squid Analysis Report Generator and Internet Bandwidth Monitoring Tool

SARG is an open source tool that lets you analyze Squid log files and generates beautiful HTML reports with information about users, IP addresses, top accessed sites, total bandwidth usage, elapsed time, downloads and denied websites, along with daily, weekly and monthly reports.

SARG is a very handy tool for viewing how much internet bandwidth is used by individual machines on the network, and it lets you watch which websites the network’s users are accessing.
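Under the hood, a Squid access.log line already carries everything sarg aggregates. As a rough sketch of the kind of summary sarg produces (per-client byte totals), here is an awk one-liner over a fabricated sample log; the field positions assume Squid's native log format, where the client address is field 3 and the byte count is field 5:

```shell
# Fabricated sample in squid's native access.log format:
# time elapsed client code/status bytes method URL user hierarchy type
cat > /tmp/access.log.sample <<'EOF'
1390000000.123 250 172.16.16.55 TCP_MISS/200 4500 GET http://example.com/ - DIRECT/93.184.216.34 text/html
1390000001.456 120 172.16.16.55 TCP_HIT/200 1500 GET http://example.com/a - NONE/- text/html
1390000002.789 90 172.16.16.60 TCP_MISS/200 2000 GET http://example.org/ - DIRECT/93.184.216.34 text/html
EOF

# Sum the bytes column per client address -- a toy version of sarg's
# per-user bandwidth report.
awk '{ bytes[$3] += $5 } END { for (ip in bytes) print ip, bytes[ip] }' \
    /tmp/access.log.sample | sort
```

This prints one line per client with its total bytes; sarg does the same kind of aggregation, plus per-site, per-date and denied-access breakdowns, and renders it all as HTML.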

Install Sarg Squid Log Analyzer in Linux

In this article I will guide you on how to install and configure SARG – Squid Analysis Report Generator on RHEL/CentOS/Fedora and Debian/Ubuntu/Linux Mint systems.

Installing Sarg – Squid Log Analyzer in Linux

I assume that you have already installed, configured and tested a Squid server as a transparent proxy and a DNS server for name resolution in caching mode. If not, please install and configure them before proceeding with the Sarg installation.

Important: Please remember that without the Squid and DNS setup, there is no point in installing sarg on the system, as it won’t work at all. So please install them first before proceeding further with the Sarg installation.

Follow these guides to install DNS and Squid in your Linux systems:

Install Cache-Only DNS Server
  1. Install Cache Only DNS Server in RHEL/CentOS 7
  2. Install Cache Only DNS Server in RHEL/CentOS 6
  3. Install Cache Only DNS Server in Ubuntu and Debian
Install Squid as Transparent Proxy
  1. Setting Up Squid Transparent Proxy in Ubuntu and Debian
  2. Install Squid Cache Server on RHEL and CentOS

Step 1: Installing Sarg from Source

The ‘sarg‘ package is not included by default in RedHat-based distributions, so we need to compile and install it manually from the source tarball. For this, some additional prerequisite packages must be installed on the system before compiling it from source.

On RedHat/CentOS/Fedora
# yum install -y gcc gd gd-devel make perl-GD wget httpd

Once you’ve installed all the required packages, download the latest sarg source tarball, or use the following wget command to download and build it as shown below.

# wget http://liquidtelecom.dl.sourceforge.net/project/sarg/sarg/sarg-2.3.10/sarg-2.3.10.tar.gz
# tar -xvzf sarg-2.3.10.tar.gz
# cd sarg-2.3.10
# ./configure
# make
# make install
On Debian/Ubuntu/Linux Mint

On Debian-based distributions, the sarg package can be easily installed from the default repositories using the apt-get package manager.

$ sudo apt-get install sarg

Step 2: Configuring Sarg

Now it’s time to edit some parameters in SARG main configuration file. The file contains lots of options to edit, but we will only edit required parameters like:

  1. Access logs path
  2. Output directory
  3. Date Format
  4. Overwrite report for the same date.

Open sarg.conf file with your choice of editor and make changes as shown below.

# vi /usr/local/etc/sarg.conf        [On RedHat based systems]
$ sudo nano /etc/sarg/sarg.conf        [On Debian based systems]

Now uncomment it and add the correct path to your squid access log file.

# sarg.conf
#
# TAG:  access_log file
#       Where is the access.log file
#       sarg -l file
#
access_log /var/log/squid/access.log

Next, add the correct output directory path to save the generated squid reports. Please note that under Debian-based distributions, the Apache web root directory is ‘/var/www‘. So be careful to add the correct web root path for your Linux distribution.

# TAG:  output_dir
#       The reports will be saved in that directory
#       sarg -o dir
#
output_dir /var/www/html/squid-reports
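It is worth making sure the report directory exists and is writable before the first run (a precaution based on common setups, not a documented sarg requirement). A tiny sketch, with the path from the config above held in a hypothetical OUTPUT_DIR variable:

```shell
# Ensure the report directory exists before running sarg.
# OUTPUT_DIR mirrors the output_dir value above; adjust it to your
# distribution's web root (e.g. /var/www on older Debian systems).
OUTPUT_DIR=${OUTPUT_DIR:-/var/www/html/squid-reports}
mkdir -p "$OUTPUT_DIR"
```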

Set the correct date format for reports. For example, ‘date_format e‘ will display reports in ‘dd/mm/yy‘ format.

# TAG:  date_format
#       Date format in reports: e (European=dd/mm/yy), u (American=mm/dd/yy), w (Weekly=yy.ww)
#
date_format e
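For reference, here is what the two common date_format values look like, rendered with GNU date on a fixed date (2014-01-21, the period shown in the sample report later in this article):

```shell
# 'date_format e' (European) vs 'date_format u' (American), shown
# with GNU date for the fixed date 2014-01-21:
date -u -d '2014-01-21' '+%d/%m/%y'   # e -> 21/01/14
date -u -d '2014-01-21' '+%m/%d/%y'   # u -> 01/21/14
```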

Next, uncomment and set Overwrite report to ‘Yes’.

# TAG: overwrite_report yes|no
#      yes - if report date already exist then will be overwritten.
#       no - if report date already exist then will be renamed to filename.n, filename.n+1
#
overwrite_report yes

That’s it! Save and close the file.

Step 3: Generating Sarg Report

Once you’ve finished with the configuration part, it’s time to generate the squid log report using the following command.

# sarg -x        [On RedHat based systems]
$ sudo sarg -x        [On Debian based systems]
Sample Output
[root@localhost squid]# sarg -x

SARG: Init
SARG: Loading configuration from /usr/local/etc/sarg.conf
SARG: Deleting temporary directory "/tmp/sarg"
SARG: Parameters:
SARG:           Hostname or IP address (-a) =
SARG:                    Useragent log (-b) =
SARG:                     Exclude file (-c) =
SARG:                  Date from-until (-d) =
SARG:    Email address to send reports (-e) =
SARG:                      Config file (-f) = /usr/local/etc/sarg.conf
SARG:                      Date format (-g) = USA (mm/dd/yyyy)
SARG:                        IP report (-i) = No
SARG:             Keep temporary files (-k) = No
SARG:                        Input log (-l) = /var/log/squid/access.log
SARG:               Resolve IP Address (-n) = No
SARG:                       Output dir (-o) = /var/www/html/squid-reports/
SARG: Use Ip Address instead of userid (-p) = No
SARG:                    Accessed site (-s) =
SARG:                             Time (-t) =
SARG:                             User (-u) =
SARG:                    Temporary dir (-w) = /tmp/sarg
SARG:                   Debug messages (-x) = Yes
SARG:                 Process messages (-z) = No
SARG:  Previous reports to keep (--lastlog) = 0
SARG:
SARG: sarg version: 2.3.7 May-30-2013
SARG: Reading access log file: /var/log/squid/access.log
SARG: Records in file: 355859, reading: 100.00%
SARG:    Records read: 355859, written: 355859, excluded: 0
SARG: Squid log format
SARG: Period: 2014 Jan 21
SARG: Sorting log /tmp/sarg/172_16_16_55.user_unsort
......

Note: The ‘sarg -x’ command reads the ‘sarg.conf‘ configuration file, takes the squid ‘access.log‘ path from it, and generates a report in HTML format.

Step 4: Accessing Sarg Report

The generated reports are placed under ‘/var/www/html/squid-reports/‘ or ‘/var/www/squid-reports/‘, and can be accessed from a web browser using the following addresses.

http://localhost/squid-reports
OR
http://ip-address/squid-reports
Sarg Main Window

Specific Date

User Report

Top Accessed Sites

Top Sites and Users

Top Downloads

Denied Access

Authentication Failures

Step 5: Automating Sarg Report Generation

You can automate the process of generating sarg reports at a given interval via cron jobs. For example, let’s assume you want to generate reports on an hourly basis automatically; to do this, you need to configure a cron job.

# crontab -e

Next, add the following line at the bottom of the file. Save and close it.

0 * * * * /usr/local/bin/sarg -x

The above cron rule will generate a SARG report every hour.
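Beware of the minute field: a rule like ‘* */1 * * *’ has ‘*’ in the minute position, so cron fires it every minute rather than hourly; the truly hourly form is ‘0 * * * *’. The arithmetic:

```shell
# Runs per day for each form of the schedule:
echo $(( 24 * 60 ))   # minute field '*' -> fires every minute: 1440/day
echo $(( 24 * 1 ))    # minute field '0' -> fires once an hour:   24/day
```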

Reference Links

Sarg Homepage

That’s it with SARG! I will be coming up with a few more interesting articles on Linux, so till then stay tuned to TecMint.com, and don’t forget to add your valuable comments.


CBM – Shows Network Bandwidth in Ubuntu

CBM (Color Bandwidth Meter) is a simple tool that shows the current network traffic on all connected devices in colors in Ubuntu Linux. It is used to monitor network bandwidth. It shows the network interface, bytes received, bytes transmitted and total bytes.

Read Also: iftop – A Real Time Linux Network Bandwidth Monitoring Tool

In this article, we will show you how to install and use the cbm network bandwidth monitoring tool in Ubuntu and its derivatives such as Linux Mint.

How to Install CBM Network Monitoring Tool in Ubuntu

The cbm network bandwidth monitoring tool is available to install from the default Ubuntu repositories using the APT package manager, as shown.

$ sudo apt install cbm

Once you have installed cbm, you can start the program using the following command.

$ cbm 

Ubuntu Network Bandwidth Monitoring

While cbm is running, you can control its behavior with the following keys:

  • Up/Down – arrow keys to select an interface to show details about.
  • b – switch between bits per second and bytes per second.
  • + – increase the update delay by 100ms.
  • - – decrease the update delay by 100ms.
  • q – exit the program.

If you are having any network connection issues, check out MTR – a network diagnostic tool for Linux. It combines the functionality of commonly used traceroute and ping programs into a single diagnostics tool.

However, to monitor multiple hosts on a network, you need robust network monitoring tools such as the ones listed below:

    1. How to Install Nagios 4 in Ubuntu
    2. LibreNMS – A Fully Featured Network Monitoring Tool for Linux
    3. Monitorix – A Lightweight System and Network Monitoring Tool for Linux
    4. Install Cacti (Network Monitoring) on RHEL/CentOS 7.x/6.x/5.x and Fedora 24-12
    5. Install Munin (Network Monitoring) in RHEL, CentOS and Fedora

That’s it. In this article, we have explained how to install and use the cbm network bandwidth monitoring tool in Ubuntu and its derivatives such as Linux Mint. Share your thoughts about cbm via the comment form below.


Cpustat – Monitors CPU Utilization by Running Processes in Linux

Cpustat is a powerful system performance measurement program for Linux, written in the Go programming language. It attempts to reveal CPU utilization and saturation in an effective way, using the Utilization Saturation and Errors (USE) Method (a methodology for analyzing the performance of any system).

It extracts higher frequency samples of every process being executed on the system and then summarizes these samples at a lower frequency. For instance, it can measure every process every 200ms and summarize these samples every 5 seconds, including min/average/max values for certain metrics.

Suggested Read: 20 Command Line Tools to Monitor Linux Performance

Cpustat outputs data in two possible ways: a pure text list of the summary interval and a colorful scrolling dashboard of each sample.

How to Install Cpustat in Linux

You must have Go (GoLang) installed on your Linux system in order to use cpustat. If you do not have it installed, follow the GoLang installation steps in the link below:

  1. Install GoLang (Go Programming Language) in Linux

Smem – Reports Memory Consumption Per-Process and Per-User Basis in Linux

Monitoring memory usage is an important part of managing your Linux system, and there are many tools for this available across different Linux distributions. However, they work in different ways; in this how-to guide, we shall take a look at how to install and use one such tool called smem.

Don’t Miss: 20 Command Line Tools to Monitor Linux Performance

Smem is a command-line memory reporting tool that gives a user diverse reports on memory usage on a Linux system. There is one unique thing about smem: unlike other traditional memory reporting tools, it reports PSS (Proportional Set Size), a more meaningful representation of memory usage by applications and libraries in a virtual memory setup.

Smem – Linux Memory Reporting Tool

Existing traditional tools focus mainly on reading RSS (Resident Set Size), which is a standard measure to monitor memory usage in a physical memory scheme, but which tends to overestimate the memory usage of applications.

PSS on the other hand, gives a reasonable measure by determining the “fair-share” of memory used by applications and libraries in a virtual memory scheme.
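A worked example of the fair-share idea (numbers invented for illustration): suppose a process has 2048 kB of private memory plus a 6144 kB shared library mapped by three processes in total. RSS charges it the whole library; PSS charges it a third:

```shell
# Hypothetical process: 2048 kB private + 6144 kB library shared 3 ways.
PRIVATE=2048
SHARED=6144
SHARERS=3
echo "RSS: $(( PRIVATE + SHARED )) kB"             # whole library counted
echo "PSS: $(( PRIVATE + SHARED / SHARERS )) kB"   # only 1/3 of the library
```

Summed over all three sharers, the PSS figures account for the shared library exactly once, which is why per-user and system-wide PSS totals are meaningful where RSS totals are not.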

You can read this guide (about memory RSS and PSS) to understand memory consumption in a Linux system, but let us proceed to looking at some of the features of smem.

Features of Smem Tool

  1. System overview listing
  2. Listings and also filtering by process, mappings or user
  3. Using data from /proc filesystem
  4. Configurable listing columns from several data sources
  5. Configurable output units and percentages
  6. Easy to configure headers and totals in listings
  7. Using data snapshots from directory mirrors or compressed tar files
  8. Built-in chart generation mechanism
  9. Lightweight capture tool used in embedded systems

How to Install Smem – Memory Reporting Tool in Linux

Before you proceed with installation of smem, your system must meet the following requirements:

  1. a modern kernel (> 2.6.27 or so)
  2. a recent version of Python (2.4 or later)
  3. the optional matplotlib library for generating charts

Most of today’s Linux distributions come with a recent kernel and Python 2 or 3 support, so the only requirement is to install the matplotlib library, which is used to generate nice charts.

On RHEL, CentOS and Fedora

First enable EPEL (Extra Packages for Enterprise Linux) repository and then install as follows:

# yum install smem python-matplotlib python-tk

On Debian and Ubuntu

$ sudo apt-get install smem

On Linux Mint

$ sudo apt-get install smem python-matplotlib python-tk

On Arch Linux

Use this AUR repository.

How to Use Smem – Memory Reporting Tool in Linux

To view a report of memory usage across the whole system, by all system users, run the following command:

$ sudo smem 
Monitor Memory Usage of Linux System
 PID User     Command                         Swap      USS      PSS      RSS 
 6367 tecmint  cat                                0      100      145     1784 
 6368 tecmint  cat                                0      100      147     1676 
 2864 tecmint  /usr/bin/ck-launch-session         0      144      165     1780 
 7656 tecmint  gnome-pty-helper                   0      156      178     1832 
 5758 tecmint  gnome-pty-helper                   0      156      179     1916 
 1441 root     /sbin/getty -8 38400 tty2          0      152      184     2052 
 1434 root     /sbin/getty -8 38400 tty5          0      156      187     2060 
 1444 root     /sbin/getty -8 38400 tty3          0      156      187     2060 
 1432 root     /sbin/getty -8 38400 tty4          0      156      188     2124 
 1452 root     /sbin/getty -8 38400 tty6          0      164      196     2064 
 2619 root     /sbin/getty -8 38400 tty1          0      164      196     2136 
 3544 tecmint  sh -c /usr/lib/linuxmint/mi        0      212      224     1540 
 1504 root     acpid -c /etc/acpi/events -        0      220      236     1604 
 3311 tecmint  syndaemon -i 0.5 -K -R             0      252      292     2556 
 3143 rtkit    /usr/lib/rtkit/rtkit-daemon        0      300      326     2548 
 1588 root     cron                               0      292      333     2344 
 1589 avahi    avahi-daemon: chroot helpe         0      124      334     1632 
 1523 root     /usr/sbin/irqbalance               0      316      343     2096 
  585 root     upstart-socket-bridge --dae        0      328      351     1820 
 3033 tecmint  /usr/bin/dbus-launch --exit        0      328      360     2160 
 1346 root     upstart-file-bridge --daemo        0      348      371     1776 
 2607 root     /usr/bin/xdm                       0      188      378     2368 
 1635 kernoops /usr/sbin/kerneloops               0      352      386     2684 
  344 root     upstart-udev-bridge --daemo        0      400      427     2132 
 2960 tecmint  /usr/bin/ssh-agent /usr/bin        0      480      485      992 
 3468 tecmint  /bin/dbus-daemon --config-f        0      344      515     3284 
 1559 avahi    avahi-daemon: running [tecm        0      284      517     3108 
 7289 postfix  pickup -l -t unix -u -c            0      288      534     2808 
 2135 root     /usr/lib/postfix/master            0      352      576     2872 
 2436 postfix  qmgr -l -t unix -u                 0      360      606     2884 
 1521 root     /lib/systemd/systemd-logind        0      600      650     3276 
 2222 nobody   /usr/sbin/dnsmasq --no-reso        0      604      669     3288 
....

When a normal user runs smem, it displays memory usage by the processes that the user has started, arranged in order of increasing PSS.

Take a look at the output below on my system for memory usage by processes started by user tecmint:

$ smem
Monitor User Memory Usage in Linux
 PID User     Command                         Swap      USS      PSS      RSS 
 6367 tecmint  cat                                0      100      145     1784 
 6368 tecmint  cat                                0      100      147     1676 
 2864 tecmint  /usr/bin/ck-launch-session         0      144      166     1780 
 3544 tecmint  sh -c /usr/lib/linuxmint/mi        0      212      224     1540 
 3311 tecmint  syndaemon -i 0.5 -K -R             0      252      292     2556 
 3033 tecmint  /usr/bin/dbus-launch --exit        0      328      360     2160 
 3468 tecmint  /bin/dbus-daemon --config-f        0      344      515     3284 
 3122 tecmint  /usr/lib/gvfs/gvfsd                0      656      801     5552 
 3471 tecmint  /usr/lib/at-spi2-core/at-sp        0      708      864     5992 
 3396 tecmint  /usr/lib/gvfs/gvfs-mtp-volu        0      804      914     6204 
 3208 tecmint  /usr/lib/x86_64-linux-gnu/i        0      892     1012     6188 
 3380 tecmint  /usr/lib/gvfs/gvfs-afc-volu        0      820     1024     6396 
 3034 tecmint  //bin/dbus-daemon --fork --        0      920     1081     3040 
 3365 tecmint  /usr/lib/gvfs/gvfs-gphoto2-        0      972     1099     6052 
 3228 tecmint  /usr/lib/gvfs/gvfsd-trash -        0      980     1153     6648 
 3107 tecmint  /usr/lib/dconf/dconf-servic        0     1212     1283     5376 
 6399 tecmint  /opt/google/chrome/chrome -        0      144     1409    10732 
 3478 tecmint  /usr/lib/x86_64-linux-gnu/g        0     1724     1820     6320 
 7365 tecmint  /usr/lib/gvfs/gvfsd-http --        0     1352     1884     8704 
 6937 tecmint  /opt/libreoffice5.0/program        0     1140     2328     5040 
 3194 tecmint  /usr/lib/x86_64-linux-gnu/p        0     1956     2405    14228 
 6373 tecmint  /opt/google/chrome/nacl_hel        0     2324     2541     8908 
 3313 tecmint  /usr/lib/gvfs/gvfs-udisks2-        0     2460     2754     8736 
 3464 tecmint  /usr/lib/at-spi2-core/at-sp        0     2684     2823     7920 
 5771 tecmint  ssh -p 4521 tecmnt765@212.7        0     2544     2864     6540 
 5759 tecmint  /bin/bash                          0     2416     2923     5640 
 3541 tecmint  /usr/bin/python /usr/bin/mi        0     2584     3008     7248 
 7657 tecmint  bash                               0     2516     3055     6028 
 3127 tecmint  /usr/lib/gvfs/gvfsd-fuse /r        0     3024     3126     8032 
 3205 tecmint  mate-screensaver                   0     2520     3331    18072 
 3171 tecmint  /usr/lib/mate-panel/notific        0     2860     3495    17140 
 3030 tecmint  x-session-manager                  0     4400     4879    17500 
 3197 tecmint  mate-volume-control-applet         0     3860     5226    23736 
...

There are many options you can invoke while using smem; for example, to view system-wide memory consumption, run the following command:

$ sudo smem -w
Monitor System Wide Memory User Consumption
Area                           Used      Cache   Noncache 
firmware/hardware                 0          0          0 
kernel image                      0          0          0 
kernel dynamic memory       1425320    1291412     133908 
userspace memory            2215368     451608    1763760 
free memory                 4424936    4424936          0 
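As a quick consistency check on the sample above, the Used column is the sum of the Cache and Noncache columns for each area:

```shell
# Used = Cache + Noncache, verified against two rows of the -w sample:
echo $(( 1291412 + 133908 ))   # kernel dynamic memory -> 1425320
echo $(( 451608 + 1763760 ))   # userspace memory      -> 2215368
```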

To view memory usage on a per-user basis, run the command below:

$ sudo smem -u
Monitor Memory Consumption Per-User Basis in Linux
User     Count     Swap      USS      PSS      RSS 
rtkit        1        0      300      326     2548 
kernoops     1        0      352      385     2684 
avahi        2        0      408      851     4740 
postfix      2        0      648     1140     5692 
messagebus     1        0     1012     1173     3320 
syslog       1        0     1396     1419     3232 
www-data     2        0     5100     6572    13580 
mpd          1        0     7416     8302    12896 
nobody       2        0     4024    11305    24728 
root        39        0   323876   353418   496520 
tecmint     64        0  1652888  1815699  2763112 

You can also report memory usage by mappings as follows:

$ sudo smem -m
Monitor Memory Usage by Mappings in Linux
Map                                       PIDs   AVGPSS      PSS 
/dev/fb0                                     1        0        0 
/home/tecmint/.cache/fontconfig/7ef2298f    18        0        0 
/home/tecmint/.cache/fontconfig/c57959a1    18        0        0 
/home/tecmint/.local/share/mime/mime.cac    15        0        0 
/opt/google/chrome/chrome_material_100_p     9        0        0 
/opt/google/chrome/chrome_material_200_p     9        0        0 
/usr/lib/x86_64-linux-gnu/gconv/gconv-mo    41        0        0 
/usr/share/icons/Mint-X-Teal/icon-theme.    15        0        0 
/var/cache/fontconfig/0c9eb80ebd1c36541e    20        0        0 
/var/cache/fontconfig/0d8c3b2ac0904cb8a5    20        0        0 
/var/cache/fontconfig/1ac9eb803944fde146    20        0        0 
/var/cache/fontconfig/3830d5c3ddfd5cd38a    20        0        0 
/var/cache/fontconfig/385c0604a188198f04    20        0        0 
/var/cache/fontconfig/4794a0821666d79190    20        0        0 
/var/cache/fontconfig/56cf4f4769d0f4abc8    20        0        0 
/var/cache/fontconfig/767a8244fc0220cfb5    20        0        0 
/var/cache/fontconfig/8801497958630a81b7    20        0        0 
/var/cache/fontconfig/99e8ed0e538f840c56    20        0        0 
/var/cache/fontconfig/b9d506c9ac06c20b43    20        0        0 
/var/cache/fontconfig/c05880de57d1f5e948    20        0        0 
/var/cache/fontconfig/dc05db6664285cc2f1    20        0        0 
/var/cache/fontconfig/e13b20fdb08344e0e6    20        0        0 
/var/cache/fontconfig/e7071f4a29fa870f43    20        0        0 
....

There are also options for filtering smem output and we shall look at two examples here.

To summarize output by username, invoke the -u option as below (to filter output for a specific user instead, use the --userfilter="regex" option):

$ sudo smem -u
Report Memory Usage by User
User     Count     Swap      USS      PSS      RSS 
rtkit        1        0      300      326     2548 
kernoops     1        0      352      385     2684 
avahi        2        0      408      851     4740 
postfix      2        0      648     1140     5692 
messagebus     1        0     1012     1173     3320 
syslog       1        0     1400     1423     3236 
www-data     2        0     5100     6572    13580 
mpd          1        0     7416     8302    12896 
nobody       2        0     4024    11305    24728 
root        39        0   323804   353374   496552 
tecmint     64        0  1708900  1871766  2819212 

To filter output by process name, invoke the -P or --processfilter="regex" option as follows:

$ sudo smem --processfilter="firefox"
Report Memory Usage by Process Name
PID User     Command                         Swap      USS      PSS      RSS 
 9212 root     sudo smem --processfilter=f        0     1172     1434     4856 
 9213 root     /usr/bin/python /usr/bin/sm        0     7368     7793    11984 
 4424 tecmint  /usr/lib/firefox/firefox           0   931732   937590   961504 
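Note that the value passed to --processfilter is treated as a regular expression rather than a fixed string, so a pattern like 'fire.ox' also matches. A rough illustration with grep -E (assuming comparable regex semantics for this simple pattern):

```shell
# Regex matching against sample command names; 'fire.ox' matches both
# 'firefox' and 'firefox-bin' but not 'chromium'.
printf 'firefox\nfirefox-bin\nchromium\n' | grep -E 'fire.ox'
```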

Output formatting can be very important, and there are options to help you format memory reports; we shall take a look at a few examples below.

To show desired columns in the report, use -c or --columns option as follows:

$ sudo smem -c "name user pss rss"
Report Memory Usage by Columns
Name                     User          PSS      RSS 
cat                      tecmint       145     1784 
cat                      tecmint       147     1676 
ck-launch-sessi          tecmint       165     1780 
gnome-pty-helpe          tecmint       178     1832 
gnome-pty-helpe          tecmint       179     1916 
getty                    root          184     2052 
getty                    root          187     2060 
getty                    root          187     2060 
getty                    root          188     2124 
getty                    root          196     2064 
getty                    root          196     2136 
sh                       tecmint       224     1540 
acpid                    root          236     1604 
syndaemon                tecmint       296     2560 
rtkit-daemon             rtkit         326     2548 
cron                     root          333     2344 
avahi-daemon             avahi         334     1632 
irqbalance               root          343     2096 
upstart-socket-          root          351     1820 
dbus-launch              tecmint       360     2160 
upstart-file-br          root          371     1776 
xdm                      root          378     2368 
kerneloops               kernoops      386     2684 
upstart-udev-br          root          427     2132 
ssh-agent                tecmint       485      992 
...

You can invoke the -p option to report memory usage in percentages, as in the command below:

$ sudo smem -p
Report Memory Usage by Percentages
 PID User     Command                         Swap      USS      PSS      RSS 
 6367 tecmint  cat                            0.00%    0.00%    0.00%    0.02% 
 6368 tecmint  cat                            0.00%    0.00%    0.00%    0.02% 
 9307 tecmint  sh -c { sudo /usr/lib/linux    0.00%    0.00%    0.00%    0.02% 
 2864 tecmint  /usr/bin/ck-launch-session     0.00%    0.00%    0.00%    0.02% 
 3544 tecmint  sh -c /usr/lib/linuxmint/mi    0.00%    0.00%    0.00%    0.02% 
 5758 tecmint  gnome-pty-helper               0.00%    0.00%    0.00%    0.02% 
 7656 tecmint  gnome-pty-helper               0.00%    0.00%    0.00%    0.02% 
 1441 root     /sbin/getty -8 38400 tty2      0.00%    0.00%    0.00%    0.03% 
 1434 root     /sbin/getty -8 38400 tty5      0.00%    0.00%    0.00%    0.03% 
 1444 root     /sbin/getty -8 38400 tty3      0.00%    0.00%    0.00%    0.03% 
 1432 root     /sbin/getty -8 38400 tty4      0.00%    0.00%    0.00%    0.03% 
 1452 root     /sbin/getty -8 38400 tty6      0.00%    0.00%    0.00%    0.03% 
 2619 root     /sbin/getty -8 38400 tty1      0.00%    0.00%    0.00%    0.03% 
 1504 root     acpid -c /etc/acpi/events -    0.00%    0.00%    0.00%    0.02% 
 3311 tecmint  syndaemon -i 0.5 -K -R         0.00%    0.00%    0.00%    0.03% 
 3143 rtkit    /usr/lib/rtkit/rtkit-daemon    0.00%    0.00%    0.00%    0.03% 
 1588 root     cron                           0.00%    0.00%    0.00%    0.03% 
 1589 avahi    avahi-daemon: chroot helpe     0.00%    0.00%    0.00%    0.02% 
 1523 root     /usr/sbin/irqbalance           0.00%    0.00%    0.00%    0.03% 
  585 root     upstart-socket-bridge --dae    0.00%    0.00%    0.00%    0.02% 
 3033 tecmint  /usr/bin/dbus-launch --exit    0.00%    0.00%    0.00%    0.03% 
....

The command below will show totals at the end of each column of the output:

$ sudo smem -t
Report Total Memory Usage Count
PID User     Command                         Swap      USS      PSS      RSS 
 6367 tecmint  cat                                0      100      139     1784 
 6368 tecmint  cat                                0      100      141     1676 
 9307 tecmint  sh -c { sudo /usr/lib/linux        0       96      158     1508 
 2864 tecmint  /usr/bin/ck-launch-session         0      144      163     1780 
 3544 tecmint  sh -c /usr/lib/linuxmint/mi        0      108      170     1540 
 5758 tecmint  gnome-pty-helper                   0      156      176     1916 
 7656 tecmint  gnome-pty-helper                   0      156      176     1832 
 1441 root     /sbin/getty -8 38400 tty2          0      152      181     2052 
 1434 root     /sbin/getty -8 38400 tty5          0      156      184     2060 
 1444 root     /sbin/getty -8 38400 tty3          0      156      184     2060 
 1432 root     /sbin/getty -8 38400 tty4          0      156      185     2124 
 1452 root     /sbin/getty -8 38400 tty6          0      164      193     2064 
 2619 root     /sbin/getty -8 38400 tty1          0      164      193     2136 
 1504 root     acpid -c /etc/acpi/events -        0      220      232     1604 
 3311 tecmint  syndaemon -i 0.5 -K -R             0      260      298     2564 
 3143 rtkit    /usr/lib/rtkit/rtkit-daemon        0      300      324     2548 
 1588 root     cron                               0      292      326     2344 
 1589 avahi    avahi-daemon: chroot helpe         0      124      332     1632 
 1523 root     /usr/sbin/irqbalance               0      316      340     2096 
  585 root     upstart-socket-bridge --dae        0      328      349     1820 
 3033 tecmint  /usr/bin/dbus-launch --exit        0      328      359     2160 
 1346 root     upstart-file-bridge --daemo        0      348      370     1776 
 2607 root     /usr/bin/xdm                       0      188      375     2368 
 1635 kernoops /usr/sbin/kerneloops               0      352      384     2684 
  344 root     upstart-udev-bridge --daemo        0      400      426     2132 
.....
-------------------------------------------------------------------------------
  134 11                                          0  2171428  2376266  3587972 

Furthermore, there are options for graphical reports that you can also use, and we shall dive into them in this sub-section.

You can produce a bar graph of processes and their PSS and RSS values. In the example below, we produce a bar graph of processes owned by the root user.

The vertical axis shows the PSS and RSS measures of the processes, and the horizontal axis represents each root user process:

$ sudo smem --userfilter="root" --bar pid -c"pss rss"

Linux Memory Usage in PSS and RSS Values

You can also produce a pie chart showing processes and their memory consumption based on PSS or RSS values. The command below outputs a pie chart for processes owned by the root user, measured by PSS values.

The --pie name option labels slices by process name, and the -s pss option sorts them by PSS value.

$ sudo smem --userfilter="root" --pie name -s pss

Linux Memory Consumption by Processes

There are many other known fields, apart from PSS and RSS, that can be used for labeling charts.

To get help, simply type smem -h or visit the manual page.

We shall stop here with smem, but to understand it better, use it with many other options that you can find in the man page. As usual you can use the comment section below to express any thoughts or concerns.

Reference Links: https://www.selenic.com/smem/


Observium: A Complete Network Management and Monitoring System for RHEL/CentOS

Observium is a PHP/MySQL-driven network observation and monitoring application that supports a wide range of operating systems and hardware platforms including Linux, Windows, FreeBSD, Cisco, HP, Dell, NetApp and many more. It seeks to present a robust and simple web interface to monitor the health and performance of your network.

Install Observium in CentOS/RHEL

Observium gathers data from devices using SNMP and displays that data graphically via a web interface. It makes heavy use of the RRDtool package. It has a number of core design goals, which include collecting as much historical information about devices as possible, being almost entirely auto-discovered with little or no manual intervention, and having a very simple yet powerful interface.

Observium Demo

You can try a quick online demo of Observium, deployed by the developers at the following location.

  1. http://demo.observium.org/

This article will guide you on how to install Observium on RHEL, CentOS and Scientific Linux; the supported version is EL (Enterprise Linux) 6.x. Currently, Observium does not support EL releases 4 and 5, so please don’t use the following instructions on those releases.

RPMForge and EPEL are repositories that provide many add-on RPM software packages for RHEL, CentOS and Scientific Linux. Let’s install and enable these two community-based repositories using the following series of commands.

On i386 Systems
# yum install wget
# wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el5.rf.i386.rpm
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# wget http://apt.sw.be/RPM-GPG-KEY.dag.txt
# rpm --import RPM-GPG-KEY.dag.txt
# rpm -Uvh rpmforge-release-0.5.3-1.el5.rf.i386.rpm
# rpm -Uvh epel-release-6-8.noarch.rpm
On x86_64 Systems
# yum install wget
# wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.rpm
# wget http://epel.mirror.net.in/epel/6/x86_64/epel-release-6-8.noarch.rpm
# wget http://apt.sw.be/RPM-GPG-KEY.dag.txt
# rpm --import RPM-GPG-KEY.dag.txt
# rpm -Uvh rpmforge-release-0.5.2-2.el6.rf.rpm
# rpm -Uvh epel-release-6-8.noarch.rpm

Install RPMForge Repository

Install RPMForge Repository

Install EPEL Repository

Install EPEL Repository

Installing Repositories

Installing Repositories

Step 2: Install Needed Software Packages

Now let’s install the required software packages needed for Observium.

# yum install httpd php php-mysql php-gd php-snmp vixie-cron php-mcrypt \
php-pear net-snmp net-snmp-utils graphviz subversion mysql-server mysql rrdtool \
fping ImageMagick jwhois nmap ipmitool php-pear.noarch MySQL-python

Install Needed Packages

Install Needed Packages

If you wish to monitor virtual machines, please also install the ‘libvirt‘ package.

# yum install libvirt

Step 3: Downloading Observium

For your information, Observium comes in the following two editions:

  1. Community/Open Source Edition: This edition is freely available for download, with fewer features and fewer security fixes.
  2. Subscription Edition: This edition comes with additional features, rapid feature additions and fixes, hardware support and an easy-to-use SVN-based release mechanism.

First, navigate to the /opt directory; we are going to install Observium there by default. If you wish to install it somewhere else, please modify the commands and configuration accordingly. We strongly suggest you first deploy under the /opt directory; once you verify that everything works perfectly, you can install it at your desired location.

If you have an active Observium subscription, you can use the SVN repositories to download the most recent version. A valid subscription account is valid for a single installation and two testing or development installations, with daily security patches, new features and bug fixes.

To download the most recent stable version of Observium, you need to have the svn package installed on the system in order to pull the files from the SVN repository.

# yum install svn
Development Version
# svn co http://svn.observium.org/svn/observium/trunk observium
Stable Version
# svn co http://svn.observium.org/svn/observium/branches/stable observium

We don’t have a valid subscription, so we are going to try out Observium using the Community/Open Source Edition. Download the latest ‘observium-community-latest.tar.gz’ stable version and unpack it as shown.

# cd /opt
# wget http://www.observium.org/observium-community-latest.tar.gz
# tar zxvf observium-community-latest.tar.gz

Download Observium Community Edition

Download Observium Community Edition

Step 4: Creating Observium MySQL Database

This is a clean installation of MySQL, so we are going to set a new root password with the help of the following commands.

# service mysqld start
# /usr/bin/mysqladmin -u root password 'yourmysqlpassword'

Now log in to the MySQL shell and create the new Observium database.

# mysql -u root -p

mysql> CREATE DATABASE observium;
mysql> GRANT ALL PRIVILEGES ON observium.* TO 'observium'@'localhost' IDENTIFIED BY 'dbpassword';

Step 5: Configure Observium

Configuring SELinux to work with Observium is beyond the scope of this article, so we will disable SELinux. If you are familiar with SELinux rules you can configure it, but there is no guarantee that Observium will work with SELinux active, so it is better to disable it permanently. To do so, open the ‘/etc/sysconfig/selinux‘ file and set the SELINUX option to ‘disabled‘.

# vi /etc/sysconfig/selinux
SELINUX=disabled

Copy the default configuration file ‘config.php.default‘ to ‘config.php‘ and modify the settings as shown.

# cd /opt/observium
# cp config.php.default config.php

Now open ‘config.php‘ file and enter MySQL details such as database name, username and password.

# vi config.php
// Database config
$config['db_host'] = 'localhost';
$config['db_user'] = 'observium';
$config['db_pass'] = 'dbpassword';
$config['db_name'] = 'observium';

Then add an entry for the fping binary location to config.php. On RHEL-based distributions, fping lives in /usr/sbin rather than /usr/bin.

$config['fping'] = "/usr/sbin/fping";
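If you are unsure where fping lives on your distribution, a small POSIX-shell helper like the following can locate it (find_bin is a made-up name for illustration; the directory list is an assumption covering common layouts):

```shell
# Hypothetical helper: locate a binary across the usual bin directories,
# useful for filling in $config['fping'] correctly for your distribution.
find_bin() {
  for d in /usr/sbin /usr/bin /sbin /bin /usr/local/bin /usr/local/sbin; do
    if [ -x "$d/$1" ]; then
      echo "$d/$1"
      return 0
    fi
  done
  return 1
}

find_bin fping   # on RHEL-based systems this typically prints /usr/sbin/fping
```

Paste whatever path it prints into the $config['fping'] line above.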

Enter MySQL Settings

Enter MySQL Settings

Next, run the following command to set up the MySQL database and insert the default database schema.

# php includes/update/update.php

Insert Observium Database Schema

Insert Observium Database Schema

Step 6: Configure Apache for Observium

Now create an ‘rrd‘ directory under ‘/opt/observium‘ for storing RRDs.

# cd /opt/observium
# mkdir rrd

Next, grant Apache ownership of the ‘rrd‘ directory so the web server can write and store RRDs under it.

# chown apache:apache rrd

Create an Apache Virtual Host directive for Observium in the ‘/etc/httpd/conf/httpd.conf‘ file.

# vi /etc/httpd/conf/httpd.conf

Add the following Virtual Host directive at the bottom of the file, and enable the virtual host section as shown in the screenshot below.

<VirtualHost *:80>
  DocumentRoot /opt/observium/html/
  ServerName  observium.domain.com
  CustomLog /opt/observium/logs/access_log combined
  ErrorLog /opt/observium/logs/error_log
  <Directory "/opt/observium/html/">
  AllowOverride All
  Options FollowSymLinks MultiViews
  </Directory>
</VirtualHost>

Create Observium Virtual Host

Create Observium Virtual Host

To maintain Observium logs, create a ‘logs‘ directory for Apache under ‘/opt/observium‘ and give Apache ownership so it can write logs.

# mkdir /opt/observium/logs
# chown apache:apache /opt/observium/logs

After making these settings, restart the Apache service.

# service httpd restart

Step 7: Create Observium Admin User

Add the first user with a privilege level of 10 (admin). Make sure to replace the username and password with your own choice.

# cd /opt/observium
# ./adduser.php tecmint tecmint123 10

User tecmint added successfully.

Next, add a new device and run the following commands to populate the data for it.

# ./add_device.php <hostname> <community> v2c
# ./discovery.php -h all
# ./poller.php -h all

Populate Observium Data

Populate Observium Data

Next, set up cron jobs: create a new file ‘/etc/cron.d/observium‘ and add the following contents.

33  */6   * * *   root    /opt/observium/discovery.php -h all >> /dev/null 2>&1
*/5 *      * * *   root    /opt/observium/discovery.php -h new >> /dev/null 2>&1
*/5 *      * * *   root    /opt/observium/poller-wrapper.py 1 >> /dev/null 2>&1

Reload the cron service to pick up the new entries.

# /etc/init.d/crond reload

The final step is to enable the httpd and mysqld services system-wide, so they automatically start after system boot.

# chkconfig mysqld on
# chkconfig httpd on

Finally, open your favourite browser and point to http://Your-Ip-Address.

Observium Login Screen

Observium Login Screen

Observium Dashboard

Observium Dashboard

Observium Screenshot Tour

The following are screen grabs from mid-2013, taken from the Observium website. For an up-to-date view, please check the live demo.

Complete System Information

Complete System Information

Load Average Graphs

Load Average Graphs

Historical Usage Overview

Historical Usage Overview

CPU Frequency Monitoring

CPU Frequency Monitoring

Conclusion

Observium is not meant to completely replace other monitoring tools such as Nagios or Cacti, but rather to complement them with its terrific understanding of certain devices. For this reason, it is worthwhile to deploy Observium alongside Nagios, or other monitoring systems that provide alerting, and Cacti to produce customized graphing of your network devices.

Reference Links:

  1. Observium Homepage
  2. Observium Documentation

Source

Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux

Sysstat is really a handy tool that comes with a number of utilities to monitor system resources, their performance and usage activity. A number of utilities that we all use on a daily basis ship with the sysstat package. It also provides a tool that can be scheduled using cron to collect all performance and activity data.

Install Sysstat in CentOS

Install Sysstat in Linux

The following is the list of tools included in the sysstat package.

Sysstat Features

  1. iostat: Reports CPU statistics and I/O statistics for devices and partitions.
  2. mpstat: Details about CPUs (individual or combined).
  3. pidstat: Statistics about running processes/tasks, CPU, memory, etc.
  4. sar: Saves and reports details about different resources (CPU, memory, I/O, network, kernel, etc.).
  5. sadc: System activity data collector, used for collecting data in the backend for sar.
  6. sa1: Fetches and stores binary data in the sadc data file; used with sadc.
  7. sa2: Summarizes the daily report; used with sar.
  8. sadf: Used for displaying data generated by sar in different formats (CSV or XML).
  9. sysstat: Man page for the sysstat utility.
  10. nfsiostat-sysstat: I/O statistics for NFS.
  11. cifsiostat: Statistics for CIFS.

Recently, on 17th of June 2014, Sysstat 11.0.0 (stable version) was released with some interesting new features, as follows.

The pidstat command has been enhanced with some new options: first is “-R”, which provides information about the policy and task scheduling priority; second is “-G”, with which we can search processes by name and get the list of all matching threads.

Some new enhancements have been brought to sar, sadc and sadf with regard to the data files: data files can now be named using “saYYYYMMDD” instead of “saDD” via the option -D, and can be located in a directory other than “/var/log/sa”. We can define the new directory by setting the variable “SA_DIR”, which is used by sa1 and sa2.
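As an illustration, the two features could be combined in a cron entry like the following sketch; the sa1 path (/usr/local/lib/sa/sa1) and the /data/sysstat directory are assumptions you should adjust to your own build:

```
# collect system activity data every 10 minutes, storing files as
# saYYYYMMDD (-D) under a custom directory defined by SA_DIR
SA_DIR=/data/sysstat
*/10 * * * * root /usr/local/lib/sa/sa1 -D 1 1
```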

Installation of Sysstat in Linux

The ‘sysstat‘ package is also available to install from the default repository in all major Linux distributions. However, the package available from the repo is a little old and outdated. For that reason, we are going to download and install the latest version of sysstat (i.e. version 11.0.0) from the source package.

First download the latest version of the sysstat package using the following link, or use the wget command to download it directly in the terminal.

  1. http://sebastien.godard.pagesperso-orange.fr/download.html
# wget http://pagesperso-orange.fr/sebastien.godard/sysstat-11.0.0.tar.gz

Download Sysstat Package

Download Sysstat Package

Next, extract the downloaded package and go inside that directory to begin compile process.

# tar -xvf sysstat-11.0.0.tar.gz 
# cd sysstat-11.0.0/

Here you will have two options for compilation:

a). First, you can use iconfig (which gives you the flexibility of choosing/entering customized values for each parameter).

# ./iconfig

Sysstat iconfig Command

Sysstat iconfig Command

b). Second, you can use the standard configure command to define options in a single line. You can run the ./configure --help command to get a list of the supported options.

# ./configure --help

Sysstat Configure Help

Sysstat Configure Help

Here, we move ahead with the standard option, i.e. the ./configure command, to compile the sysstat package.

# ./configure
# make
# make install		

Configure Sysstat in Linux

Configure Sysstat in Linux

After the compilation process completes, you will see output similar to the above. Now, verify the sysstat version by running the following command.

# mpstat -V

sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)

Updating Sysstat in Linux

By default sysstat uses “/usr/local” as its prefix directory, so all binaries/utilities get installed in the “/usr/local/bin” directory. If you have an existing sysstat package installed, its binaries will be in “/usr/bin”.

Because of the existing sysstat package, your updated version will not be picked up if your “$PATH” variable doesn’t include “/usr/local/bin”. So, make sure “/usr/local/bin” is in your “$PATH”, or set the --prefix option to “/usr” during compilation and remove the existing version before updating.
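A quick POSIX-shell check (a sketch; path_has is a made-up helper name) to confirm that /usr/local/bin is actually on your $PATH before trusting the mpstat output:

```shell
# Report whether /usr/local/bin is present in $PATH; if it is missing,
# the freshly built sysstat binaries in /usr/local/bin will be shadowed
# by any distribution copies in /usr/bin.
path_has() {
  case ":$PATH:" in
    *:"$1":*) return 0 ;;
    *) return 1 ;;
  esac
}

if path_has /usr/local/bin; then
  echo "/usr/local/bin is in PATH"
else
  echo "add /usr/local/bin to PATH, e.g.: export PATH=/usr/local/bin:\$PATH"
fi
```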

# yum remove sysstat			[On RedHat based System]
# apt-get remove sysstat		[On Debian based System]
# ./configure --prefix=/usr
# make
# make install

Now, verify the updated version of sysstat again using the same ‘mpstat’ command with the ‘-V’ option.

# mpstat -V

sysstat version 11.0.0
(C) Sebastien Godard (sysstat <at> orange.fr)

Reference: For more information please go through Sysstat Documentation

That’s it for now, in my upcoming article, I will show some practical examples and usages of sysstat command, till then stay tuned to updates and don’t forget to add your valuable thoughts about the article at below comment section.

Source

10 Tips On How to Use Wireshark to Analyze Packets in Your Network

In any packet-switched network, packets represent units of data that are transmitted between computers. It is the responsibility of network engineers and system administrators alike to monitor and inspect the packets for security and troubleshooting purposes.

To do this, they rely on software programs called network packet analyzers, with Wireshark perhaps being the most popular and widely used due to its versatility and ease of use. On top of this, Wireshark allows you not only to monitor traffic in real time, but also to save it to a file for later inspection.

In this article we will share 10 tips on how to use Wireshark to analyze packets in your network, and hope that when you reach the Summary section you will feel inclined to add it to your bookmarks.

Installing Wireshark in Linux

To install Wireshark, select the right installer for your operating system / architecture from https://www.wireshark.org/download.html.

In particular, if you are using Linux, Wireshark should be available directly from your distribution’s repositories for an easier install at your convenience. Although versions may differ, the options and menus should be similar, if not identical, in each one.

------------ On Debian/Ubuntu based Distros ------------ 
$ sudo apt-get install wireshark

------------ On CentOS/RHEL based Distros ------------
$ sudo yum install wireshark

------------ On Fedora 22+ Releases ------------
$ sudo dnf install wireshark

There is a known bug in Debian and derivatives that may prevent listing the network interfaces unless you use sudo to launch Wireshark. To fix this, follow the accepted answer in this post.

Once Wireshark is running, you can select the network interface that you want to monitor under Capture:

Wireshark Network Analyzer

Wireshark Network Analyzer

In this article we will use eth0, but you can choose another one if you wish. Don’t click on the interface yet – we will do so later once we have reviewed a few capture options.

Setting Capture Options

The most useful capture options we will consider are:

  1. Network interface – As we explained before, we will only analyze packets passing through eth0, either incoming or outgoing.
  2. Capture filter – This option allows us to indicate what kind of traffic we want to monitor by port, protocol, or type.

Before we proceed with the tips, it is important to note that some organizations forbid the use of Wireshark in their networks. That said, if you are not utilizing Wireshark for personal purposes make sure your organization allows its use.

For the time being, just select eth0 from the dropdown list and click Start at the bottom. You will start seeing all traffic passing through that interface. Not really useful for monitoring purposes due to the high volume of packets inspected, but it’s a start.

Monitor Network Interface Traffic

Monitor Network Interface Traffic

In the above image we can also see the icons to list the available interfaces, to stop the current capture, and to restart it (red box on the left) and to configure and edit a filter (red box on the right). When you hover over one of these icons, a tooltip will be displayed to indicate what it does.

We will begin by illustrating capture options, whereas tips #7 through #10 will discuss how to actually do something useful with a capture.

TIP #1 – Inspect HTTP Traffic

Type http in the filter box and click Apply. Launch your browser and go to any site you wish:

Inspect HTTP Network Traffic

Inspect HTTP Network Traffic

To begin every subsequent tip, stop the live capture and edit the capture filter.

TIP #2 – Inspect HTTP Traffic from a Given IP Address

In this particular tip, we will prepend ip.src==192.168.0.10 && to the filter stanza to monitor HTTP traffic originating from 192.168.0.10:

Inspect HTTP Traffic on IP Address

Inspect HTTP Traffic on IP Address

TIP #3 – Inspect HTTP Traffic to a Given IP Address

Closely related to #2, in this case we will use ip.dst as part of the capture filter as follows:

ip.dst==192.168.0.10&&http

Monitor HTTP Network Traffic to IP Address

Monitor HTTP Network Traffic to IP Address

To combine tips #2 and #3, you can use ip.addr in the filter rule instead of ip.src or ip.dst.

TIP #4 – Monitor Apache and MySQL Network Traffic

Sometimes you will be interested in inspecting traffic that matches either (or both) of two conditions. For example, to monitor traffic on TCP ports 80 (web server) and 3306 (MySQL / MariaDB database server), you can use an OR condition in the capture filter:

tcp.port==80||tcp.port==3306

Monitor Apache and MySQL Traffic

Monitor Apache and MySQL Traffic

As in tips #2 and #3, || and the word or produce the same results. The same goes for && and the word and.
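If you ever script captures with tshark (Wireshark's command-line companion), the same display-filter syntax applies via its -Y option. Here is a tiny illustrative helper (mk_filter is a made-up name) that assembles such a filter string:

```shell
# Hypothetical helper: assemble a Wireshark/tshark display-filter string
# for a given host and TCP port; the same syntax works in the GUI filter
# box or with `tshark -Y "$filter"`.
mk_filter() {
  printf 'ip.addr==%s && tcp.port==%s\n' "$1" "$2"
}

mk_filter 192.168.0.10 3306   # -> ip.addr==192.168.0.10 && tcp.port==3306
```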

TIP #5 – Reject Packets to Given IP Address

To exclude packets not matching the filter rule, use ! and enclose the rule within parentheses. For example, to exclude packets originating from or directed to a given IP address, you can use:

!(ip.addr == 192.168.0.10)

TIP #6 – Monitor Local Network Traffic (192.168.0.0/24)

The following filter rule will display only local traffic and exclude packets going to and coming from the Internet:

ip.src==192.168.0.0/24 and ip.dst==192.168.0.0/24

Monitor Local Network Traffic

Monitor Local Network Traffic

TIP #7 – Monitor the Contents of a TCP Conversation

To inspect the contents of a TCP conversation (data exchange), right-click on a given packet and choose Follow TCP stream. A window will pop up with the content of the conversation.

This will include HTTP headers if we are inspecting web traffic, and also any plain text credentials transmitted during the process, if any.

Monitor TCP Conversation

Monitor TCP Conversation

TIP #8 – Edit Coloring Rules

By now I am sure you already noticed that each row in the capture window is colored. By default, HTTP traffic appears in green background with black text, whereas checksum errors are shown in red text with black background.

If you wish to change these settings, click the Edit coloring rules icon, choose a given filter and click Edit.

Customize Wireshark Output in Colors

Customize Wireshark Output in Colors

TIP #9 – Save the Capture to a File

Saving the contents of a capture allows us to inspect it in greater detail. To do this, go to File → Export and choose an export format from the list:

Save Wireshark Capture to File

Save Wireshark Capture to File

TIP #10 – Practice with Capture Samples

If you think your network is “boring”, Wireshark provides a series of sample capture files that you can use to practice and learn. You can download these SampleCaptures and open them via the File → Open menu.

Summary

Wireshark is free and open source software, as you can see in the FAQs section of the official website. You can configure a capture filter either before or after starting an inspection.

In case you didn’t notice, the filter has an autocomplete feature that allows you to easily search for the most used options that you can customize later. With that, the sky is the limit!

Source

VnStat PHP: A Web Based Interface for Monitoring Network Bandwidth Usage

VnStat PHP is a graphical interface for the well-known console-mode network logger utility called “vnstat“. VnStat PHP is a graphical frontend to VnStat that lets you view and monitor network traffic bandwidth usage reports in a nice graphical format. It displays IN and OUT network traffic statistics hourly, daily, monthly, or as a full summary.

This article shows you how to install VnStat and VnStat PHP  in Linux systems.

VnStat PHP Prerequisites

You need to install the following software packages on your system.

  1. VnStat : A command-line network bandwidth monitoring tool; it must be installed and configured, and should already be collecting network bandwidth statistics.
  2. Apache : A Web Server to serve web pages.
  3. PHP 5 : A server-side scripting language for executing php scripts on the server.
  4. php-gd extension : A GD extension for serving graphic images.

Step 1: Installing and Configuring VnStat Command Line Tool

VnStat is a command-line network bandwidth monitoring utility that counts bandwidth (transmitted and received) on network devices and keeps the data in its own database.

VnStat is a third-party tool and can be installed by enabling the EPEL repository on Red Hat based systems. Once you’ve enabled it, you can install VnStat using the yum command as shown below.

On RHEL/CentOS and Fedora
# yum install vnstat
On Debian/Ubuntu and Linux Mint

Debian users can simply use apt-get to install it:

$ sudo apt-get install vnstat

As mentioned, VnStat maintains its own database to keep all network information. To create a new database for the network interface “eth0“, issue the following command. Make sure to replace the interface name as per your requirements.

# vnstat -i eth0

Error: Unable to read database "/var/lib/vnstat/eth0".
Info: -> A new database has been created.

If you get the above error, don’t worry; it appears because you are executing the command for the first time, so it creates a new database for eth0.

Now run the following command to update all enabled databases, or only a specific interface with the -i parameter as shown. It will generate IN and OUT traffic statistics for the eth0 interface.

# vnstat -u -i eth0

Next, add a crontab entry that runs every 5 minutes and updates the databases to generate traffic statistics.

*/5 * * * * /usr/bin/vnstat -u >/dev/null 2>&1

Step 2: Installing Apache, Php and Php-gd Extension

Install the following software packages with the help of package manager tool called “yum” for Red Hat based systems and “apt-get” for Debian based systems.

On RHEL/CentOS and Fedora
# yum install httpd php php-gd

Turn on Apache at system start-up and start the service.

# chkconfig httpd on
# service httpd start

Run the following “iptables” command to open Apache port “80” on firewall and then restart the service.

# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
# service iptables restart
On Debian/Ubuntu and Linux Mint
$ sudo apt-get install apache2 php5 php5-gd
$ sudo /etc/init.d/apache2 start

Open port 80 for Apache.

$ sudo ufw allow 80

Step 3: Downloading VnStat PHP Frontend

Download latest VnStat PHP source tarball file using “wget command” as shown below or visit THIS PAGE to grab latest version.

# cd /tmp
# wget http://www.sqweek.com/sqweek/files/vnstat_php_frontend-1.5.1.tar.gz

Extract the source tarball using the “tar command” as shown.

# tar xvf vnstat_php_frontend-1.5.1.tar.gz

Step 4: Installing VnStat PHP Frontend

Once extracted, you will see a directory called “vnstat_php_frontend-1.5.1“. Copy the contents of this directory to the web server root location as a directory named vnstat, as shown below.

On RHEL/CentOS and Fedora
# cp -fr vnstat_php_frontend-1.5.1/ /var/www/html/vnstat

If SELinux is enabled on your system, run the “restorecon” command to restore the files’ default SELinux security contexts.

# restorecon -Rv /var/www/html/vnstat/
On Debian/Ubuntu and Linux Mint
# cp -fr vnstat_php_frontend-1.5.1/ /var/www/vnstat

Step 5: Configuring VnStat PHP Frontend

Configure it to match your setup. To do this, open the following file with the vi editor and change the parameters as shown below.

On RHEL/CentOS and Fedora
# vi /var/www/html/vnstat/config.php
On Debian/Ubuntu and Linux Mint
# vi /var/www/vnstat/config.php

Set your default language.

// edit these to reflect your particular situation
$locale = 'en_US.UTF-8';
$language = 'en';

Define your network interfaces to be monitored.

// list of network interfaces monitored by vnStat
$iface_list = array('eth0', 'eth1');

You can set custom names for your network interfaces.

// optional names for interfaces
// if there's no name set for an interface then the interface identifier
// will be displayed instead
$iface_title['eth0'] = 'Internal';
$iface_title['eth1'] = 'External';

Save and close the file.

Step 6: Access VnStat PHP and View Graphs

Open your favourite browser and navigate to either of the following links. You will see fancy network graphs showing a summary of network bandwidth usage by hours, days and months.

http://localhost/vnstat/
http://your-ip-address/vnstat/
Sample Output

Install Vnstat PHP in Linux

VnStat PHP Network Summary

Reference Link

VnStat PHP Homepage

Source

How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 7

If you are a person who is, or has been in the past, in charge of inspecting and analyzing system logs in Linux, you know what a nightmare that task can become if multiple services are being monitored simultaneously.

In days past, that task had to be done mostly manually, with each log type being handled separately. Fortunately, the combination of Elasticsearch, Logstash, and Kibana on the server side, along with Filebeat on the client side, makes that once difficult task look like a walk in the park today.

The first three components form what is called an ELK stack, whose main purpose is to collect logs from multiple servers at the same time (also known as centralized logging).

Suggested Read: 4 Good Open Source Log Monitoring and Management Tools for Linux

A built-in java-based web interface allows you to inspect logs quickly at a glance for easier comparison and troubleshooting. These client logs are sent to a central server by Filebeat, which can be described as a log shipping agent.

Let’s see how all of these pieces fit together. Our test environment will consist of the following machines:

Central Server: CentOS 7 (IP address: 192.168.0.29). 2 GB of RAM.
Client #1: CentOS 7 (IP address: 192.168.0.100). 1 GB of RAM.
Client #2: Debian 8 (IP address: 192.168.0.101). 1 GB of RAM.

Please note that the RAM values provided here are not strict prerequisites, but recommended values for successful implementation of the ELK stack on the central server. Less RAM on clients will not make much difference, if any, at all.

Installing ELK Stack on the Server

Let’s begin by installing the ELK stack on the server, along with a brief explanation on what each component does:

  1. Elasticsearch stores the logs that are sent by the clients.
  2. Logstash processes those logs.
  3. Kibana provides the web interface that will help us to inspect and analyze the logs.

Install the following packages on the central server. First off, we will install Java version 8 (update 102, the latest one at the time of this writing), which is a dependency of the ELK components.

You may want to check first in the Java downloads page here to see if there is a newer update available.

# yum update
# cd /opt
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jre-8u102-linux-x64.rpm"
# rpm -Uvh jre-8u102-linux-x64.rpm

Time to check whether the installation completed successfully:

# java -version
Check Java Version from Commandline

Check Java Version from Commandline

To install the latest versions of Elasticsearch, Logstash, and Kibana, we will have to create the repositories for yum manually as follows:

Enable Elasticsearch Repository

1. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

2. Insert the following lines to the repository configuration file elasticsearch.repo:

/etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3. Install the Elasticsearch package.

# yum install elasticsearch

When the installation is complete, you will be prompted to start and enable elasticsearch:

Install Elasticsearch in Linux

Install Elasticsearch in Linux

4. Start and enable the service.

# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch

5. Allow traffic through TCP port 9200 in your firewall:

# firewall-cmd --add-port=9200/tcp
# firewall-cmd --add-port=9200/tcp --permanent

6. Check if Elasticsearch responds to simple requests over HTTP:

# curl -X GET http://localhost:9200

The output of the above command should be similar to:

Verify Elasticsearch Installation

Verify Elasticsearch Installation
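While the exact values vary per installation, the response is a small JSON document along these lines (an illustrative sketch; the node name and version numbers will differ on your system):

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.0",
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
```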

Make sure you complete the above steps and then proceed with Logstash. Since both Logstash and Kibana share the Elasticsearch GPG key, there is no need to re-import it before installing the packages.

Suggested Read: Manage System Logs (Configure, Rotate and Import Into Database) in CentOS 7

Enable Logstash Repository

7. Insert the following lines to the repository configuration file logstash.repo:

/etc/yum.repos.d/logstash.repo
[logstash]
name=Logstash
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

8. Install the Logstash package:

# yum install logstash

9. To create an SSL certificate based on the IP address of the ELK server, add the following line below the [ v3_ca ] section in /etc/pki/tls/openssl.cnf:

[ v3_ca ]
subjectAltName = IP: 192.168.0.29
Add Elasticsearch Server IP Address

Add Elasticsearch Server IP Address

10. Generate a self-signed certificate valid for 3650 days:

# cd /etc/pki/tls
# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
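To double-check the validity window of the certificate generated above, you can print its dates with openssl; cert_dates below is a made-up convenience wrapper, not part of any tool:

```shell
# Print the notBefore/notAfter timestamps of a certificate so you can
# confirm the validity window requested with -days above.
cert_dates() {
  openssl x509 -in "$1" -noout -dates
}

# On the ELK server, for the file generated in the previous step:
#   cert_dates /etc/pki/tls/certs/logstash-forwarder.crt
```

notAfter should be roughly ten years in the future.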

11. Configure Logstash input, output, and filter files:

Input: Create /etc/logstash/conf.d/input.conf and insert the following lines into it. This is necessary for Logstash to “learn” how to process beats coming from clients. Make sure the path to the certificate and key match the right paths as outlined in the previous step:

/etc/logstash/conf.d/input.conf
input {
  beats {
	port => 5044
	ssl => true
	ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
	ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

Output (/etc/logstash/conf.d/output.conf) file:

/etc/logstash/conf.d/output.conf
output {
  elasticsearch {
	hosts => ["localhost:9200"]
	sniffing => true
	manage_template => false
	index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
	document_type => "%{[@metadata][type]}"
  }
}

Filter (/etc/logstash/conf.d/filter.conf) file. We will log syslog messages for simplicity:

/etc/logstash/conf.d/filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

12. Verify the Logstash configuration files.

# service logstash configtest
Verify Logstash Configuration

13. Start and enable logstash:

# systemctl daemon-reload
# systemctl start logstash
# systemctl enable logstash

14. Configure the firewall to allow Logstash to get the logs from the clients (TCP port 5044):

# firewall-cmd --add-port=5044/tcp
# firewall-cmd --add-port=5044/tcp --permanent

Enable Kibana Repository

15. Insert the following lines into the repository configuration file kibana.repo:

/etc/yum.repos.d/kibana.repo
[kibana]
name=Kibana repository
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

16. Install the Kibana package:

# yum install kibana

17. Start and enable Kibana:

# systemctl daemon-reload
# systemctl start kibana
# systemctl enable kibana

18. Make sure you can access Kibana’s web interface from another computer (allow traffic on TCP port 5601):

# firewall-cmd --add-port=5601/tcp
# firewall-cmd --add-port=5601/tcp --permanent

19. Launch Kibana (http://192.168.0.29:5601) to verify that you can access the web interface:

Access Kibana Web Interface

We will return here after we have installed and configured Filebeat on the clients.

Suggested Read: Monitor Server Logs in Real-Time with “Log.io” Tool in Linux

Install Filebeat on the Client Servers

We will show you how to do this for Client #1 (repeat for Client #2 afterwards, changing paths if applicable to your distribution).

1. Copy the SSL certificate from the server to the clients:

# scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.0.100:/etc/pki/tls/certs/

2. Import the Elasticsearch public GPG key to the rpm package manager:

# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

3. Create a repository for Filebeat (/etc/yum.repos.d/filebeat.repo) on CentOS-based distributions:

/etc/yum.repos.d/filebeat.repo
[filebeat]
name=Filebeat for ELK clients
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1

4. Configure the source to install Filebeat on Debian and its derivatives:

# aptitude install apt-transport-https
# echo "deb https://packages.elastic.co/beats/apt stable main" > /etc/apt/sources.list.d/filebeat.list
# aptitude update

5. Install the Filebeat package:

# yum install filebeat        [On CentOS-based distros]
# aptitude install filebeat   [On Debian and its derivatives]

6. Start and enable Filebeat:

# systemctl start filebeat
# systemctl enable filebeat

Configure Filebeat

A word of caution here. Filebeat configuration is stored in a YAML file, which requires strict indentation. Be careful with this as you edit /etc/filebeat/filebeat.yml as follows:

  1. Under paths, indicate which log files should be “shipped” to the ELK server.
  2. Under prospectors, set:
input_type: log
document_type: syslog
  3. Under output:
    1. Uncomment the line that begins with logstash.
    2. Indicate the IP address of your ELK server and the port where Logstash is listening in hosts.
    3. Make sure the path to the certificate points to the actual file you created in the Logstash section above.
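
Put together, the relevant parts of /etc/filebeat/filebeat.yml would look roughly like the sketch below. This is a minimal, hypothetical fragment for the Filebeat 1.x generation used here, assuming the ELK server is at 192.168.0.29 and the certificate path from the Logstash section; your file will contain many more (commented-out) options:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages   # log files to ship to the ELK server
        - /var/log/secure
      input_type: log
      document_type: syslog
output:
  logstash:
    hosts: ["192.168.0.29:5044"]   # ELK server IP and Logstash port
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```

Remember that YAML is indentation-sensitive, so keep the spacing exactly as shown.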

The above steps are illustrated in the following image:

Configure Filebeat in Client Servers

Save changes, and then restart Filebeat on the clients:

# systemctl restart filebeat

Once we have completed the above steps on the clients, feel free to proceed.

Testing Filebeat

In order to verify that the logs from the clients can be sent and received successfully, run the following command on the ELK server:

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

The output should be similar to (notice how messages from /var/log/messages and /var/log/secure are being received from client1 and client2):

Testing Filebeat

Otherwise, check the Filebeat configuration file for errors. Running the following command after attempting to restart Filebeat will point you to the offending line(s):

# journalctl -xe

Testing Kibana

After we have verified that logs are being shipped by the clients and received successfully on the server, the first thing we will have to do in Kibana is configure an index pattern and set it as default.

You can think of an index as the equivalent of a full database in a relational database context. We will go with filebeat-* (or you can use a more precise search criterion, as explained in the official documentation).

Enter filebeat-* in the Index name or pattern field and then click Create:

Testing Kibana

Please note that you will be allowed to enter a more fine-grained search criteria later. Next, click the star inside the green rectangle to configure it as the default index pattern:

Configure Default Kibana Index Pattern

Finally, in the Discover menu you will find several fields to add to the log visualization report. Just hover over them and click Add:

Add Log Visualization Report

The results will be shown in the central area of the screen as shown above. Feel free to play around (add and remove fields from the log report) to become familiar with Kibana.

By default, Kibana will display the records that were processed during the last 15 minutes (see upper right corner) but you can change that behavior by selecting another time frame:

Kibana Log Reports

Summary

In this article we have explained how to set up an ELK stack to collect the system logs sent by two clients, a CentOS 7 and a Debian 8 machine.

Now you can refer to the official Elasticsearch documentation and find more details on how to use this setup to inspect and analyze your logs more efficiently.

If you have any questions, don’t hesitate to ask. We look forward to hearing from you.

10 7zip (File Archive) Command Examples in Linux

7-Zip is a free, open source, cross-platform, powerful, and fully-featured file archiver with a high compression ratio, originally developed for Windows. Its powerful command-line version has been ported to Linux/POSIX systems.

It offers a high compression ratio in 7z format with LZMA and LZMA2 compression, and supports many other archive formats such as XZ, BZIP2, GZIP, TAR, ZIP and WIM for both packing and unpacking; AR, RAR, MBR, EXT, NTFS, FAT, GPT, HFS, ISO, RPM, LZMA, UEFI, Z, and many others for extracting only.

It provides strong AES-256 encryption in the 7z and ZIP formats, and offers a compression ratio for ZIP and GZIP formats that is 2-10% better than those offered by PKZip and WinZip. It also comes with self-extracting capability for the 7z format and is localized in up to 87 languages.

How to Install 7zip in Linux

The port of 7zip to Linux systems is called p7zip; this package comes pre-installed on many mainstream Linux distributions. You need to install the p7zip-full package to get the 7z, 7za, and 7zr CLI utilities on your system, as follows.

Install 7zip on Debian, Ubuntu or Linux Mint

Debian-based Linux distributions come with three software packages related to 7zip: p7zip, p7zip-full and p7zip-rar. It is suggested to install the p7zip-full package, which supports many archive formats.

$ sudo apt-get install p7zip-full

Install 7zip on Fedora or CentOS/RHEL

Red Hat-based Linux distributions come with two packages related to 7zip: p7zip and p7zip-plugins. It is suggested to install both packages.

To install these two packages, you need to enable the EPEL repository on CentOS/RHEL distributions. On Fedora, there is no need to set up any additional repository.

$ sudo yum install p7zip p7zip-plugins

Once the 7zip package is installed, you can move on to learn some useful 7zip command examples to pack or unpack various types of archives in the following section.

Learn 7zip Command Examples in Linux

1. To create a .7z archive file, use the "a" option. The supported archive formats for creation are 7z, XZ, GZIP, TAR, ZIP and BZIP2. If the given archive file already exists, 7z will “add” the files to the existing archive instead of overwriting it.

$ 7z a hyper.7z hyper_1.4.2_i386.deb

Create 7z Archive File in Linux

2. To extract a .7z archive file, use the "e" option, which will extract the archive into the present working directory.

$ 7z e hyper.7z

Extract 7z Archive File in Linux

3. To select an archive format, use the -t (format name) option, which allows you to select an archive format such as zip, gzip, bzip2 or tar (the default is 7z):

$ 7z a -tzip hyper.zip hyper_1.4.2_i386.deb

Create 7z Zip File in Linux

4. To see a list of files in an archive, use the "l" (list) function, which displays the type of archive format, the method used, and the files in the archive, among other information, as shown.

$ 7z l hyper.7z

List 7z File Information

5. To test the integrity of an archive file, use the "t" (test) function as shown.

$ 7z t hyper.7z

Check 7z File Integrity

6. To back up a directory, you should use the 7za utility, which preserves the owner/group of a file, unlike 7z. The -si option enables reading of files from stdin.

$ tar -cf - tecmint_files | 7za a -si tecmint_files.tar.7z

7. To restore a backup, use the -so option, which sends the output to stdout.

$ 7za x -so tecmint_files.tar.7z | tar xf -
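
The backup and restore commands above can be exercised end to end. The sketch below (using a throwaway temporary directory and a hypothetical demo file) creates a small tree, archives it through tar + 7za, then restores it into a separate directory; it assumes the p7zip packages from the installation section are present:

```shell
# Skip quietly if p7zip is not installed on this machine.
command -v 7za >/dev/null 2>&1 || exit 0

workdir=$(mktemp -d)
mkdir -p "$workdir/tecmint_files" "$workdir/restore"
echo "hello" > "$workdir/tecmint_files/demo.txt"

# Back up: stream a tar of the directory into 7za via -si (read from stdin).
cd "$workdir"
tar -cf - tecmint_files | 7za a -si tecmint_files.tar.7z >/dev/null

# Restore: stream the archive back out with -so (write to stdout) into tar.
cd "$workdir/restore"
7za x -so "$workdir/tecmint_files.tar.7z" | tar xf -
cat tecmint_files/demo.txt   # should print: hello
```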

8. To set a compression level, use the -mx option as shown.

$ tar -cf - tecmint_files | 7za a -si -mx=9 tecmint_files.tar.7z

9. To update an existing archive file or remove file(s) from an archive file, use the "u" and "d" options, respectively.

$ 7z u <archive-filename> <list-of-files-to-update>
$ 7z d <archive-filename> <list-of-files-to-delete>

10. To set a password on an archive file, use the -p{password_here} flag as shown.

$ 7za a -p{password_here} tecmint_secrets.tar.7z

For more information, refer to the 7z man page or visit the 7zip homepage: https://www.7-zip.org/.

That’s all for now! In this article, we have explained 10 7zip (File Archive) command examples in Linux. Use the feedback form below to ask any questions or share your thoughts with us.
