Cockpit – A Powerful Tool to Monitor and Administer Multiple Linux Servers Using a Web Browser

Cockpit is an easy-to-use, lightweight, simple yet powerful remote manager for GNU/Linux servers. It is an interactive server administration user interface that offers a live Linux session via a web browser.

It can run on several Linux distributions including Debian, Ubuntu, Fedora, CentOS, RHEL and Arch Linux, among others.

Cockpit makes Linux discoverable, enabling system administrators to easily and reliably carry out tasks such as starting containers, managing storage, configuring networks and inspecting logs, among several others.

Suggested Read: 20 Command Line Tools to Monitor Linux Performance

While using it, users can easily switch between the Linux terminal and the web browser without any hassle. Importantly, a service started via Cockpit can be stopped via the terminal, and if an error occurs in the terminal, it is shown in the Cockpit journal interface.
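For example (a hypothetical illustration – assume a unit called myapp.service exists on the host), a service started from the Cockpit Services page is the same systemd unit you see in the shell, so both views always agree:

$ sudo systemctl status myapp.service      [check the state set from Cockpit]
$ sudo systemctl stop myapp.service        [stop it from the terminal]
$ journalctl -u myapp.service              [the same entries appear under Cockpit's Logs page]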

Features of Cockpit:

  1. Enables managing multiple servers in one Cockpit session.
  2. Offers a web-based shell in a terminal window.
  3. Containers can be managed via Docker.
  4. Supports efficient management of system user accounts.
  5. Collects system performance information using the Performance Co-Pilot framework and displays it in graphs.
  6. Supports gathering of system configuration and diagnostic information using sos-report.
  7. Also supports Kubernetes and OpenShift v3 clusters.
  8. Allows modification of network settings, and many more.

How to Install Cockpit in Linux Systems

You can install Cockpit on the major Linux distributions as shown below:

Install Cockpit on Fedora and CentOS

To install and enable Cockpit on Fedora and CentOS, use the following commands.

# yum install cockpit
# systemctl enable --now cockpit.socket
# firewall-cmd --add-service=cockpit
# firewall-cmd --add-service=cockpit --permanent

Install Cockpit on RHEL

Cockpit is included in the Red Hat Enterprise Linux Extras repository from version 7.1 onwards:

# subscription-manager repos --enable rhel-7-server-extras-rpms
# systemctl enable --now cockpit.socket
# firewall-cmd --add-service=cockpit
# firewall-cmd --add-service=cockpit --permanent

Install Cockpit on Debian

Cockpit is not included in the Debian official repositories, but you can install it from the following repository, which contains weekly builds specially for Debian unstable:

First, add the following repository line to the /etc/apt/sources.list file.

deb https://fedorapeople.org/groups/cockpit/debian unstable main

Next, import Cockpit’s signing key and then run the following series of commands to install it.

$ sudo apt-key adv --keyserver sks-keyservers.net --recv-keys F1BAA57C
$ sudo apt-get update
$ sudo apt-get install cockpit
$ sudo systemctl enable --now cockpit.socket

Install Cockpit on Ubuntu and Linux Mint

Cockpit is not included in Ubuntu and Linux Mint distributions, but you can install it from the official Cockpit PPA by executing the following commands:

$ sudo add-apt-repository ppa:cockpit-project/cockpit
$ sudo apt-get update
$ sudo apt-get install cockpit
$ sudo systemctl enable --now cockpit.socket

Install Cockpit on Arch Linux

Arch Linux users can install Cockpit from the Arch User Repository using the following commands.

# yaourt cockpit
# systemctl start cockpit
# systemctl enable cockpit.socket

How to Use Cockpit in Linux

After Cockpit has been installed successfully, you can access it using a web browser at one of the following addresses.

https://ip-address:9090
OR
https://server.domain.com:9090
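If the login page does not load, a quick sanity check (shown here only as a sketch, assuming the package and firewall setup from the installation steps above) is to confirm that the Cockpit socket is listening on port 9090:

$ sudo ss -tlnp | grep 9090
$ curl -k -I https://localhost:9090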

Enter your system username and password to log in to the interface shown below:

Cockpit Web Interface

After logging in, you will be presented with a summary of your system information and performance graphs for CPU, Memory, Disk I/O and Network traffic, as seen in the next image:

Linux System Performance Summary

Next on the dashboard menu is Services. Here you can view the Targets, System Services, Sockets, Timers and Paths pages.

The interface below shows running services on your system.

Showing Current Running Services on Linux

You can click on a single service to manage it. Simply use the drop-down menus to get the functionality you want.

View Linux Service Summary

The Logs menu item displays the logs page, which allows for log inspection. The logs are categorized into Errors, Warnings, Notices and All, as in the image below.

Additionally, you can view logs by time range, such as the last 24 hours or 7 days.

Suggested Read: 4 Best Log Monitoring and Management Tools for Linux

To inspect a single log entry, simply click on it.

Linux Logs Monitoring

Cockpit also enables you to manage user accounts on the system: go to Tools and click on Accounts. Clicking on a user account allows you to view that user’s account details.

Manage Linux User Accounts

To add a system user, click on the “Create New Account” button and enter the necessary user information in the interface below.

Create User Account in Linux

To get a terminal window, go to Tools → Terminal.

Cockpit – Linux Web Terminal

How to Add Linux Server to Cockpit

Important: You must install Cockpit on all remote Linux servers in order to monitor them from the Cockpit dashboard, so please install it before adding any new server to Cockpit.

To add another server, click on Dashboard and you will see the screen below. Click on the (+) sign and enter the server’s IP address. Remember that the information for each server you add is displayed in Cockpit using a distinct color.

Add Linux Server to Cockpit

Cockpit – Remote Linux Server Monitoring

In the same way, you can add many Linux servers to Cockpit and manage them efficiently without any trouble.

That is it for now; there is a lot more to explore once you have installed this simple and wonderful remote server manager.

Cockpit Official Documentation: http://cockpit-project.org/guide/latest/

For any questions, suggestions or feedback on the topic, do not hesitate to use the comment section below.

Source

Web VMStat: A Real Time System Statistics (Memory, CPU, Processes, etc.) Monitoring Tool for Linux

Web-Vmstat is a small application written in HTML and JavaScript that displays live Linux system statistics, such as Memory, CPU, I/O, Processes, etc., collected from the vmstat command-line monitoring tool and rendered in a pretty web page with charts (SmoothieCharts) through WebSocket streams using the websocketd program.

Install Web-Vmstat in Linux

I’ve recorded a quick video review of what the application can do on a Gentoo system.

Requirements

On a Linux system the following utilities must be installed.

  1. wget, for retrieving files using HTTP, HTTPS and FTP protocols.
  2. The nano or vi CLI text editor.
  3. The unzip archive extractor.

This tutorial will guide you through installing the Web-Vmstat application on CentOS 6.5, but the procedure is valid for all Linux distributions; the only thing that differs is the (optional) init script, which helps you manage the process more easily.

Read Also: Monitor Linux Performance using Vmstat Commands

Step 1: Install Web-Vmstat

1. Before proceeding with installing Web-Vmstat, make sure you have all the above required commands installed on your system. You can install them with a package manager such as yum or apt-get. For example, on CentOS systems, we use the yum command.

# yum install wget nano unzip

2. Now go to the Web-Vmstat official web page and download the latest version using the Download ZIP button, or use wget to download it from the command line.

# wget https://github.com/joewalnes/web-vmstats/archive/master.zip

Download Web-Vmstat Package

3. Extract the downloaded master.zip archive using the unzip utility and change into the extracted folder.

# unzip master.zip
# cd web-vmstats-master

Extract Web-Vmstat Package

Switch to Web-Vmstat Folder

4. The web directory holds the HTML and JavaScript files needed for the application to run in a web environment. Create a directory on your system where you want to host the web files and move all web content to that directory.

This tutorial uses /opt/web_vmstats/ to host all application web files, but you can use any path on your system you like; just make sure you use the same absolute path consistently.

# mkdir /opt/web_vmstats
# cp -r web/* /opt/web_vmstats/

Create Web-Vmstat Folder

5. The next step is to download and install the websocketd streaming program. Go to the official websocketd page and download the package that matches your system architecture (Linux 64-bit, 32-bit or ARM).

On 32-bit System
# wget https://github.com/joewalnes/websocketd/releases/download/v0.2.9/websocketd-0.2.9-linux_386.zip
On 64-bit System
# wget https://github.com/joewalnes/websocketd/releases/download/v0.2.9/websocketd-0.2.9-linux_amd64.zip

Download WebSocket Package

6. Extract the websocketd archive with the unzip command and copy the websocketd binary to a system executable path to make it available system-wide.

# unzip websocketd-0.2.9-linux_amd64.zip
# cp websocketd /usr/local/bin/

7. Now you can test it by running the websocketd command with the following syntax.

# websocketd --port=8080 --staticdir=/opt/web_vmstats/ /usr/bin/vmstat -n 1

Each parameter is explained below.

  1. --port=8080: The port used for HTTP connections – you can use any port number you want.
  2. --staticdir=/opt/web_vmstats/: The path where all Web-Vmstat web files are hosted.
  3. /usr/bin/vmstat -n 1: The Linux vmstat command, updating its output every second.
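With websocketd running, you can do a quick sanity check from another terminal to confirm that the static files are being served (a simple illustration; adjust the port if you picked a different one):

# curl -I http://localhost:8080/index.html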

Step 2: Create Init File

8. This step is optional and only works on systems that support init scripts. To manage the websocketd process as a system daemon, create an init service file under /etc/init.d/ with the following content.

# nano /etc/init.d/web-vmstats

Add the following content.

#!/bin/sh
# source function library
. /etc/rc.d/init.d/functions

start() {
        echo "Starting webvmstats process..."
        /usr/local/bin/websocketd --port=8080 --staticdir=/opt/web_vmstats/ /usr/bin/vmstat -n 1 &
}

stop() {
        echo "Stopping webvmstats process..."
        killall websocketd
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    *)
        echo "Usage: /etc/init.d/web-vmstats {start|stop}"
        ;;
esac

Create Web-Vmstat Init Script

9. After the file has been created, add execute permissions and manage the process using the start or stop switches.

# chmod +x /etc/init.d/web-vmstats
# /etc/init.d/web-vmstats start

Start Web-Vmstat

10. If your firewall is active, edit the /etc/sysconfig/iptables firewall file and open the port used by the websocketd process to make it available for outside connections.

# nano /etc/sysconfig/iptables

If you use port 8080 as in this tutorial, add the following line to the iptables file after the rule that opens port 22.

-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
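Alternatively, instead of editing the file by hand, you can insert an equivalent rule from the command line and save it (a sketch assuming the standard iptables service of CentOS 6; adapt it to your own setup):

# iptables -I INPUT -m state --state NEW -p tcp --dport 8080 -j ACCEPT
# service iptables save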

Open Port 8080 in Iptables

11. To finalize the whole process, restart the iptables service to apply the new rule and start the web-vmstats service.

# service iptables restart
# service web-vmstats start

Open a browser and use the following URL to display the Vmstat system statistics.

http://system_IP:8080

Watch Vmstats System Statistics

12. To display the name, version and other details about your current machine and the operating system running on it, go to the Web-Vmstat files path and run the following commands.

# cd /opt/web_vmstats
# cat /etc/issue.net | head -1 > version.txt
# cat /proc/version >> version.txt

13. Then open the index.html file and add the following JavaScript code before the <main id="charts"> line.

# nano index.html

Use the following JavaScript code.

<div align='center'><h3><pre id="contents"></pre></h3></div>
<script>
function populatePre(url) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
        document.getElementById('contents').textContent = this.responseText;
    };
    xhr.open('GET', url);
    xhr.send();
}
populatePre('version.txt');
</script>

Add JavaScript Code

14. To view the final result, refresh the http://system_IP:8080 web page and you should see information and live statistics about your machine, as in the screenshots below.

Watch Live System Statistics

System Live Statistics Graphs

Source

TCPflow – Analyze and Debug Network Traffic in Linux

TCPflow is a free, open source, powerful command line based tool for analyzing network traffic on Unix-like systems such as Linux. It captures data received or transferred over TCP connections, and stores it in a file for later analysis, in a useful format that allows for protocol analysis and debugging.

Read Also: 16 Best Bandwidth Monitoring Tools to Analyze Network Usage in Linux

It is actually a tcpdump-like tool, as it processes packets from the wire or from a stored file, and it supports the same powerful filtering expressions as its counterpart. The difference is that tcpflow puts all the TCP packets into order and assembles each flow in a separate file (one file for each direction of the flow) for later analysis.

Its feature set includes an advanced plug-in system for decompressing compressed HTTP connections, undoing MIME encoding, or invoking third-party programs for post-processing and much more.

There are many use cases for tcpflow, which include understanding network packet flows, performing network forensics and revealing the contents of HTTP sessions.

How to Install TCPflow in Linux Systems

TCPflow is available in the official repositories of mainstream GNU/Linux distributions; you can install it using your package manager as shown.

$ sudo apt install tcpflow	#Debian/Ubuntu
$ sudo yum install tcpflow	#CentOS/RHEL
$ sudo dnf install tcpflow	#Fedora 22+

After installing tcpflow, you can run it with superuser privileges; otherwise, use the sudo command. Note that it listens on the active network interface (for instance enp0s3).

$ sudo tcpflow

tcpflow: listening on enp0s3

By default, tcpflow stores all captured data in files whose names have the following form (this may differ if you use certain options, such as a timestamp).

sourceip.sourceport-destip.destport
192.168.043.031.52920-216.058.210.034.00443

Now let’s do a directory listing to see whether any TCP flows have been captured into files.

$ ls -l

total 20
-rw-r--r--. 1 root    root     808 Sep 19 12:49 192.168.043.031.52920-216.058.210.034.00443
-rw-r--r--. 1 root    root      59 Sep 19 12:49 216.058.210.034.00443-192.168.043.031.52920

As we mentioned earlier, each TCP flow is stored in its own file. From the output above, you can see that there are two transcript files, which represent the two opposite directions of the flow: the source IP of the first file is the destination IP of the second, and vice versa.

The first file 192.168.043.031.52920-216.058.210.034.00443 contains data transferred from host 192.168.043.031 (the localhost on which tcpflow was run) via port 52920, to host 216.058.210.034 (the remote host) via port 443.

And the second file 216.058.210.034.00443-192.168.043.031.52920 contains data sent from host 216.058.210.034 (the remote host) via port 443 to host 192.168.043.031 (the localhost on which tcpflow was run) via port 52920.
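Since each transcript file holds the raw payload of one direction of the flow, you can inspect it with ordinary text tools. For encrypted traffic (such as the port 443 flows above) the payload will be mostly unreadable, but for plain-text protocols you will see the actual data. The filenames below are simply the ones from the example above:

$ sudo head -c 200 192.168.043.031.52920-216.058.210.034.00443
$ sudo strings 216.058.210.034.00443-192.168.043.031.52920 | head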

An XML report is also generated, which contains information about the program such as how it was compiled, the computer it was run on, and a record of every TCP connection.

As you may have noticed, tcpflow stores the transcript files in the current directory by default. The -o option can help you specify the output directory where the transcript files will be written.

$ sudo tcpflow -o tcpflow_files
$ sudo ls -l tcpflow_files

total 32
-rw-r--r--. 1 root root 1665 Sep 19 12:56 157.240.016.035.00443-192.168.000.103.45986
-rw-r--r--. 1 root root   45 Sep 19 12:56 169.044.082.101.00443-192.168.000.103.55496
-rw-r--r--. 1 root root 2738 Sep 19 12:56 172.217.166.046.00443-192.168.000.103.39954
-rw-r--r--. 1 root root   68 Sep 19 12:56 192.168.000.102.00022-192.168.000.103.42436
-rw-r--r--. 1 root root  573 Sep 19 12:56 192.168.000.103.39954-172.217.166.046.00443
-rw-r--r--. 1 root root 4067 Sep 19 12:56 192.168.000.103.45986-157.240.016.035.00443
-rw-r--r--. 1 root root   38 Sep 19 12:56 192.168.000.103.55496-169.044.082.101.00443
-rw-r--r--. 1 root root 3159 Sep 19 12:56 report.xml

You can also print the contents of packets to stdout as they are received, without storing any captured data to files, using the -c flag as follows.

To test this effectively, open a second terminal and run a ping, or browse the internet. You should be able to see the ping details or your browsing details being captured by tcpflow.

$ sudo tcpflow -c

It is possible to capture all traffic on a particular port, for example port 80 (HTTP). In the case of HTTP traffic, you will be able to see the HTTP headers followed by the content, either on stdout (with the -c switch) or in the transcript files (if -c is removed).

$ sudo tcpflow port 80

To capture packets from a specific network interface, use the -i flag to specify the interface name.

$ sudo tcpflow -i eth0 port 80

You can also specify a target host (accepted values are IP address, hostname or domains), as shown.

$ sudo tcpflow -c host 192.68.43.1
OR
$ sudo tcpflow -c host www.google.com 

You can enable all processing using all scanners with the -a flag; this is equivalent to the -e all switch.

$ sudo tcpflow -a  
OR
$ sudo tcpflow -e all

A specific scanner can also be activated; the available scanners include md5, http, netviz, tcpdemux and wifiviz (run tcpflow -H to view detailed information about each scanner).

$ sudo tcpflow -e http
OR
$ sudo tcpflow -e md5
OR
$ sudo tcpflow -e netviz
OR
$ sudo tcpflow -e tcpdemux
OR
$ sudo tcpflow -e wifiviz

The following example shows how to enable all scanners except tcpdemux.

$ sudo tcpflow -a -x tcpdemux 

TCPflow usually tries to put the network interface into promiscuous mode before capturing packets. You can prevent this using the -p flag as shown.

$ sudo tcpflow -p -i eth0

To read packets from a tcpdump pcap file, use the -r flag.

$ sudo tcpflow -r file.pcap
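For example, you can first capture some traffic into a pcap file with tcpdump and then let tcpflow reassemble the flows from it (a short sketch; the interface name and filter are only examples):

$ sudo tcpdump -i eth0 -w file.pcap port 80
$ sudo tcpflow -r file.pcap -o tcpflow_files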

You can enable verbose mode using the -v or -d 10 options.

$ sudo tcpflow -v
OR
$ sudo tcpflow -d 10

Important: One limitation of tcpflow is that, at the present time, it does not understand IP fragments; thus data transmitted as part of TCP connections containing IP fragments will not be properly captured.

For more information and usage options, see the tcpflow man page.

$ man tcpflow 

TCPflow Github repository: https://github.com/simsong/tcpflow

That’s all for now! TCPflow is a powerful TCP flow recorder which is useful for understanding network packet flows and performing network forensics, and so much more. Try it out and share your thoughts about it with us in the comments.

Source

Block SSH Server Attacks (Brute Force Attacks) Using DenyHosts

DenyHosts is an open source and free log-based intrusion prevention security program for SSH servers, developed in Python by Phil Schwartz. It monitors and analyzes SSH server logs for invalid login attempts, dictionary-based attacks and brute force attacks, and blocks the originating IP addresses by adding an entry to the /etc/hosts.deny file on the server, preventing those IP addresses from making any further login attempts.

Install DenyHosts to Block SSH Attacks

DenyHosts is a much-needed tool for all Linux-based systems, especially when we allow password-based SSH logins. In this article we are going to show you how to install and configure DenyHosts on RHEL 6.3/6.2/6.1/6/5.8, CentOS 6.3/6.2/6.1/6/5.8 and Fedora 17, 16, 15, 14, 13, 12 systems using the EPEL repository.

See also :

  1. Fail2ban (Intrusion Prevention) System for SSH
  2. Disable or Enable SSH Root Login
  3. Linux Malware Detect (LMD)

Installing DenyHosts in RHEL, CentOS and Fedora

The DenyHosts tool is not included in Linux systems by default; we need to install it using the third-party EPEL repository. Once the repository is added, install the package using the following yum command.

# yum --enablerepo=epel install denyhosts
OR
# yum install denyhosts

Configuring DenyHosts for Whitelist IP Addresses

Once DenyHosts is installed, make sure to whitelist your own IP address, so you will never get locked out. To do this, open the file /etc/hosts.allow.

# vi /etc/hosts.allow

Below the description, add each IP address that you never want to block, one per line. The format should be as follows.

#
# hosts.allow   This file contains access rules which are used to
#               allow or deny connections to network services that
#               either use the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
#               See 'man 5 hosts_options' and 'man 5 hosts_access'
#               for information on rule syntax.
#               See 'man tcpd' for information on tcp_wrappers
#
sshd: 172.16.25.125
sshd: 172.16.25.126
sshd: 172.16.25.127

Configuring DenyHosts for Email Alerts

The main configuration file is located at /etc/denyhosts.conf. It also controls email alerts about suspicious logins and restricted hosts. Open this file using the vi editor.

# vi /etc/denyhosts.conf

Search for ‘ADMIN_EMAIL‘ and add your email address here to receive email alerts about suspicious logins (for multiple recipients, use a comma-separated list). Please have a look at the configuration file of my CentOS 6.3 server. Each variable is well documented, so configure it according to your liking.

############ DENYHOSTS REQUIRED SETTINGS ############
SECURE_LOG = /var/log/secure
HOSTS_DENY = /etc/hosts.deny
BLOCK_SERVICE  = sshd
DENY_THRESHOLD_INVALID = 5
DENY_THRESHOLD_VALID = 10
DENY_THRESHOLD_ROOT = 1
DENY_THRESHOLD_RESTRICTED = 1
WORK_DIR = /var/lib/denyhosts
SUSPICIOUS_LOGIN_REPORT_ALLOWED_HOSTS=YES
HOSTNAME_LOOKUP=YES
LOCK_FILE = /var/lock/subsys/denyhosts

############ DENYHOSTS OPTIONAL SETTINGS ############
ADMIN_EMAIL = ravisaive@tecmint.com
SMTP_HOST = localhost
SMTP_PORT = 25
SMTP_FROM = DenyHosts <tecmint@tecmint.com>
SMTP_SUBJECT = DenyHosts Daily Report

############ DENYHOSTS OPTIONAL SETTINGS ############
DAEMON_LOG = /var/log/denyhosts
DAEMON_SLEEP = 30s
DAEMON_PURGE = 1h

Restarting DenyHosts Service

Once you’re done with your configuration, restart the denyhosts service for the new changes to take effect. We also add the denyhosts service to system start-up.

# chkconfig denyhosts on
# service denyhosts start

Watch DenyHosts Logs

To watch the denyhosts SSH logs and see how many attackers and hackers have attempted to gain access to your server, use the following command to view the logs in real time.

# tail -f /var/log/secure
Nov 28 15:01:43 tecmint sshd[25474]: Accepted password for root from 172.16.25.125 port 4339 ssh2
Nov 28 15:01:43 tecmint sshd[25474]: pam_unix(sshd:session): session opened for user root by (uid=0)
Nov 28 16:44:09 tecmint sshd[25474]: pam_unix(sshd:session): session closed for user root
Nov 29 11:08:56 tecmint sshd[31669]: Accepted password for root from 172.16.25.125 port 2957 ssh2
Nov 29 11:08:56 tecmint sshd[31669]: pam_unix(sshd:session): session opened for user root by (uid=0)
Nov 29 11:12:00 tecmint atd[3417]: pam_unix(atd:session): session opened for user root by (uid=0)
Nov 29 11:12:00 tecmint atd[3417]: pam_unix(atd:session): session closed for user root
Nov 29 11:26:42 tecmint sshd[31669]: pam_unix(sshd:session): session closed for user root
Nov 29 12:54:17 tecmint sshd[7480]: Accepted password for root from 172.16.25.125 port 1787 ssh2

Remove Banned IP Address from DenyHosts

If you’ve accidentally blocked an IP address and want to remove it from DenyHosts, you need to stop the service first.

# /etc/init.d/denyhosts stop

To remove or delete a banned IP address completely, you need to edit the following files and remove the IP address.

# vi /etc/hosts.deny
# vi /var/lib/denyhosts/hosts
# vi /var/lib/denyhosts/hosts-restricted
# vi /var/lib/denyhosts/hosts-root
# vi /var/lib/denyhosts/hosts-valid
# vi /var/lib/denyhosts/users-hosts

After removing the banned IP Address, restart the service again.

# /etc/init.d/denyhosts start

The offending IP address is added to several files under the /var/lib/denyhosts directory, which makes it difficult to determine which files contain it. One of the best ways to find it is with the grep command. For example, to find the IP address 172.16.25.125, do:

# cd /var/lib/denyhosts
# grep 172.16.25.125 *
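If you prefer not to edit each file by hand, a small pipeline can strip the address from every file that grep finds, while the denyhosts service is stopped. This is only a sketch – double-check the affected files and keep backups before using it:

# grep -l 172.16.25.125 /var/lib/denyhosts/* | xargs sed -i '/172\.16\.25\.125/d'
# sed -i '/172\.16\.25\.125/d' /etc/hosts.deny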

Whitelist IP Addresses Permanently in DenyHosts

If you have a list of static IP addresses that you want to whitelist permanently, open the file /var/lib/denyhosts/allowed-hosts. Any IP address included in this file will not be banned (consider it a whitelist).

# vi /var/lib/denyhosts/allowed-hosts

Add each IP address on a separate line, then save and close the file.

# We mustn't block localhost
127.0.0.1
172.16.25.125
172.16.25.126
172.16.25.127

Source

Hegemon – A Modular System Monitoring Tool for Linux

There are all kinds of Linux system monitoring tools, such as top, htop, atop and many more, that provide different views of system data such as resource utilization, running processes, CPU temperature and others.

In this article, we are going to review a modular monitoring tool called Hegemon. It’s an open source project written in Rust, and work on it is still in progress.

Hegemon includes the following features:

  • Monitor CPU, memory and swap usage
  • Monitor system temperatures and fan speeds
  • Adjustable update interval
  • Unit tests
  • Expandable data streams for more detailed graphical visualization

How to Install Hegemon in Linux

Hegemon is currently available for Linux only and requires Rust and the development files for libsensors. The latter can be found in the default package repository and can be installed using the following commands.

# yum install lm_sensors-devel   [On CentOS/RHEL] 
# dnf install lm_sensors-devel   [On Fedora 22+]
# apt install libsensors4-dev    [On Debian/Ubuntu]

Detailed instructions on how to install the Rust programming language on your system are provided in the following article.

  1. How to Install Rust Programming Language in Linux

Once you have installed Rust, you can proceed with installing Hegemon using Rust’s package manager, cargo.

# cargo install hegemon

When the installation is complete, run Hegemon by simply issuing the following command.

# hegemon

The Hegemon graphs will appear. Give it a few seconds to collect data and update its information.

Hegemon Monitoring Tool

You will see the following sections:

  • CPU – shows the overall CPU utilization
  • Core N – utilization of each individual CPU core
  • Mem – memory utilization
  • Swap – swap memory usage

You can expand each section by pressing the Space key on your keyboard. This will provide more detailed information about the utilization of the resource you have selected.

If you wish to increase or decrease the update interval, you can use the + and - keys on your keyboard.

How to Add New Streams

Hegemon uses data streams to visualize its data. Their behavior is defined by the Stream trait in the source code. Streams only need to provide basic data such as a name, a description and a method for retrieving a numeric data value.

Hegemon manages the rest – updating the information, rendering the layout and computing stats. To learn how to create your own data streams, you will need to dive deeper into the Hegemon project on GitHub. A good starting point is the project README file.

Conclusion

Hegemon is a simple, easy-to-use tool that helps you collect quick stats about your system status. While its functionality is rather basic compared to other monitoring tools, it does its job very well and is a reliable source of system information. Future releases are expected to add network monitoring support, which may come in quite handy.

Source

Monitorix 3.10.1 Released – A Lightweight System and Network Monitoring Tool for Linux

Monitorix is an open source, free and powerful lightweight tool designed to monitor system and network resources in Linux. It regularly collects system and network data and displays the information in graphs using its own web interface. Monitorix allows you to monitor overall system performance and also helps in detecting bottlenecks, failures, unwanted long response times and other abnormal activity.

Monitorix – Linux System and Network Monitoring Tool

It is written in Perl and licensed under the terms of the GNU General Public License as published by the FSF (Free Software Foundation). It uses RRDtool to generate graphs and displays them via its web interface.

This tool was specifically created for monitoring Red Hat, CentOS and Fedora based Linux systems, but today it runs on many different flavors of GNU/Linux distributions and even on UNIX-like systems such as OpenBSD, NetBSD and FreeBSD.

Monitorix is under active development, with new features, new graphs, updates and bug fixes being added regularly to offer a great tool for Linux system and network administration.

Monitorix Features

  1. System load average, active processes, per-processor kernel usage, global kernel usage and memory allocation.
  2. Monitors Disk drive temperatures and health.
  3. Filesystem usage and I/O activity of filesystems.
  4. Network traffic usage up to 10 network devices.
  5. System services including SSH, FTP, Vsftpd, ProFTPD, SMTP, POP3, IMAP, VirusMail and Spam.
  6. MTA Mail statistics including input and output connections.
  7. Network port traffic including TCP, UDP, etc.
  8. FTP statistics with log file formats of FTP servers.
  9. Apache statistics of local or remote servers.
  10. MySQL statistics of local or remote servers.
  11. Squid Proxy Web Cache statistics.
  12. Fail2ban statistics.
  13. Monitor remote servers (Multihost).
  14. Ability to view statistics in graphs or in plain text tables per day, week, month or year.
  15. Ability to zoom graphs for better view.
  16. Ability to define the number of graphs per row.
  17. Built-in HTTP server.

For a full list of new features and updates, please check out the official feature page.

Installing Monitorix on a RHEL/CentOS/Fedora Linux

First, install the following required packages.

# yum install rrdtool rrdtool-perl perl-libwww-perl perl-MailTools perl-MIME-Lite perl-CGI perl-DBI perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple perl-IO-Socket-SSL wget

If yum fails to install one or more of the above packages, you can enable the following additional repositories to install them.

  1. Enable EPEL repository
  2. Enable RPMforge repository

Next, download the latest version of the ‘Monitorix‘ package using the wget command.

# wget http://www.monitorix.org/monitorix-3.10.1-1.noarch.rpm

Once successfully downloaded, install it using the rpm command.

# rpm -ivh monitorix-3.10.1-1.noarch.rpm
Preparing...                ########################################### [100%]
   1:monitorix              ########################################### [100%]

Once successfully installed, please have a look at the main configuration file ‘/etc/monitorix.conf‘ to add some extra settings according to your system and enable or disable graphs.

Finally, add the Monitorix service to system start-up and start the service with the following commands.

# chkconfig --level 35 monitorix on
# service monitorix start        
# systemctl start monitorix       [On RHEL/CentOS 7 and Fedora 22+ versions ]

Once you’ve started the service, the program will start collecting system information according to the configuration set in the ‘/etc/monitorix.conf‘ file, and after a few minutes you will start seeing system graphs in your browser at:

http://localhost:8080/monitorix/
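You can also confirm that the built-in HTTP server is responding from the command line before switching to the browser (a simple check, assuming the default port 8080):

# curl -I http://localhost:8080/monitorix/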

If SELinux is enabled, the graphs will not be visible and you will get tons of error messages in the ‘/var/log/messages‘ or ‘/var/log/audit/audit.log‘ file about access being denied to the RRD database files. To get rid of such error messages and make the graphs visible, you need to disable SELinux.

To turn off SELinux, simply change the line “enforcing” to “disabled” in the ‘/etc/selinux/config’ file.

SELINUX=disabled

Note that the change to ‘/etc/selinux/config’ only takes effect after a reboot; see below for how to switch SELinux off immediately for the current session.
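To disable SELinux enforcement immediately for the current session (until the next reboot), you can use setenforce and verify the result with getenforce:

# setenforce 0
# getenforce
Permissive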

Installing Monitorix on a Ubuntu/Debian/Linux Mint

Monitorix can be installed in two ways: using the Izzy repository for automatic installation and updates, or by manually downloading and installing the .deb package.

The Izzy repository is an experimental repository, but its packages should work on all versions of Ubuntu, Debian, etc. However, no warranties are given – the risk is all yours. If you still want to add this repository for automatic updates via apt-get, simply follow the steps provided below.

Automatic Installation Using Izzy Repository

Add the following line to your ‘/etc/apt/sources.list’ file.

deb http://apt.izzysoft.de/ubuntu generic universe

Get the GPG key for this repository using the wget command.

# wget http://apt.izzysoft.de/izzysoft.asc

Once downloaded, add this GPG key to the apt configuration using the ‘apt-key‘ command as shown below.

# apt-key add izzysoft.asc

Finally, install the package via the repository.

# apt-get update
# apt-get install monitorix

Manual Installation Using .Deb Package

To install manually, download the latest version of the .deb package and install it after taking care of the required dependencies, as shown below.

# apt-get update
# apt-get install rrdtool perl libwww-perl libmailtools-perl libmime-lite-perl librrds-perl libdbi-perl libxml-simple-perl libhttp-server-simple-perl libconfig-general-perl libio-socket-ssl-perl
# wget http://www.monitorix.org/monitorix_3.10.1-izzy1_all.deb
# dpkg -i monitorix_3.10.1-izzy1_all.deb

During installation, the web server configuration is set up, so you need to reload the Apache web server to pick up the new configuration.

# service apache2 restart         [On SysVinit]
# systemctl restart apache2       [On SystemD]

Monitorix comes with a default configuration; if you wish to change or adjust some settings, take a look at the configuration file at ‘/etc/monitorix.conf‘. Once you’ve made your changes, restart the service for the new configuration to take effect.

# service monitorix restart         [On SysVinit]
# systemctl restart monitorix       [On SystemD]

Now point your browser to ‘http://localhost:8080/monitorix‘ and start watching graphs of your system. By default it can be accessed from localhost only; if you wish to allow access from remote IPs, simply open the ‘/etc/apache2/conf.d/monitorix.conf‘ file and add the IPs to the ‘Allow from‘ clause. For example, see below.

<Directory /usr/share/monitorix/cgi-bin/>
        DirectoryIndex monitorix.cgi
        Options ExecCGI
        Order Deny,Allow
        Deny from all
        Allow from 172.16.16.25
</Directory>

After making the changes to the above configuration, do not forget to restart Apache.

# service apache2 restart         [On SysVinit]
# systemctl restart apache2       [On SystemD]

Monitorix Screenshots

Below are some screenshots.

Monitorix Homepage

Monitor Linux Load Average

System load average, active processes and memory allocation.

Monitor Linux Kernel Usage

Global kernel usage

Monitor Linux Kernel Processor

Per-processor kernel usage.

Monitor Linux Disk Health

Disk drive temperatures and health.

Monitor Linux Filesystem and Disk I/O Read

Filesystem usage and I/O activity.

Monitor Linux Network Traffic

eth0 interface traffic

Monitor Linux System Services

System services demand

Monitor Linux Network Port Traffic

Network Port Traffic

Monitor Linux Apache Statistics

Apache Statistics

Monitor MySQL/MariaDB Statistics

MySQL Statistics

Reference Links:

  1. Monitorix Homepage
  2. Monitorix Documentation

Source

Dstat – A Resourceful Tool to Monitor Linux Server Performance in Real-Time

Some of the popular and frequently used system resource reporting tools available on the Linux platform include vmstat, netstat, iostat, ifstat and mpstat. They are used for reporting statistics from different system components such as virtual memory, network connections and interfaces, CPU, input/output devices and more.

As a system administrator, you may be looking for one tool that can give you a good amount of the information provided by the above tools, and even more – a single, powerful tool with additional features and capabilities. In that case, look no further than dstat.

Suggested Read: 20 Command Line Tools to Monitor Linux Performance

dstat is a powerful, flexible and versatile tool for generating Linux system resource statistics, and it is a replacement for all the tools mentioned above. It comes with extra features and counters, and it is highly extensible; users with Python knowledge can build their own plugins.

Features of dstat:

  1. Combines information from the vmstat, netstat, iostat, ifstat and mpstat tools
  2. Displays statistics simultaneously
  3. Orders counters and is highly extensible
  4. Supports summarizing of grouped block/network devices
  5. Displays interrupts per device
  6. Works on accurate timeframes, no timeshifts when a system is stressed
  7. Supports colored output, indicating different units in different colors
  8. Shows exact units and limits conversion mistakes as much as possible
  9. Supports exporting of CSV output to Gnumeric and Excel documents

How to Install dstat in Linux Systems

dstat is available to install from the default repositories on most Linux distributions; you can install and use it to monitor a Linux system during performance tuning tests or troubleshooting exercises.

# yum install dstat             [On RedHat/CentOS and Fedora]
$ sudo apt-get install dstat    [On Debian, Ubuntu and Linux Mint]

It works in real time, outputting selected information in columns, including the magnitude and units of the stats, with an update every second by default.

Note: The dstat output is aimed specifically for human interpretation, not as input for other tools to process.

Below is the output seen after running the dstat command without any options or arguments (equivalent to using the -cdngy (default) options, or the -a option).

$ dstat 
Dstat – Linux Performance Statistics Monitoring

The output above indicates:

  1. CPU stats: CPU usage by user (usr) and system (sys) processes, as well as time spent idle (idl) and waiting (wai) for I/O, plus hard interrupt (hiq) and soft interrupt (siq) time.
  2. Disk stats: total number of read (read) and write (writ) operations on disks.
  3. Network stats: total amount of bytes received (recv) and sent (send) on network interfaces.
  4. Paging stats: number of times information is copied into (in) and moved out (out) of memory.
  5. System stats: number of interrupts (int) and context switches (csw).

To display information provided by vmstat, use the -v or --vmstat option:

$ dstat --vmstat
Dstat – Linux Process and Memory Monitoring

In the image above, dstat displays:

  1. Process stats: number of running (run), blocked (blk) and new (new) spawned processes.
  2. Memory stats: amount of used (used), buffered (buff), cached (cach) and free (free) memory.

The last three sections (paging, disk and system stats) were already explained in the previous example.

Suggested Read: Sysstat – All-in-One System Performance and Usage Activity Monitoring Tool For Linux

Let us dive into some advanced dstat system monitoring commands. In the next example, we want to also monitor the single process using the most CPU and the single process consuming the most memory.

The options in the command are:

  1. -c – cpu usage
  2. --top-cpu – process using most CPU
  3. -dn – disk and network stats
  4. --top-mem – process consuming the most memory

$ dstat -c --top-cpu -dn --top-mem
Dstat – Monitor Processes by CPU and Memory Usage

Additionally, you can also store the output of dstat in a .csv file for analysis at a later time by using the --output option, as in the example below.

Here, we are displaying the time, CPU, memory and system load stats, with a one-second delay between 5 updates (counts).

$ dstat --time --cpu --mem --load --output report.csv 1 5 
Dstat – Monitor Linux CPU Memory and Load
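The resulting report.csv is a plain-text file, so you can take a quick look at it directly from the shell before importing it into Gnumeric or Excel (a simple illustration using standard tools):

$ head report.csv
$ column -s, -t < report.csv | less -S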

There are several internal (such as the options used in the previous example) and external dstat plugins you can use; to view a list of all available plugins, run the command below:

$ dstat --list
List of Dstat Plugins

dstat reads plugins from the paths below; therefore, add external plugins to one of these directories:

~/.dstat/
(path of binary)/plugins/
/usr/share/dstat/
/usr/local/share/dstat/

For more usage information, look through the dstat man page or visit the homepage at: http://dag.wiee.rs/home-made/dstat/.

Suggested Read: Collectl: An Advanced All-in-One Performance Monitoring Tool for Linux

dstat is a versatile, all-in-one system resource statistics tool; it combines information from several other tools such as vmstat, mpstat, iostat, netstat and ifstat.

I hope this review will be helpful to you. Most importantly, you can share with us any suggestions or supplementary ideas to improve the article, and also give us feedback about your experience using dstat, through the comment section below.

 
Source

How to Monitor Linux Server Security with Osquery

Osquery is a free, open source, powerful and cross-platform SQL-based operating system instrumentation, monitoring and analytics framework for Linux, FreeBSD, Windows and macOS systems, built by Facebook. It is a simple and easy-to-use operating system explorer.

It combines a number of tools which perform low-level OS analytics and monitoring; these tools expose the operating system as a high-performance relational database, like MySQL/MariaDB, PostgreSQL and others, where OS concepts are represented in tabular form, thus allowing users to employ SQL queries to carry out system monitoring and analytics.

Osquery uses a simple plugin and extensions API to implement SQL tables; there is a collection of tables already in existence and ready for use, and more are being written. Some tables are only available on a specific operating system; for instance, you only find the kernel_modules table on Linux systems.

Additionally, you can run queries to monitor and analyze OS state on a single host via the osqueryi shell, on several hosts on a network via a scheduler, or from any of your custom applications using the osquery Thrift APIs.
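Besides the interactive shell, osqueryi can also run a single query non-interactively, which is handy in scripts. For example (a small sketch; the --json flag requests JSON output, and the column names assume the standard system_info and users tables):

$ osqueryi "SELECT hostname, cpu_brand, physical_memory FROM system_info;"
$ osqueryi --json "SELECT uid, username, shell FROM users LIMIT 5;"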

How to Install Osquery in Linux

Osquery can be installed from its official repository using the apt, yum or dnf package management tool on your respective Linux distribution, as shown.

On Debian/Ubuntu

$ export OSQUERY_KEY=1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys $OSQUERY_KEY
$ sudo add-apt-repository 'deb [arch=amd64] https://pkg.osquery.io/deb deb main'
$ sudo apt update
$ sudo apt install osquery

On RHEL/CentOS

$ curl -L https://pkg.osquery.io/rpm/GPG | sudo tee /etc/pki/rpm-gpg/RPM-GPG-KEY-osquery
$ sudo yum-config-manager --add-repo https://pkg.osquery.io/rpm/osquery-s3-rpm.repo
$ sudo yum-config-manager --enable osquery-s3-rpm-repo
$ sudo yum install osquery

On Fedora 22+

$ curl -L https://pkg.osquery.io/rpm/GPG | sudo tee /etc/pki/rpm-gpg/RPM-GPG-KEY-osquery
$ sudo dnf config-manager --add-repo https://pkg.osquery.io/rpm/osquery-s3-rpm.repo
$ sudo dnf config-manager --set-enabled osquery-s3-rpm
$ sudo dnf install osquery

How to Monitor and Analyze Linux Using Osquery

Once you have successfully installed Osquery on your system, launch the osqueryi shell to start querying the state of your OS as shown.

$ osqueryi

Using a virtual database. Need help, type '.help'
osquery> 

To get summarized Linux system information, run the following query.

osquery> SELECT  * FROM system_info;

Get Linux System Info

To get a well-formatted list of all users on the Linux system, run the following query.

osquery> SELECT * FROM users;

List of All Linux Users

To get a list of all Linux kernel modules and their status, run the following query.

osquery> SELECT * FROM kernel_modules;

List All Kernel Modules in Linux

To get a list of all installed RPM packages on CentOS, RHEL and Fedora, run the following query.

osquery> .all rpm_packages;

List All Installed RPM Packages

To get information about running Linux processes and the ports they are listening on, run the following query.

osquery> SELECT DISTINCT processes.name, listening_ports.port, processes.pid FROM listening_ports JOIN processes USING (pid) WHERE listening_ports.address = '0.0.0.0';

List Linux Processes Information

If you are running osquery on a desktop and have Firefox or Chrome installed, you can list all your add-ons using the following query.

osquery> .all firefox_addons;
osquery> .all  chrome_extensions;

To display a list of all implemented tables in Linux, use the .tables command as shown.

osquery> .tables;	#list all implemented tables
osquery> .help; 	#view help message

Osquery also provides file integrity monitoring (FIM), process and socket auditing features and more; thus it can serve as an intrusion detection tool, but this requires certain configuration before you can deploy it for such a purpose. You can find more information in the Osquery Github repository.

Source

BCC – Dynamic Tracing Tools for Linux Performance Monitoring, Networking and More

BCC (BPF Compiler Collection) is a powerful set of tools and example files for creating resourceful kernel tracing and manipulation programs. It utilizes extended BPF (Berkeley Packet Filter), also known as eBPF, which was one of the new features in Linux 3.15.

BCC/BPF – Dynamic Tracing Tools for Linux Performance Monitoring

Practically, most of the components used by BCC require Linux 4.1 or above, and its noteworthy features include:

  1. Requires no 3rd party kernel module, since all the tools work based on BPF, which is built into the kernel, and BCC uses features added in the Linux 4.x series.
  2. Enables observation of software execution.
  3. Comprises several performance analysis tools with example files and man pages.

Suggested Read: 20 Command Line Tools to Monitor Linux Performance

Best suited for advanced Linux users, BCC makes it easy to write BPF programs using kernel instrumentation in C, with front-ends in Python and Lua. Additionally, it supports multiple tasks such as performance analysis, monitoring, network traffic control and lots more.

How To Install BCC in Linux Systems

Remember that BCC uses features added in Linux kernel version 4.1 or above, and as a requirement, the kernel should have been compiled with the flags set below:

CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
# [optional, for tc filters]
CONFIG_NET_CLS_BPF=m
# [optional, for tc actions]
CONFIG_NET_ACT_BPF=m
CONFIG_BPF_JIT=y
CONFIG_HAVE_BPF_JIT=y
# [optional, for kprobes]
CONFIG_BPF_EVENTS=y

To check your kernel flags, view the file /proc/config.gz or run the commands as in the examples below:

tecmint@TecMint ~ $ grep CONFIG_BPF= /boot/config-`uname -r`
CONFIG_BPF=y
tecmint@TecMint ~ $ grep CONFIG_BPF_SYSCALL= /boot/config-`uname -r`
CONFIG_BPF_SYSCALL=y
tecmint@TecMint ~ $ grep CONFIG_NET_CLS_BPF= /boot/config-`uname -r`
CONFIG_NET_CLS_BPF=m
tecmint@TecMint ~ $ grep CONFIG_NET_ACT_BPF= /boot/config-`uname -r`
CONFIG_NET_ACT_BPF=m
tecmint@TecMint ~ $ grep CONFIG_BPF_JIT= /boot/config-`uname -r`
CONFIG_BPF_JIT=y
tecmint@TecMint ~ $ grep CONFIG_HAVE_BPF_JIT= /boot/config-`uname -r`
CONFIG_HAVE_BPF_JIT=y
tecmint@TecMint ~ $ grep CONFIG_BPF_EVENTS= /boot/config-`uname -r`
CONFIG_BPF_EVENTS=y
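Instead of checking each flag with a separate grep invocation, you can check them all in one go (the same information as above, just more compact):

$ grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=|CONFIG_NET_CLS_BPF=|CONFIG_NET_ACT_BPF=|CONFIG_BPF_JIT=|CONFIG_HAVE_BPF_JIT=|CONFIG_BPF_EVENTS=' /boot/config-`uname -r`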

After verifying kernel flags, it’s time to install BCC tools in Linux systems.

On Ubuntu 16.04

Only nightly packages are built for Ubuntu 16.04, but the installation instructions are very straightforward. There is no need to upgrade the kernel or compile anything from source.

$ echo "deb [trusted=yes] https://repo.iovisor.org/apt/xenial xenial-nightly main" | sudo tee /etc/apt/sources.list.d/iovisor.list
$ sudo apt-get update
$ sudo apt-get install bcc-tools

On Ubuntu 14.04

Begin by installing a 4.3+ Linux kernel, from http://kernel.ubuntu.com/~kernel-ppa/mainline.

As an example, write a small shell script “bcc-install.sh” with the content below.

Note: update the PREFIX value to the latest date, and browse the files at the PREFIX URL to get the actual REL value, then substitute both in the shell script.

#!/bin/bash
VER=4.5.1-040501
PREFIX=http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.5.1-wily/
REL=201604121331
wget ${PREFIX}/linux-headers-${VER}-generic_${VER}.${REL}_amd64.deb
wget ${PREFIX}/linux-headers-${VER}_${VER}.${REL}_all.deb
wget ${PREFIX}/linux-image-${VER}-generic_${VER}.${REL}_amd64.deb
sudo dpkg -i linux-*${VER}.${REL}*.deb

Save the file and exit. Make it executable, then run it as shown:

$ chmod +x bcc-install.sh
$ sh bcc-install.sh

Afterwards, reboot your system.

$ reboot

Next, run the commands below to install signed BCC packages:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys D4284CDD
$ echo "deb https://repo.iovisor.org/apt trusty main" | sudo tee /etc/apt/sources.list.d/iovisor.list
$ sudo apt-get update
$ sudo apt-get install binutils bcc bcc-tools libbcc-examples python-bcc

On Fedora 24-23

Install a 4.2+ kernel from http://alt.fedoraproject.org/pub/alt/rawhide-kernel-nodebug, if your system has a version lower than what is required. Below is an example of how to do that:

$ sudo dnf config-manager --add-repo=http://alt.fedoraproject.org/pub/alt/rawhide-kernel-nodebug/fedora-rawhide-kernel-nodebug.repo
$ sudo dnf update
$ reboot

After that, add the BCC tools repository, update your system and install the tools by executing the next series of commands:

$ echo -e '[iovisor]\nbaseurl=https://repo.iovisor.org/yum/nightly/f23/$basearch\nenabled=1\ngpgcheck=0' | sudo tee /etc/yum.repos.d/iovisor.repo
$ sudo dnf update
$ sudo dnf install bcc-tools

On Arch Linux – AUR

You should start by upgrading your kernel to at least version 4.3.1-1, and then install the packages below using any of the Arch package managers such as pacaur, yaourt, cower, etc.

bcc bcc-tools python-bcc python2-bcc

How To Use BCC Tools in Linux Systems

All the BCC tools are installed under the /usr/share/bcc/tools directory. Alternatively, you can run them from the BCC Github repository under /tools, where they end with a .py extension.

$ ls /usr/share/bcc/tools 

argdist       capable     filetop         offwaketime  stackcount  vfscount
bashreadline  cpudist     funccount       old          stacksnoop  vfsstat
biolatency    dcsnoop     funclatency     oomkill      statsnoop   wakeuptime
biosnoop      dcstat      gethostlatency  opensnoop    syncsnoop   xfsdist
biotop        doc         hardirqs        pidpersec    tcpaccept   xfsslower
bitesize      execsnoop   killsnoop       profile      tcpconnect  zfsdist
btrfsdist     ext4dist    mdflush         runqlat      tcpconnlat  zfsslower
btrfsslower   ext4slower  memleak         softirqs     tcpretrans
cachestat     filelife    mysqld_qslower  solisten     tplist
cachetop      fileslower  offcputime      sslsniff     trace

We shall cover a few examples related to monitoring general Linux system performance and networking.

Trace open() syscalls

Let’s start by tracing all open() syscalls using opensnoop. This lets us see how various applications work by identifying their data files, config files and more:

$ cd /usr/share/bcc/tools 
$ sudo ./opensnoop

PID    COMM               FD ERR PATH
1      systemd            35   0 /proc/self/mountinfo
2797   udisksd            13   0 /proc/self/mountinfo
1      systemd            35   0 /sys/devices/pci0000:00/0000:00:0d.0/ata3/host2/target2:0:0/2:0:0:0/block/sda/sda1/uevent
1      systemd            35   0 /run/udev/data/b8:1
1      systemd            -1   2 /etc/systemd/system/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /run/systemd/system/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /run/systemd/generator/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /usr/local/lib/systemd/system/sys-kernel-debug-tracing.mount
2247   systemd            15   0 /proc/self/mountinfo
1      systemd            -1   2 /lib/systemd/system/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /usr/lib/systemd/system/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /run/systemd/generator.late/sys-kernel-debug-tracing.mount
1      systemd            -1   2 /etc/systemd/system/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /etc/systemd/system/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /run/systemd/system/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /run/systemd/system/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /run/systemd/generator/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /run/systemd/generator/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /usr/local/lib/systemd/system/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /usr/local/lib/systemd/system/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /lib/systemd/system/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /lib/systemd/system/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /usr/lib/systemd/system/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /usr/lib/systemd/system/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /run/systemd/generator.late/sys-kernel-debug-tracing.mount.wants
1      systemd            -1   2 /run/systemd/generator.late/sys-kernel-debug-tracing.mount.requires
1      systemd            -1   2 /etc/systemd/system/sys-kernel-debug-tracing.mount.d
1      systemd            -1   2 /run/systemd/system/sys-kernel-debug-tracing.mount.d
1      systemd            -1   2 /run/systemd/generator/sys-kernel-debug-tracing.mount.d
....

Summarize Block Device I/O Latency

In this example, we show a summarized distribution of disk I/O latency using biolatency. After executing the command, wait for a few minutes, then hit Ctrl-C to end it and view the output.

$ sudo ./biolatency

Tracing block device I/O... Hit Ctrl-C to end.
^C
     usecs               : count     distribution
         0 -> 1          : 0        |                                        |
         2 -> 3          : 0        |                                        |
         4 -> 7          : 0        |                                        |
         8 -> 15         : 0        |                                        |
        16 -> 31         : 0        |                                        |
        32 -> 63         : 0        |                                        |
        64 -> 127        : 0        |                                        |
       128 -> 255        : 3        |****************************************|
       256 -> 511        : 3        |****************************************|
       512 -> 1023       : 1        |*************                           |
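
Like most BCC tools, biolatency accepts optional arguments. The invocations below follow the upstream bcc examples; the exact flags may differ slightly depending on your bcc version:

$ sudo ./biolatency 1 10     # print 1-second summaries, 10 times
$ sudo ./biolatency -m       # report latency in milliseconds instead of microseconds
$ sudo ./biolatency -D       # print a separate histogram for each disk device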

Trace New Processes via exec() Syscalls

In this section, we move on to tracing new processes as they start, using the execsnoop tool. Each time a process is created via the exec() family of syscalls, it is shown in the output. Note that not every new process is captured: processes that fork() without calling exec() are not shown.

$ sudo ./execsnoop

PCOMM            PID    PPID   RET ARGS
gnome-screensho  14882  14881    0 /usr/bin/gnome-screenshot --gapplication-service
systemd-hostnam  14892  1        0 /lib/systemd/systemd-hostnamed
nautilus         14897  2767    -2 /home/tecmint/bin/net usershare info
nautilus         14897  2767    -2 /home/tecmint/.local/bin/net usershare info
nautilus         14897  2767    -2 /usr/local/sbin/net usershare info
nautilus         14897  2767    -2 /usr/local/bin/net usershare info
nautilus         14897  2767    -2 /usr/sbin/net usershare info
nautilus         14897  2767    -2 /usr/bin/net usershare info
nautilus         14897  2767    -2 /sbin/net usershare info
nautilus         14897  2767    -2 /bin/net usershare info
nautilus         14897  2767    -2 /usr/games/net usershare info
nautilus         14897  2767    -2 /usr/local/games/net usershare info
nautilus         14897  2767    -2 /snap/bin/net usershare info
compiz           14899  14898   -2 /home/tecmint/bin/libreoffice --calc
compiz           14899  14898   -2 /home/tecmint/.local/bin/libreoffice --calc
compiz           14899  14898   -2 /usr/local/sbin/libreoffice --calc
compiz           14899  14898   -2 /usr/local/bin/libreoffice --calc
compiz           14899  14898   -2 /usr/sbin/libreoffice --calc
libreoffice      14899  2252     0 /usr/bin/libreoffice --calc
dirname          14902  14899    0 /usr/bin/dirname /usr/bin/libreoffice
basename         14903  14899    0 /usr/bin/basename /usr/bin/libreoffice
...
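
execsnoop also supports a few useful options (again based on the upstream bcc examples, so verify with -h on your version):

$ sudo ./execsnoop -t          # include a timestamp column
$ sudo ./execsnoop -x          # include failed exec()s in the output
$ sudo ./execsnoop -n bash     # only show command lines containing "bash"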

Trace Slow ext4 Operations

Here we use ext4slower to trace common ext4 file system operations that are slower than 10 ms, which helps us identify slow disk I/O independently, via the file system.

Suggested Read: 13 Linux Performance Monitoring Tools

It only outputs those operations that exceed a threshold:

$ sudo ./ext4slower

Tracing ext4 operations slower than 10 ms
TIME     COMM           PID    T BYTES   OFF_KB   LAT(ms) FILENAME
11:59:13 upstart        2252   W 48      1          10.76 dbus.log
11:59:13 gnome-screensh 14993  R 144     0          10.96 settings.ini
11:59:13 gnome-screensh 14993  R 28      0          16.02 gtk.css
11:59:13 gnome-screensh 14993  R 3389    0          18.32 gtk-main.css
11:59:25 rs:main Q:Reg  1826   W 156     60         31.85 syslog
11:59:25 pool           15002  R 208     0          14.98 .xsession-errors
11:59:25 pool           15002  R 644     0          12.28 .ICEauthority
11:59:25 pool           15002  R 220     0          13.38 .bash_logout
11:59:27 dconf-service  2599   S 0       0          22.75 user.BHDKOY
11:59:33 compiz         2548   R 4096    0          19.03 firefox.desktop
11:59:34 compiz         15008  R 128     0          27.52 firefox.sh
11:59:34 firefox        15008  R 128     0          36.48 firefox
11:59:34 zeitgeist-daem 2988   S 0       0          62.23 activity.sqlite-wal
11:59:34 zeitgeist-fts  2996   R 8192    40         15.67 postlist.DB
11:59:34 firefox        15008  R 140     0          18.05 dependentlibs.list
11:59:34 zeitgeist-fts  2996   S 0       0          25.96 position.tmp
11:59:34 firefox        15008  R 4096    0          10.67 libplc4.so
11:59:34 zeitgeist-fts  2996   S 0       0          11.29 termlist.tmp
...
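
The 10 ms threshold is only the default; ext4slower takes a minimum latency (in milliseconds) as an argument, and a threshold of 0 traces every operation. The invocations below mirror the upstream bcc examples:

$ sudo ./ext4slower 1         # trace ext4 operations slower than 1 ms
$ sudo ./ext4slower 0         # trace all ext4 operations (produces a lot of output)
$ sudo ./ext4slower -p 1826   # only trace PID 1826 (the rsyslog thread seen in the output above)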

Trace Block Device I/O with PID and Latency

Next, let’s print one line per disk I/O as it occurs, with details such as process ID, disk, sector, bytes and latency, using biosnoop:

$ sudo ./biosnoop

TIME(s)        COMM           PID    DISK    T  SECTOR    BYTES   LAT(ms)
0.000000000    ?              0              R  -1        8          0.26
2.047897000    ?              0              R  -1        8          0.21
3.280028000    kworker/u4:0   14871  sda     W  30552896  4096       0.24
3.280271000    jbd2/sda1-8    545    sda     W  29757720  12288      0.40
3.298318000    jbd2/sda1-8    545    sda     W  29757744  4096       0.14
4.096084000    ?              0              R  -1        8          0.27
6.143977000    ?              0              R  -1        8          0.27
8.192006000    ?              0              R  -1        8          0.26
8.303938000    kworker/u4:2   15084  sda     W  12586584  4096       0.14
8.303965000    kworker/u4:2   15084  sda     W  25174736  4096       0.14
10.239961000   ?              0              R  -1        8          0.26
12.292057000   ?              0              R  -1        8          0.20
14.335990000   ?              0              R  -1        8          0.26
16.383798000   ?              0              R  -1        8          0.17
...
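
Since biosnoop prints one line per I/O, the output can get noisy on a busy system; piping it through standard shell tools is an easy way to narrow it down, for example to a single disk:

$ sudo ./biosnoop | grep sda     # only show I/O hitting the sda disk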

Trace Page Cache hit/miss Ratio

Next, we use cachestat to display one line of summarized page cache statistics every second. This helps with system tuning by revealing a low cache hit ratio or a high rate of misses:

$ sudo ./cachestat

 HITS   MISSES  DIRTIES  READ_HIT% WRITE_HIT%   BUFFERS_MB  CACHED_MB
       0        0        0       0.0%       0.0%           19        544
       4        4        2      25.0%      25.0%           19        544
    1321       33        4      97.3%       2.3%           19        545
    7476        0        2     100.0%       0.0%           19        545
    6228       15        2      99.7%       0.2%           19        545
       0        0        0       0.0%       0.0%           19        545
    7391      253      108      95.3%       2.7%           19        545
   33608     5382       28      86.1%      13.8%           19        567
   25098       37       36      99.7%       0.0%           19        566
   17624      239      416      96.3%       0.5%           19        520
...
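
cachestat also takes optional interval and count arguments, similar to vmstat. The following forms are taken from the upstream bcc examples and may vary slightly between versions:

$ sudo ./cachestat 5        # print a summary every 5 seconds
$ sudo ./cachestat 5 3      # print 5-second summaries, 3 times, then exit
$ sudo ./cachestat -T 1     # 1-second summaries with a timestamp column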

Trace TCP Active Connections

tcpconnect traces active TCP connections as they are initiated (via the connect() syscall). Its output includes the source and destination addresses and the destination port. This tool is useful for spotting unexpected TCP connections, helping to identify inefficiencies in application configurations or even an attacker.

$ sudo ./tcpconnect

PID    COMM         IP SADDR            DADDR            DPORT
15272  Socket Threa 4  10.0.2.15        91.189.89.240    80  
15272  Socket Threa 4  10.0.2.15        216.58.199.142   443 
15272  Socket Threa 4  10.0.2.15        216.58.199.142   80  
15272  Socket Threa 4  10.0.2.15        216.58.199.174   443 
15272  Socket Threa 4  10.0.2.15        54.200.62.216    443 
15272  Socket Threa 4  10.0.2.15        54.200.62.216    443 
15272  Socket Threa 4  10.0.2.15        117.18.237.29    80  
15272  Socket Threa 4  10.0.2.15        216.58.199.142   80  
15272  Socket Threa 4  10.0.2.15        216.58.199.131   80  
15272  Socket Threa 4  10.0.2.15        216.58.199.131   443 
15272  Socket Threa 4  10.0.2.15        52.222.135.52    443 
15272  Socket Threa 4  10.0.2.15        216.58.199.131   443 
15272  Socket Threa 4  10.0.2.15        54.200.62.216    443 
15272  Socket Threa 4  10.0.2.15        54.200.62.216    443 
15272  Socket Threa 4  10.0.2.15        216.58.199.132   443 
15272  Socket Threa 4  10.0.2.15        216.58.199.131   443 
15272  Socket Threa 4  10.0.2.15        216.58.199.142   443 
15272  Socket Threa 4  10.0.2.15        54.69.17.198     443 
15272  Socket Threa 4  10.0.2.15        54.69.17.198     443 
...

All the tools above can also be used with various options; to view the help page for a given tool, use the -h option, for example:

$ sudo ./tcpconnect -h

usage: tcpconnect [-h] [-t] [-p PID] [-P PORT]

Trace TCP connects

optional arguments:
  -h, --help            show this help message and exit
  -t, --timestamp       include timestamp on output
  -p PID, --pid PID     trace this PID only
  -P PORT, --port PORT  comma-separated list of destination ports to trace.

examples:
    ./tcpconnect           # trace all TCP connect()s
    ./tcpconnect -t        # include timestamps
    ./tcpconnect -p 181    # only trace PID 181
    ./tcpconnect -P 80     # only trace port 80
    ./tcpconnect -P 80,81  # only trace port 80 and 81
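
These options can of course be combined; for example, to trace only HTTP and HTTPS connections with timestamps, using the flags documented in the help output above:

$ sudo ./tcpconnect -t -P 80,443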

Trace Failed open() Syscalls

To trace failed open() syscalls, use the -x option with opensnoop as shown below:

$ sudo ./opensnoop -x

PID    COMM               FD ERR PATH
15414  pool               -1   2 /home/.hidden
15415  (ostnamed)         -1   2 /sys/fs/cgroup/cpu/system.slice/systemd-hostnamed.service/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/cpu/system.slice/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/cpuacct/system.slice/systemd-hostnamed.service/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/cpuacct/system.slice/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/blkio/system.slice/systemd-hostnamed.service/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/blkio/system.slice/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/memory/system.slice/systemd-hostnamed.service/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/memory/system.slice/cgroup.procs
15415  (ostnamed)         -1   2 /sys/fs/cgroup/pids/system.slice/systemd-hostnamed.service/cgroup.procs
2548   compiz             -1   2 
15416  systemd-cgroups    -1   2 /run/systemd/container
15416  systemd-cgroups    -1   2 /sys/fs/kdbus/0-system/bus
15415  systemd-hostnam    -1   2 /run/systemd/container
15415  systemd-hostnam    -1  13 /proc/1/environ
15415  systemd-hostnam    -1   2 /sys/fs/kdbus/0-system/bus
1695   dbus-daemon        -1   2 /run/systemd/users/0
15415  systemd-hostnam    -1   2 /etc/machine-info
15414  pool               -1   2 /home/tecmint/.hidden
15414  pool               -1   2 /home/tecmint/Binary/.hidden
2599   dconf-service      -1   2 /run/user/1000/dconf/user
...
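
opensnoop accepts further options such as -p to restrict the trace to a single process (check -h for the full list on your version); for example, to watch only the failed opens of PID 2548, the compiz process seen in the output above:

$ sudo ./opensnoop -x -p 2548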

Trace Particular Process Functions

The last example below demonstrates how to execute a custom trace operation. We are tracing a particular process using its PID.

Suggested Read: Netdata – A Real-Time Performance Monitoring Tool for Linux

First determine the process ID:

$ pidof firefox

15437

Then run the custom trace command. In the command below, -p specifies the process ID, and do_sys_open() is a kernel function that is traced dynamically, with its second argument printed as a string.

$ sudo ./trace -p 15437 'do_sys_open "%s", arg2'

TIME     PID    COMM         FUNC             -
12:17:14 15437  firefox      do_sys_open      /run/user/1000/dconf/user
12:17:14 15437  firefox      do_sys_open      /home/tecmint/.config/dconf/user
12:18:07 15437  firefox      do_sys_open      /run/user/1000/dconf/user
12:18:07 15437  firefox      do_sys_open      /home/tecmint/.config/dconf/user
12:18:13 15437  firefox      do_sys_open      /sys/devices/system/cpu/present
12:18:13 15437  firefox      do_sys_open      /dev/urandom
12:18:13 15437  firefox      do_sys_open      /dev/urandom
12:18:14 15437  firefox      do_sys_open      /usr/share/fonts/truetype/liberation/LiberationSans-Italic.ttf
12:18:14 15437  firefox      do_sys_open      /usr/share/fonts/truetype/liberation/LiberationSans-Italic.ttf
12:18:14 15437  firefox      do_sys_open      /usr/share/fonts/truetype/liberation/LiberationSans-Italic.ttf
12:18:14 15437  firefox      do_sys_open      /sys/devices/system/cpu/present
12:18:14 15437  firefox      do_sys_open      /dev/urandom
12:18:14 15437  firefox      do_sys_open      /dev/urandom
12:18:14 15437  firefox      do_sys_open      /dev/urandom
12:18:14 15437  firefox      do_sys_open      /dev/urandom
12:18:15 15437  firefox      do_sys_open      /sys/devices/system/cpu/present
12:18:15 15437  firefox      do_sys_open      /dev/urandom
12:18:15 15437  firefox      do_sys_open      /dev/urandom
12:18:15 15437  firefox      do_sys_open      /sys/devices/system/cpu/present
12:18:15 15437  firefox      do_sys_open      /dev/urandom
12:18:15 15437  firefox      do_sys_open      /dev/urandom
....
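
trace is a general-purpose tool and supports more probe types than plain kernel function entry; return probes, for instance, use an r: prefix. The example below is a sketch based on the upstream trace documentation (the probe syntax can vary between bcc versions), printing the return value of do_sys_open for the same Firefox process:

$ sudo ./trace -p 15437 'r::do_sys_open "ret: %d", retval'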

Summary

BCC is a powerful and easy-to-use toolkit for various system administration tasks such as system performance tracing, block device I/O tracing, TCP functions, file system operations, syscalls, Node.js probes, and much more. Importantly, it ships with several example files and man pages for the tools to guide you, making it user friendly and reliable.

Last but not least, you can get back to us by sharing your thoughts about the subject, asking questions, or offering suggestions and constructive feedback via the comment section below.


linux-dash: Monitors “Linux Server Performance” Remotely Using Web Browser

If you are looking for a low-resource, speedy server statistics monitoring script, look no further than linux-dash. Linux Dash’s claim to fame is its slick and responsive web dashboard, which works well on both large and small screens.

Install linux-dash in Linux

linux-dash: Server Monitoring Tool

linux-dash is a memory-efficient, low-resource, easy-to-install server statistics monitoring script written in PHP. The web statistics page allows you to drag and drop the various widgets and rearrange the display as you desire. The script displays live statistics of your server, including RAM, CPU, disk space, network information, installed software, running processes and much more.

Linux Dash’s interface presents information in an organized fashion, which makes it easy to switch between specific sections using the buttons in the main toolbar. Linux Dash is not an advanced monitoring tool like Collectl or Glances, but it is still a good monitoring application for users looking for something lightweight and easy to deploy.

linux-dash Demo

Please have a quick look at the demo page set up by the developer of linux-dash.

  1. Watch Demo at: linux-dash: Server Monitoring

Linux Dash Features

  1. A responsive, web-based interface for monitoring server resources.
  2. Real-time monitoring of CPU, RAM, disk usage, load, uptime, users and many more system statistics.
  3. Easy install for servers with Apache/Nginx + PHP.
  4. Click and drag to re-organize widgets.
  5. Support for a wide range of Linux server flavours.

Pre-requisites for Installation

  1. A Linux server with Apache/Nginx installed.
  2. PHP with the php-json extension installed.
  3. The unzip utility installed on the server.
  4. Optionally, htpasswd installed, to password-protect the statistics page on your server.

After all, you do not want to display your server statistics to the whole world, as that would be a security risk.

Note: htpasswd is just one way to protect your server. There are others, such as denying access from certain IPs, for instance. Use whichever method you are comfortable with.

In this article, I’ve used the Apache web server to show you how to set up linux-dash on Linux servers. I’ve also tested this nifty tool in other browsers such as Firefox, Midori and Chrome, and it works fine.

Installing “linux-dash” in RedHat and Debian Based Systems

As mentioned above, linux-dash is written in PHP and runs on top of a web server such as Apache, so you must have these packages installed on the server along with the php-json module. Let’s install them using the yum or apt-get package manager, according to your server distribution.

Step 1: Install Apache, PHP and PHP Modules

Install on Red Hat based systems using yum command.

# yum install httpd httpd-tools
# yum install php php-xml php-common php-json
# service httpd start

Install on Debian based systems using apt-get command.

# apt-get install apache2 apache2-utils
# apt-get install php5 curl php5-curl php5-json
# service apache2 start
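
On newer, systemd-based releases the service commands above may be replaced with their systemctl equivalents; roughly (the service names below are the usual defaults and may differ on your distribution):

# systemctl enable --now httpd       # RHEL/CentOS/Fedora
# systemctl enable --now apache2     # Debian/Ubuntu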

Step 2: Download and Install linux-dash

Clone the ‘linux-dash‘ repository from ‘GitHub‘ so that its contents end up in a sub-directory called ‘linux-dash‘ inside your Apache public folder (i.e. /var/www or /var/www/html).

# git clone https://github.com/afaqurk/linux-dash.git
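
A minimal sketch of placing the clone in the web root, assuming /var/www/html is your document root and apache (or www-data on Debian/Ubuntu) is the web server user:

# cd /var/www/html
# git clone https://github.com/afaqurk/linux-dash.git
# chown -R apache:apache linux-dash      # use www-data:www-data on Debian/Ubuntu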

Step 3: Monitor Server using linux-dash

Open your browser and navigate to the folder where you have ‘linux-dash‘ installed. On mine it is http://localhost/linux-dash.

The following are some screenshots of the linux-dash dashboard, taken from my CentOS 6.5 server.

General Info

General Information

Disk Usage

Disk Monitoring

CPU Usage

CPU and Process Monitoring

RAM Usage

RAM Utilization

Users

Users Information

Network Statistics

Full linux-dash Preview

Server Monitoring Web Dashboard

Step 4: Password Protect linux-dash

To password-protect your statistics page, you need to generate ‘.htaccess‘ and ‘.htpasswd‘ files. The following command creates a user ‘admin‘ with the password ‘admin123‘ and writes a new ‘htpasswd‘ file under the ‘/var‘ folder (the -b option supplies the password on the command line; omit it to be prompted instead).

# htpasswd -cb /var/.htpasswd admin admin123

Note: The ‘htpasswd‘ file stores the user ‘admin‘ password in a hashed format, and it should be placed in a non-public folder to prevent it from being viewed in the browser.

Now create a ‘.htaccess‘ file under the ‘linux-dash‘ directory and add the following content to it. Save and close the file.

AuthName "Restricted Area" 
AuthType Basic 
AuthUserFile /var/.htpasswd 
AuthGroupFile /dev/null 
require valid-user
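
For the ‘.htaccess‘ file to be honoured, Apache must allow per-directory overrides for that path. A minimal sketch, assuming the document root is /var/www/html and you are editing your virtual host or a conf.d snippet; reload Apache afterwards:

<Directory "/var/www/html/linux-dash">
    AllowOverride AuthConfig
</Directory>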

Clear your browser’s cache. The next time you navigate to the statistics page, you will be greeted with a login prompt. Log in with the username and password you used in the htpasswd command.

Password Protect linux-dash

Reference Links

https://github.com/afaqurk/linux-dash

Enjoy your low-resource server statistics monitoring application.

