What Is Kubernetes? – Make Tech Easier
Kubernetes (pronounced “koo-ber-NET-eez”) is an open-source platform that helps manage containerized applications, such as those built with Docker. Whether you are looking to automate or scale these containers across multiple hosts, Kubernetes can speed up deployment. To do this, it may use internal components such as the Kubernetes API or third-party extensions that run on Kubernetes.
This article will help you understand the basic concepts of Kubernetes and why it is causing such a seismic shift in the server market, with vendors as well as cloud providers, such as Azure and Google Cloud, offering Kubernetes services.
Kubernetes: A Brief History
Kubernetes is one of Google’s gifts to the open source community. The container platform was a part of Borg, an internal Google project, for more than a decade. Borg let Google manage hundreds and even thousands of tasks (called “Borglets”) from different applications across clusters. Its objective was to utilize machines (and virtual machines) efficiently while ensuring high availability for running services.

The same architecture was popular with other companies looking for ways to efficiently ensure high availability. In 2015, shortly after Kubernetes 1.0 came out, Google handed over control of the technology. Kubernetes is now with the Cloud Native Computing Foundation (CNCF), which is itself part of the Linux Foundation.

How Kubernetes Works
Borrowing the ideas of the Borg project, the “Borglets” gave way to “pods,” the scheduling units that house one or more containers. Each pod has its own IP address, which comes into play whenever a container requires CPU, memory or storage.

The pods ensure high availability by load balancing the traffic in a round-robin format. Furthermore, they run inside machines (or virtual machines) called “worker nodes,” also known as “minions.” From this point, a “master node” controls the entire cluster by orchestrating containerization using the Kubernetes API. Docker runs on each worker node, where it downloads images and starts containers.

To talk to a Kubernetes cluster’s API, a command-line tool called kubectl is used. It is the single most important command, because it issues all the instructions that the master node passes on to the worker nodes. Mastering kubectl takes a bit of learning, but once you do, you can start making full use of Kubernetes clusters. Both Kubernetes and Docker are written in the Go programming language.
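To make the pod concept above concrete, here is a minimal sketch of how a pod is described and created. The manifest is a standard single-container pod definition; the name “nginx-demo” is just an illustration, and the kubectl lines are shown commented out because they require a running cluster.

```shell
# Write a minimal pod manifest: one container running the stock nginx image.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
EOF

# With a cluster available, kubectl sends this to the master node's API,
# which schedules the pod onto a worker node:
# kubectl apply -f nginx-pod.yaml
# kubectl get pods -o wide    # shows the pod and its individual IP address
```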
Applications
Kubernetes can drastically bring down server and data center costs because of its high efficiency in using the machines. Some of the common applications of Kubernetes include:
- Managing application servers. Most application servers require security, configuration management, updates and more, which can run using Kubernetes.
 - Automatic rollouts and rollbacks. With Kubernetes, you don’t have to worry about product rollouts or rollbacks across multiple end nodes.
 - Deploying stateless apps. Web applications are now remotely controllable. For example, Kubernetes can help you run Nginx servers using a stateless application deployment.
 - Deploying stateful apps. Kubernetes can run a MySQL database.
 - Storing API objects. For different storage needs, Kubernetes can provide suitable storage because it follows the same container principles.
 - Out-of-the-box features. Kubernetes ships with features such as service discovery, logging, monitoring and authentication.
 - IoT applications. Kubernetes is finding an increasing use in IoT because of its massive scaling capability.
 - Run anywhere. You can run Kubernetes anywhere, including inside a suitcase.
 

In Summary
The objective of Kubernetes is to utilize computing resources to the fullest. Because you can orchestrate containers across multiple hosts, end nodes are far less likely to run into resource shortages or failures. Kubernetes also helps you scale automatically: you only have to give the command once from the master node, and the ability to scale applications this easily is nothing short of revolutionary.
To learn more about Kubernetes, visit its official website, which contains tutorials.
Swatchdog – Simple Log File Watcher in Real-Time in Linux
Swatchdog (the “Simple WATCH DOG”) is a simple Perl script for monitoring active log files on Unix-like systems such as Linux. It watches your logs based on regular expressions that you can define in a configuration file. You can run it from the command line or in the background, detached from any terminal using the daemon mode option.
Note that the program was originally called swatch (the “Simple Watcher”) but a request by the old Swiss watch company for a name change saw the developer change its name to swatchdog.
Read Also: 4 Good Open Source Log Monitoring and Management Tools for Linux
Importantly, swatchdog has grown beyond a script for watching logs produced by Unix’s syslog facility; it can monitor just about any kind of log file.
How to Install Swatch in Linux
The swatchdog program is available in the official repositories of mainstream Linux distributions as the package “swatch” and can be installed with your package manager as shown.
$ sudo apt install swatch                                  [On Ubuntu/Debian]
$ sudo yum install epel-release && sudo yum install swatch [On RHEL/CentOS]
$ sudo dnf install swatch                                  [On Fedora 22+]
To install the latest version of swatchdog, you need to compile it from source using the following commands on any Linux distribution.
$ git clone https://github.com/ToddAtkins/swatchdog.git
$ cd swatchdog/
$ perl Makefile.PL
$ make
$ sudo make install
$ sudo make realclean
Once you have installed swatch, you need to create its configuration file (the default location is /home/$USER/.swatchdogrc or .swatchrc) to determine what expression patterns to look for and what action(s) to take when a pattern is matched.
$ touch /home/tecmint/.swatchdogrc
OR
$ touch /home/tecmint/.swatchrc
Add your regular expressions to this file. Each line should contain a keyword and a value (sometimes optional), separated by a space or an equals (=) sign. You need to specify a pattern and the action(s) to take when the pattern is matched.
We will use a simple configuration file; you can find more options in the swatchdog man page. For instance:
watchfor /sudo/
    echo red
    mail=admin@tecmint.com, subject="Sudo Command"
Here, our regular expression is a literal string, “sudo”, meaning that any time the string sudo appears in the log file, two actions are taken: echo prints the matched line to the terminal in red text, and mail sends an e-mail to the specified address, respectively.
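Two other standard swatchdog actions are worth knowing: throttle limits how often repeated matches trigger actions (as hours:minutes:seconds), and exec runs an external command. A sketch combining them (the script path here is hypothetical):

```
watchfor /Invalid user/
    echo red
    throttle 00:05:00
    exec "/usr/local/bin/notify-admin.sh"
```

Throttling is useful for noisy patterns such as brute-force login attempts, which would otherwise trigger a flood of e-mails.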
After you have configured it, swatchdog reads the /var/log/syslog log file by default; if this file is not present, it reads /var/log/messages.
$ swatch    [On RHEL/CentOS & Fedora]
$ swatchdog [On Ubuntu/Debian]
You can specify a different configuration file using the -c flag as shown in the following example.
First create a swatch configuration directory and a file.
$ mkdir swatch
$ touch swatch/secure.conf
Next, add the following configuration to the file to monitor failed login attempts, failed SSH login attempts and successful SSH logins in the /var/log/secure log file.
watchfor /FAILED/
    echo red
    mail=admin@tecmint.com, subject="Failed Login Attempt"
watchfor /ROOT LOGIN/
    echo red
    mail=admin@tecmint.com, subject="Successful Root Login"
watchfor /ssh.*: Failed password/
    echo red
    mail=admin@tecmint.com, subject="Failed SSH Login Attempt"
watchfor /ssh.*: session opened for user root/
    echo red
    mail=admin@tecmint.com, subject="Successful SSH Root Login"
Now run swatchdog, specifying the configuration file with the -c flag and the log file with the -t flag, as shown.
$ swatchdog -c ~/swatch/secure.conf -t /var/log/secure
To run it in the background, use the --daemon flag; in this mode, it is detached from any terminal.
$ swatchdog -c ~/swatch/secure.conf -t /var/log/secure --daemon
Now, to test the swatchdog configuration, try to log in to the server from a different terminal. You will see output like the following printed to the terminal where swatchdog is running.
*** swatch version 3.2.3 (pid:16531) started at Thu Jul 12 12:45:10 BST 2018
Jul 12 12:51:19 tecmint sshd[16739]: Failed password for root from 192.168.0.103 port 33324 ssh2
Jul 12 12:51:19 tecmint sshd[16739]: Failed password for root from 192.168.0.103 port 33324 ssh2
Jul 12 12:52:07 tecmint sshd[16739]: pam_unix(sshd:session): session opened for user root by (uid=0)
Jul 12 12:52:07 tecmint sshd[16739]: pam_unix(sshd:session): session opened for user root by (uid=0)
You can also run multiple swatch processes to monitor various log files.
$ swatchdog -c ~/site1_watch_config -t /var/log/nginx/site1/access_log --daemon
$ swatchdog -c ~/messages_watch_config -t /var/log/messages --daemon
$ swatchdog -c ~/auth_watch_config -t /var/log/auth.log --daemon
For more information, check out the swatchdog man page.
$ man swatchdog
Swatchdog SourceForge Repository: https://sourceforge.net/projects/swatch/
The following are some additional log monitoring guides that you will find useful:
- 4 Ways to Watch or Monitor Log Files in Real Time
 - How to Create a Centralized Log Server with Rsyslog
 - Monitor Server Logs in Real-Time with “Log.io” Tool
 - lnav – Watch and Analyze Apache Logs from a Linux Terminal
 - ngxtop – Monitor Nginx Log Files in Real Time in Linux
 
Swatchdog is a simple active log file monitoring tool for Unix-like systems such as Linux. Try it out and share your thoughts or ask any questions in the comments section.
How to Set Linux Process Priority Using nice and renice Commands
In this article, we’ll briefly explain the kernel scheduler (also known as the process scheduler) and process priority; a full treatment of these topics is beyond the scope of this guide. Then we will dive into a bit of Linux process management: we will see how to run a program or command with a modified priority and how to change the priority of running Linux processes.
Read Also: How to Monitor Linux Processes and Set Process Limits on a Per-User Basis
Understanding the Linux Kernel Scheduler
The kernel scheduler is a unit of the kernel that determines the most suitable process out of all runnable processes to execute next; it allocates processor time between the runnable processes on a system. A runnable process is one that is waiting only for CPU time; it is otherwise ready to be executed.
The scheduler forms the core of multitasking in Linux, using a priority-based scheduling algorithm to choose between the runnable processes in the system. It ranks processes by how deserving they are of CPU time as well as by how much they need it.
Understanding Process Priority and Nice Value
The kernel stores a great deal of information about processes including process priority which is simply the scheduling priority attached to a process. Processes with a higher priority will be executed before those with a lower priority, while processes with the same priority are scheduled one after the next, repeatedly.
There are a total of 140 priorities and two distinct priority ranges implemented in Linux. The first is the nice value (niceness), which ranges from -20 (highest priority) to 19 (lowest priority), with a default of 0; this is what we will cover in this guide. The other is the real-time priority, which ranges from 1 to 99; the internal kernel priorities 100 to 139 are meant for user-space processes.
One important characteristic of Linux is dynamic priority-based scheduling, which allows the nice value of a process to be changed (increased or decreased) depending on your needs, as we’ll see later on.
How to Check Nice Value of Linux Processes
To see the nice values of processes, we can use utilities such as ps, top or htop.
To view the nice values of processes with the ps command in a user-defined format (here, the NI column shows the niceness of each process), run:
$ ps -eo pid,ppid,ni,comm

View Linux Processes Nice Values
Alternatively, you can use the top or htop utilities to view the nice values of Linux processes, as shown.
$ top
$ htop

Check Linux Process Nice Values using Top Command

Check Linux Process Nice Values using Htop Command
Difference Between PR or PRI and NI
From the top and htop outputs above, you’ll notice a column called PR and PRI respectively, which shows the priority of a process.
This therefore means that:
- NI – the nice value, which is a user-space concept.
 - PR or PRI – the process’s actual priority, as seen by the Linux kernel.
How To Calculate PR or PRI Values
Total number of priorities = 140
Real-time priority range (PR or PRI): 0 to 99
User-space priority range: 100 to 139
Nice value range (NI): -20 to 19

PR = 20 + NI
PR = 20 + (-20 to +19)
PR = 0 to 39, which is the same as 100 to 139.
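The mapping above can be sanity-checked on a live system. This is only an illustration: it starts a throw-away sleep process, lowers its priority to niceness 5 (an arbitrary value), and reads the NI value back with ps.

```shell
# Start a background process and raise its niceness (lower its priority).
sleep 30 &
pid=$!
renice -n 5 -p "$pid" >/dev/null

# Read the nice value back and compute the corresponding PR value.
ni=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "NI=$ni PR=$((20 + ni))"   # NI=5 PR=25

# Clean up the throw-away process.
kill "$pid"
```

Note that an unprivileged user can always increase a process’s niceness like this; decreasing it (raising priority) requires root.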
But if you see rt rather than a number, as shown in the screenshot below, it means the process is running at real-time scheduling priority.

Linux rt Process
How to Run A Command with a Given Nice Value in Linux
Here, we will look at how to prioritize the CPU usage of a program or command. If you have a very CPU-intensive program or task, but you also understand that it might take a long time to complete, you can give it a high or favorable priority using the nice command.
The syntax is as follows:
$ nice -n niceness-value [command args]
OR
$ nice -niceness-value [command args]   # confusing for negative values
OR
$ nice --adjustment=niceness-value [command args]
Important:
- If no value is provided, nice defaults to a niceness of 10.
 - A command or program run without nice runs with a niceness of zero.
 - Only root can run a command or program with increased or high priority.
 - Normal users can only run a command or program with low priority.
 
For example, instead of starting a program or command with the default priority, you can start it with a specific priority using the following nice command.
$ sudo nice -n 5 tar -czf backup.tar.gz ./Documents/*
OR
$ sudo nice --adjustment=5 tar -czf backup.tar.gz ./Documents/*
You can also use the third method which is a little confusing especially for negative niceness values.
$ sudo nice -5 tar -czf backup.tar.gz ./Documents/*
Change the Scheduling Priority of a Process in Linux
As we mentioned before, Linux allows dynamic priority-based scheduling. Therefore, if a program is already running, you can change its priority with the renice command in this form:
$ renice -n -12 -p 1055
$ renice -n -2 -u apache

Change Process Priority
From the sample top output below, the niceness of the teamspe+ process with PID 1055 is now -12, and that of all processes owned by user apache is -2.
Still using this output, you can see that the formula PR = 20 + NI holds:
PR for ts3server = 20 + (-12) = 8
PR for apache processes = 20 + (-2) = 18

Watch Processes Nice Values
Any changes you make with the renice command to a user’s process nice values last only until the next reboot. To set permanent default values, read the next section.
How To Set Default Nice Value Of a Specific User’s Processes
You can set the default nice value of a particular user or group in the /etc/security/limits.conf file. Its primary function is to define resource limits for users logged in via PAM.
The syntax for defining a limit for a user is as follows (and the possible values of the various columns are explained in the file):
#<domain> <type> <item> <value>
Now use the syntax below, where hard means enforcing hard limits and soft means enforcing soft limits.
<username> <hard|soft> priority <nice value>
Alternatively, create a file under /etc/security/limits.d/ which overrides settings in the main file above, and these files are read in alphabetical order.
Start by creating the file /etc/security/limits.d/tecmint-priority.conf for user tecmint:
# vi /etc/security/limits.d/tecmint-priority.conf
Then add this configuration in it:
tecmint hard priority 10
Save and close the file. From now on, any process owned by tecmint will have a nice value of 10 and PR of 30.
For more information, read the man pages of nice and renice:
$ man nice
$ man renice
You might also like to read the following articles about Linux process management.
- How to Find and Kill Running Processes in Linux
 - A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux
 - How to Monitor System Usage, Outages and Troubleshoot Linux Servers
 - CPUTool – Limit and Control CPU Utilization of Any Process in Linux
 
In this article, we briefly explained the kernel scheduler and process priority, and looked at how to run a program or command with a modified priority and how to change the priority of active Linux processes. You can share any thoughts regarding this topic via the feedback form below.
Conky – The Ultimate X Based System Monitor Application
Conky is a system monitor application written in the C programming language and released under the GNU General Public License and BSD License. It is available for Linux and BSD operating systems. The application is X (GUI) based and was originally forked from Torsmo.
Features
- Simple user interface
 - High degree of configurability
 - Can show system stats using built-in objects (300+) as well as external scripts, either on the desktop or in its own container
 - Low resource utilization
 - Shows stats for a wide range of system variables, including but not restricted to CPU, memory, swap, temperature, processes, disk, network, battery, email, system messages, music player, weather, breaking news and updates
 - Available in the default installation of distributions like CrunchBang Linux and Pinguy OS
 
Lesser Known Facts about Conky
- The name conky was derived from a Canadian television show.
 - It has already been ported to the Nokia N900.
 - It is no longer officially maintained.
 
Conky Installation and Usage in Linux
Before we install conky, we need to install packages like lm-sensors, curl and hddtemp using the following command.
# apt-get install lm-sensors curl hddtemp
Now it is time to detect the sensors.
# sensors-detect
Note: Answer ‘Yes‘ when prompted!
Check all the detected sensors.
# sensors
Sample Output
acpitz-virtual-0
Adapter: Virtual device
temp1:          +49.5°C  (crit = +99.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +49.0°C  (high = +100.0°C, crit = +100.0°C)
Core 0:         +49.0°C  (high = +100.0°C, crit = +100.0°C)
Core 1:         +49.0°C  (high = +100.0°C, crit = +100.0°C)
Conky can be installed from the repositories as well as compiled from source.
# yum install conky         [On RedHat systems]
# apt-get install conky-all [On Debian systems]
Note: Before you install conky on Fedora/CentOS, you must have enabled EPEL repository.
After conky has been installed, just issue the following command to start it.
$ conky &
It will run conky in a popup-like window. It uses the basic conky configuration file located at /etc/conky/conky.conf.
You may want to integrate conky with the desktop rather than having a popup-like window appear every time. Here is what you need to do.
Copy the configuration file /etc/conky/conky.conf to your home directory and rename it as ‘.conkyrc‘. The dot (.) at the beginning ensures that the configuration file is hidden.
$ cp /etc/conky/conky.conf /home/$USER/.conkyrc
Now restart conky to apply the new changes.
$ killall -SIGUSR1 conky
You may edit the conky configuration file located in your home directory. The configuration file is very easy to understand.
Here is a sample configuration of conky.
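A minimal illustrative ~/.conkyrc, in the pre-1.10 syntax used by the conky 1.9 series this article describes, might look like this (all values are examples, not recommendations):

```
alignment top_right
background no
update_interval 2.0
own_window yes
double_buffer yes

TEXT
${time %H:%M:%S}
Uptime: $uptime
CPU: ${cpu}%  RAM: $mem / $memmax
```

The section before TEXT controls window behavior, and everything after TEXT is rendered on screen, with ${...} objects replaced by live values.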
In the configuration file, you can modify color, borders, size, scale, background, alignment and several other properties. By setting different alignments for different conky windows, you can run more than one conky script at a time.
Using a script other than the default for conky, and where to find one
You may write your own conky script or use one available on the Internet. We don’t suggest using any script you find on the web, as it can be potentially dangerous, unless you know what you are doing. However, a few well-known threads and pages have conky scripts that you can trust, as mentioned below.
http://ubuntuforums.org/showthread.php?t=281865
http://conky.sourceforge.net/screenshots.html
At the above URLs, you will find that every screenshot has a hyperlink that redirects to the script file.
Testing Conky Script
Here I will be running a third-party conky script on my Debian Jessie machine as a test.
$ wget https://github.com/alexbel/conky/archive/master.zip
$ unzip master.zip
Change the current working directory to the just-extracted directory.
$ cd conky-master
Rename the secrets.yml.example to secrets.yml.
$ mv secrets.yml.example secrets.yml
Install Ruby before running this (Ruby) script.
$ sudo apt-get install ruby
$ ruby starter.rb
Note: This script can be modified to show your current weather, temperature, etc.
If you want to start conky at boot, add the one-liner below to your Startup Applications, then save and exit.
conky --pause 10
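Alternatively, you can create a desktop autostart entry by hand. This is a standard freedesktop.org autostart file; the filename is just a suggestion. Save it as ~/.config/autostart/conky.desktop:

```
[Desktop Entry]
Type=Application
Name=Conky
Exec=conky --pause 10
X-GNOME-Autostart-enabled=true
```

The --pause 10 delay gives the desktop environment time to finish loading before conky draws on it.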
And finally… such a lightweight and useful GUI eye-candy package is no longer in active development and is not officially maintained anymore. The last stable release was conky 1.9.0, released on May 3, 2012. A thread on the Ubuntu forums where users share configurations has gone past 2,000 pages. (Forum link: http://ubuntuforums.org/showthread.php?t=281865/)
That’s all for now. Keep connected. Keep commenting. Share your thoughts and configuration in comments below.
5 Tools to Scan a Linux Server for Malware and Rootkits
Linux servers are under constant attack and port scanning. A properly configured firewall and regular security system updates add an extra layer to keep the system safe, but you should also frequently check whether anyone has gotten in. This also helps ensure that your server stays free of any program that aims to disrupt its normal operation.
The tools presented in this article are created for these security scans; they are able to identify viruses, malware, rootkits and malicious behavior. You can use these tools to run regular system scans, e.g. every night, and mail the reports to your email address.
1. Lynis – Security Auditing and Rootkit Scanner
Lynis is a free, open source, powerful and popular security auditing and scanning tool for Unix/Linux-like operating systems. It is a malware scanning and vulnerability detecting tool that scans systems for security information and issues, checks file integrity and configuration errors, performs firewall auditing, checks installed software and file/directory permissions, and much more.
Importantly, it doesn’t automatically perform any system hardening; it simply offers suggestions that enable you to harden your server.
We will install the latest version of Lynis (i.e. 2.6.6) from source using the following commands.
# cd /opt/
# wget https://downloads.cisofy.com/lynis/lynis-2.6.6.tar.gz
# tar xvzf lynis-2.6.6.tar.gz
# mv lynis /usr/local/
# ln -s /usr/local/lynis/lynis /usr/local/bin/lynis
Now you can perform your system scanning with the command below.
# lynis audit system
To run Lynis automatically every night, add the following cron entry, which runs at 3 AM and sends reports to your email address.
0 3 * * * /usr/local/bin/lynis --quick 2>&1 | mail -s "Lynis Reports of My Server" you@yourdomain.com
2. Chkrootkit – A Linux Rootkit Scanner
Chkrootkit is another free, open source rootkit detector that locally checks for signs of a rootkit on Unix-like systems. It helps to detect hidden security holes. The chkrootkit package consists of a shell script that checks system binaries for rootkit modification and a number of programs that check various security issues.
The chkrootkit tool can be installed using the following command on Debian-based systems.
$ sudo apt install chkrootkit
On CentOS-based systems, you need to install it from source using the following commands.
# yum update
# yum install wget gcc-c++ glibc-static
# wget -c ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
# tar -xzf chkrootkit.tar.gz
# mkdir /usr/local/chkrootkit
# mv chkrootkit-0.52/* /usr/local/chkrootkit
# cd /usr/local/chkrootkit
# make sense
To check your server with Chkrootkit run the following command.
$ sudo chkrootkit
OR
# /usr/local/chkrootkit/chkrootkit
Once run, it will start checking your system for known malware and rootkits, and after the process is finished, you can see a summary report.
To run Chkrootkit automatically every night, add the following cron entry, which runs at 3 AM and sends reports to your email address.
0 3 * * * /usr/sbin/chkrootkit 2>&1 | mail -s "chkrootkit Reports of My Server" you@yourdomain.com
3. Rkhunter – A Linux Rootkit Scanner
RKH (RootKit Hunter) is a free, open source, powerful, simple to use and well-known tool for scanning for backdoors, rootkits and local exploits on POSIX-compliant systems such as Linux. As the name implies, it is a rootkit hunter and a security monitoring and analysis tool that thoroughly inspects a system to detect hidden security holes.
The rkhunter tool can be installed with the following commands on Ubuntu- and CentOS-based systems.
$ sudo apt install rkhunter    [On Ubuntu/Debian]
# yum install epel-release     [On RHEL/CentOS]
# yum install rkhunter
To check your server with rkhunter run the following command.
# rkhunter -c
To run rkhunter automatically every night, add the following cron entry, which runs at 3 AM and sends reports to your email address.
0 3 * * * /usr/sbin/rkhunter -c 2>&1 | mail -s "rkhunter Reports of My Server" you@yourdomain.com
4. ClamAV – Antivirus Software Toolkit
ClamAV is an open source, versatile, popular and cross-platform antivirus engine to detect viruses, malware, trojans and other malicious programs on a computer. It is one of the best free anti-virus programs for Linux and the open source standard for mail gateway scanning software that supports almost all mail file formats.
It supports virus database updates on all systems and on-access scanning on Linux only. In addition, it can scan within archives and compressed files, supporting formats such as Zip, Tar, 7Zip and Rar, among others.
ClamAV can be installed with the following command on Debian-based systems.
$ sudo apt-get install clamav
ClamAV can be installed with the following commands on CentOS-based systems.
# yum -y update
# yum -y install clamav
Once installed, you can update the signatures and scan a directory with the following commands.
# freshclam
# clamscan -r -i DIRECTORY
Here, DIRECTORY is the location to scan. The -r option means scan recursively, and -i means show only infected files.
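To confirm the scanner is wired up correctly, you can scan the standard EICAR test string, which every antivirus engine is expected to flag as infected. The directory path below is arbitrary, and the clamscan line assumes ClamAV is already installed, so it is shown commented out.

```shell
# Create a harmless EICAR test file; the string itself is the industry-standard
# antivirus test pattern and contains no actual malicious code.
mkdir -p /tmp/clamtest
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/clamtest/eicar.txt

# With ClamAV installed, this scan should report exactly one infected file:
# clamscan -r -i /tmp/clamtest
```

If clamscan does not flag the test file, run freshclam first to make sure the signature database is up to date.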
5. LMD – Linux Malware Detect
LMD (Linux Malware Detect) is an open source, powerful and fully-featured malware scanner for Linux specifically designed and targeted at shared hosted environments, but can be used to detect threats on any Linux system. It can be integrated with ClamAV scanner engine for better performance.
It provides a full reporting system to view current and previous scan results, supports e-mail alert reporting after every scan execution and many other useful features.
For LMD installation and usage, read our article How to Install and Use Linux Malware Detect (LMD) with ClamAV as Antivirus Engine.
That’s all for now! In this article, we shared a list of 5 tools to scan a Linux server for malware and rootkits. Let us know of your thoughts in the comments section.
Linfo – Shows Linux Server Health Status in Real-Time
Linfo is a free and open source, cross-platform server statistics UI/library that displays a great deal of system information. It is an extensible, easy-to-use (via composer) PHP5 library for getting extensive system statistics programmatically from your PHP application. It offers both a web UI and an ncurses CLI view, and works on Linux, Windows, *BSD, Darwin/Mac OSX, Solaris and Minix.
It displays system info including CPU type/speed, architecture, mount point usage, hard/optical/flash drives, hardware devices, network devices and stats, uptime/date booted, hostname, memory usage (RAM and swap, if possible), temperatures/voltages/fan speeds and RAID arrays.
Requirements:
- PHP 5.3
 - pcre extension
 - Linux – /proc and /sys mounted and readable by PHP; tested with the 2.6.x/3.x kernels
 
How to Install Linfo Server Stats UI/library in Linux
First, create a Linfo directory in your Apache or Nginx web root, then clone the repository and move its files into /var/www/html/linfo using the rsync command as shown below:
$ sudo mkdir -p /var/www/html/linfo
$ git clone git://github.com/jrgp/linfo.git
$ sudo rsync -av linfo/ /var/www/html/linfo/
Then rename sample.config.inc.php to config.inc.php. This is the Linfo config file; you can define your own values in it:
$ sudo mv /var/www/html/linfo/sample.config.inc.php /var/www/html/linfo/config.inc.php
Now open the URL http://SERVER_IP/linfo in a web browser to see the Web UI as shown in the screenshots below.
This screenshot shows the Linfo Web UI displaying core system info, hardware components, RAM stats, network devices, drives and file system mount points.
You can add the line below in the config file config.inc.php to yield useful error messages for troubleshooting purposes:
$settings['show_errors'] = true;
Running Linfo in Ncurses Mode
Linfo has a simple ncurses-based interface, which relies on PHP’s ncurses extension.
# yum install php-pecl-ncurses                    [On CentOS/RHEL]
# dnf install php-pecl-ncurses                    [On Fedora]
$ sudo apt-get install php5-dev libncurses5-dev   [On Debian/Ubuntu]
Now compile the PHP extension as follows.
$ wget http://pecl.php.net/get/ncurses-1.0.2.tgz
$ tar xzvf ncurses-1.0.2.tgz
$ cd ncurses-1.0.2
$ phpize          # generate configure script
$ ./configure
$ make
$ sudo make install
Next, if you successfully compiled and installed the PHP extension, run the command below to enable it.
$ echo "extension=ncurses.so" | sudo tee /etc/php5/cli/conf.d/ncurses.ini
Verify that the ncurses extension is loaded.
$ php -m | grep ncurses
Now run Linfo.
$ cd /var/www/html/linfo/
$ ./linfo-curses
The following features are yet to be added to Linfo:
- Support for more Unix operating systems (such as Hurd, IRIX, AIX, HP UX, etc)
 - Support for less known operating systems: Haiku/BeOS
 - Extra superfluous features/extensions
 - Support for htop-like features in ncurses mode
 
For more information, visit Linfo Github repository: https://github.com/jrgp/linfo
That’s all! From now on, you can view a Linux system’s information from within a web browser using Linfo. Try it out and share with us your thoughts in the comments. Additionally, have you come across any similar useful tools/libraries? If yes, then give us some info about them as well.
Use Glances to Monitor Remote Linux in Web Server Mode
Glances is a free open source, modern, cross-platform, real-time top and htop like system monitoring tool. It offers advanced features compared to its counterparts, and can run in different modes: as a standalone, in client/server mode and in web server mode.
Considering the web server mode, you do not necessarily need to log into your remote server via SSH to run glances, you can run it in web server mode and access it via a web browser to remotely monitor your Linux server, as explained below.
To run glances in web server mode, you need to install it together with the Python bottle module, a fast, simple and lightweight WSGI micro web-framework, using the appropriate command for your Linux distribution.
$ sudo apt install glances python-bottle   #Debian/Ubuntu
$ sudo yum install glances python-bottle   #RHEL/CentOS
$ sudo dnf install glances python-bottle   #Fedora 22+
Alternatively, install them using the pip command as shown.
$ sudo pip install glances bottle
Once you have installed the above packages, launch glances with the -w flag to run it in web server mode. By default, it will listen on port 61208.
$ glances -w
OR
$ glances -w &
If you are running the firewalld service, you should open port 61208 to allow inbound traffic to that port.
$ sudo firewall-cmd --permanent --add-port=61208/tcp
$ sudo firewall-cmd --reload
For UFW firewall, run these commands.
$ sudo ufw allow 61208/tcp
$ sudo ufw reload
After that, from a web browser, use the URL http://SERVER_IP:61208/ to access the glances UI.
If you are using the systemd system and service manager, you can run glances in web server mode as a service for easier management, as explained in the next section. I actually prefer this method to running it as a background process.
Run Glances in Web Server Mode as a Service
Start by creating your service unit file (which I have chosen to name glancesweb.service) under /usr/lib/systemd/system/glancesweb.service.
$ sudo vim /usr/lib/systemd/system/glancesweb.service
Then copy and paste the unit file configuration below in it.
[Unit]
Description = Glances in Web Server Mode
After = network.target

[Service]
ExecStart = /usr/bin/glances -w -t 5

[Install]
WantedBy = multi-user.target
The above configuration tells systemd that this is a unit of type service and that it should be loaded after network.target.
Once the system reaches the network target, systemd will invoke the command “/usr/bin/glances -w -t 5” as a service. The -t flag specifies the interval, in seconds, between live updates.
The [Install] section informs systemd that this service is wanted by “multi-user.target”. Therefore, when you enable it, a symbolic link is created from /etc/systemd/system/multi-user.target.wants/glancesweb.service to /usr/lib/systemd/system/glancesweb.service. Disabling the service deletes this symbolic link.
Next, enable your new systemd service, start and view its status as follows.
$ sudo systemctl enable glancesweb.service
$ sudo systemctl start glancesweb.service
$ sudo systemctl status glancesweb.service
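If you later need glances to listen on a different port, one option (a sketch, using the hypothetical example port 8080) is a systemd drop-in override; glances accepts the -p flag to set the listening port:

```ini
# /etc/systemd/system/glancesweb.service.d/override.conf
# The empty ExecStart= first clears the original command,
# then the second line sets the replacement (8080 is an example port).
[Service]
ExecStart =
ExecStart = /usr/bin/glances -w -p 8080 -t 5
```

After creating the drop-in, run sudo systemctl daemon-reload and restart the service for the override to take effect.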
Finally, from your web browser, use the URL http://SERVER_IP:61208/ to remotely monitor your Linux servers via the glances UI, on any device (smart phone, tablet or computer).
You can also change the refresh rate of the page: simply append the period, in seconds, to the end of the URL. For example, the following URL sets the refresh rate to 8 seconds.
http://SERVER_IP:61208/8
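As a quick sketch, you can build that URL in a shell script; SERVER_IP and INTERVAL below are placeholder values to replace with your own:

```shell
#!/bin/sh
SERVER_IP="192.168.1.10"   # placeholder: your server's IP address
INTERVAL=8                 # refresh rate in seconds
URL="http://${SERVER_IP}:61208/${INTERVAL}"
echo "$URL"                # open this address in your browser
```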
One downside of running glances in web server mode is that if the Internet connection is poor, the client tends to disconnect from the server easily.
That’s all! If you have any questions or additional information to add, use the comment form below.
How to Install Security Updates Automatically on Debian and Ubuntu
It has been said before, and I couldn’t agree more, that some of the best system administrators are those who seem (note the use of the word seem here) to be lazy all the time.
While that may sound somewhat paradoxical, it is true in most cases: not because they are not doing the job they are supposed to be doing, but rather because they have automated most of it.
One of the critical needs of a Linux system is to be kept up to date with the latest security patches available for the corresponding distribution.
In this article we will explain how to set up your Debian or Ubuntu system to install essential security packages and patches automatically when needed.
Other Linux distributions, such as CentOS/RHEL, can also be configured to install security updates automatically.
Needless to say, you will need superuser privileges in order to perform the tasks outlined in this article.
Configure Automatic Security Updates On Debian/Ubuntu
To begin, install the following packages:
# aptitude update -y && aptitude install unattended-upgrades apt-listchanges -y
where apt-listchanges will report what has been changed during an upgrade.
Next, open /etc/apt/apt.conf.d/50unattended-upgrades with your preferred text editor and uncomment (or add) the following line so that upgrade reports are emailed to root:
Unattended-Upgrade::Mail "root";
Finally, use the following command to create and populate the required configuration file (/etc/apt/apt.conf.d/20auto-upgrades) to activate the unattended updates:
# dpkg-reconfigure -plow unattended-upgrades
Choose Yes when prompted to install unattended upgrades:
Then check that the following two lines have been added to /etc/apt/apt.conf.d/20auto-upgrades:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
And add this line to make reports verbose:
APT::Periodic::Verbose "2";
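For reference, the resulting /etc/apt/apt.conf.d/20auto-upgrades could then look like the following; the Download-Upgradeable-Packages and AutocleanInterval lines are optional extras, shown here with example values:

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::Verbose "2";
```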
Last, inspect /etc/apt/listchanges.conf to make sure notifications will be sent to root.
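For example, a minimal /etc/apt/listchanges.conf that mails change summaries to root could look like this (a sketch; adjust the values to your setup):

```ini
[apt]
frontend=mail
email_address=root
confirm=0
which=both
```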
In this post we have explained how to make sure your system is updated regularly with the latest security patches. Additionally, you learned how to set up notifications in order to keep yourself informed when patches are applied.
Do you have any questions about this article? Feel free to drop us a note using the comment form below. We look forward to hearing from you.
How to Install Latest GIMP 2.10 in Ubuntu
GIMP (in full GNU Image Manipulation Program) is a free open source, powerful, and cross-platform image manipulation software that runs on GNU/Linux, OS X, Windows plus many other operating systems.
It is highly customizable and extensible via third party plugins. It offers advanced tools for graphic designers, photographers, graphical illustrators as well as scientists for high quality image manipulation.
For programmers, it also supports scripted image manipulation in multiple programming languages, such as C, C++, Perl, Python, and Scheme. The latest major release of GIMP is version 2.10, which came out a few weeks ago, and there is already a point release, GIMP 2.10.2.
What’s New in GIMP 2.10
Some of the important new highlights of this release are:
- Ships with a number of new and improved tools, such as the Warp transform, the Unified transform and the Handle transform tools.
 - Color management has become a core feature.
 - Improvements to histogram computation.
 - Added support for HEIF image format.
 - Image processing almost completely ported to GEGL.
 - Uses on-canvas preview for all filters ported to GEGL.
 - Improved digital painting with support for features such as canvas rotation and flipping, symmetry painting, and MyPaint brushes.
 - Support for several new image formats such as OpenEXR, RGBE, WebP, and HGT.
 - Supports metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
 - Offers basic HiDPI support.
 - Comes with some fresh themes: Light, Gray, Dark, and System, plus new symbolic icons.
 - Added two new filters: spherize and recursive transform, and more.
 
If you want to know more about GIMP 2.10 features in detail, please refer to its release note.
Install GIMP 2.10 in Ubuntu & Linux Mint
You can install or update GIMP on Ubuntu and Linux Mint using one of the following methods.
Install Gimp Using PPA
The developer Otto Kesselgulasch maintains an unofficial PPA with the latest version of GIMP, which you can install on Ubuntu 17.10 and 18.04 (16.04 builds are said to be on the way).
$ sudo add-apt-repository ppa:otto-kesselgulasch/gimp
$ sudo apt update
$ sudo apt install gimp
The above PPA will install or upgrade (if you already have GIMP 2.8) to GIMP 2.10.
Install Gimp Using Flatpak
This is the recommended way to install GIMP 2.10 on Ubuntu, Linux Mint and other Ubuntu-based Linux distributions: using the official Flatpak app from the Flathub app store.
If you don’t have support for Flatpak, then you need to enable Flatpak support first using the following commands.
$ sudo add-apt-repository ppa:alexlarsson/flatpak
$ sudo apt update
$ sudo apt install flatpak
Once you have Flatpak support, use the following command to install GIMP 2.10.
$ flatpak install https://flathub.org/repo/appstream/org.gimp.GIMP.flatpakref
Once GIMP is installed, if you don’t see it in the menu, you can start it using the following command.
$ flatpak run org.gimp.GIMP
Uninstall GIMP 2.10 in Ubuntu & Linux Mint
If for any reason you don’t like GIMP 2.10 and want to uninstall it or roll back to the old stable version, you need the ppa-purge program to purge the PPA from your system using the following commands.
$ sudo apt install ppa-purge
$ sudo ppa-purge ppa:otto-kesselgulasch/gimp
That’s all! In this article, we have explained how to install the latest version of GIMP 2.10 in Ubuntu, Linux Mint and Ubuntu-based Linux distributions. If you have any questions, let us know via the feedback form below.