How to Install MySQL 8.0 in Ubuntu 18.04

MySQL Community Server is a free, open-source, popular, and cross-platform database management system. It supports both SQL and NoSQL, and has a pluggable storage engine architecture. It also ships with database connectors for many programming languages, allowing you to develop applications in any of the well-known languages, among many other features.

Its use cases span document storage, the cloud, high-availability systems, IoT (Internet of Things), Hadoop, big data, data warehousing, and LAMP or LEMP stacks supporting high-volume websites/apps, and much more.

In this article, we will walk through a fresh installation of the MySQL 8.0 database system on Ubuntu 18.04 Bionic Beaver. Before we move on to the actual installation steps, let’s look at a summary of:

What’s New in MySQL 8.0

  • The database now incorporates a transactional data dictionary.
  • Comes with Atomic DDL statement support.
  • Enhanced security and account management.
  • Improvements to resource management.
  • Several InnoDB enhancements.
  • New type of backup lock.
  • Default character set has changed to utf8mb4 from latin1.
  • A couple of JSON enhancements.
  • Comes with regular expression support using International Components for Unicode (ICU).
  • New error logging which now uses the MySQL component architecture.
  • Enhancements to MySQL replication.
  • Supports common table expressions(both non-recursive and recursive).
  • Has an enhanced optimizer.
  • Additional window functions and more.

Step 1: Add MySQL Apt Repository

Luckily, there is an APT repository for installing the MySQL server, client, and other components. You need to add this MySQL repository to your system’s package sources list; start by downloading the repository package using the wget tool from the command line.

$ wget -c https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb 

Then install the MySQL repository package using the following dpkg command.

$ sudo dpkg -i mysql-apt-config_0.8.10-1_all.deb 

Note that during the package installation process, you will be prompted to choose the MySQL server version and the other components, such as the cluster, shared client libraries, or MySQL Workbench, that you want to configure for installation.

The MySQL server version mysql-8.0 will be auto-selected; scroll down to the last option, Ok, and press [Enter] to finish the configuration and installation of the release package, as shown in the screenshot.

Configure MySQL APT Config

Step 2: Install MySQL Server in Ubuntu 18.04

Next, download the latest package information from all configured repositories, including the recently added MySQL repository.

$ sudo apt update

Then run the following command to install packages for the MySQL community server, client and the database common files.

$ sudo apt-get install mysql-server

Install MySQL 8.0 in Ubuntu 18.04

During the installation process, you will be asked to enter a password for the root user of your MySQL server; re-enter the password to confirm it and press [Enter].

Set MySQL Root Password

Next, the MySQL server authentication plugin configuration message will appear, read through it and use the right arrow to choose Ok and press [Enter] to continue.

MySQL Authentication Configuration

Afterwards, you will be asked to select the default authentication plugin to use, then use the right arrow to choose Ok and press [Enter] to complete the package configuration.

Select MySQL Authentication Plugin

Step 3: Secure MySQL Server Installation

By default, the MySQL installation is insecure. To secure it, run the security script that comes with the binary package. You will be asked to enter the root password you set during the installation process, and to choose whether or not to use the VALIDATE PASSWORD plugin.

You can also change the root password you set before (as we have done in this example). Then answer yes/y to the following security questions:

  • Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
  • Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
  • Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
  • Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y

Launch the script by issuing the following command.

$ sudo mysql_secure_installation

Secure MySQL Server Installation

To further secure your MySQL server, read our article 12 MySQL/MariaDB Security Best Practices for Linux.

Step 4: Managing MySQL Server via Systemd

On Ubuntu, after installing a package, its service(s) are usually started automatically once the package is configured. You can check whether the MySQL server is up and running using the following command.

$ sudo systemctl status mysql

Check MySQL Server Status

If for one reason or another it isn’t started automatically, use the commands below to start it and enable it to start at system boot time, as follows.

$ sudo systemctl start mysql
$ sudo systemctl enable mysql

Step 5: Install Extra MySQL Products and Components

In addition, you can install extra MySQL components that you feel you need in order to work with the server, such as mysql-workbench-community, libmysqlclient18 and many others.

$ sudo apt-get update
$ sudo apt-get install mysql-workbench-community libmysqlclient18

Finally, to access the MySQL shell, issue the following command.

$ sudo mysql -u root -p

Connect to MySQL Server

For more information, read the MySQL 8.0 Release Notes.

That’s it! In this article, we have explained how to install MySQL 8.0 on Ubuntu 18.04 Bionic Beaver. If you have any questions or thoughts to share, use the comment form below to reach us.

Setup Local Repositories with ‘apt-mirror’ in Ubuntu and Debian Systems

When today’s traffic and everyday Internet speeds are measured in tens of gigabits in the blink of an eye, even for ordinary Internet clients, what is the purpose of setting up a local repository cache on a LAN, you may ask?

Setup Local Repositories in Ubuntu

One reason is to reduce Internet bandwidth usage and to pull packages at high speed from the local cache. Another major reason is privacy. Imagine that clients in your organization are Internet-restricted, but their Linux boxes still need regular software and security updates, or simply new packages. Going further, picture a server that runs on a private network, holds and serves sensitive information only to a restricted network segment, and should never be exposed to the public Internet.

These are just a few of the reasons why you should build a local repository mirror on your LAN, delegate an edge server for the job, and configure internal clients to pull software from its cache.

Ubuntu provides the apt-mirror package to synchronize a local cache with the official Ubuntu repositories; the mirror can then be published through an HTTP or FTP server to share its packages with local clients.

For a complete mirror cache, your server needs at least 120 GB of free space reserved for the local repositories.

Requirements

  1. Minimum 120 GB free space
  2. ProFTPD server installed and configured in anonymous mode.
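The first requirement can be checked up front with df. This sketch assumes /opt will hold the mirror, since that is the base_path chosen later in this guide (it falls back to the root filesystem if /opt is absent):

```shell
# Report free space on the filesystem that will hold the mirror cache
# (/opt is the base_path used later in this guide; fall back to /).
df -h /opt 2>/dev/null || df -h /
```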

Step 1: Configure Server

1. The first thing you may want to do is identify the closest and fastest Ubuntu mirrors near your location by visiting the Ubuntu Archive Mirror page and selecting your country.

Ubuntu Archive Mirror

If your country provides several mirrors, you should identify their addresses and run some tests based on ping or traceroute results.

Select Mirror Location

2. The next step is to install the software required for setting up the local mirror repository. Install the apt-mirror and proftpd packages and configure proftpd as a standalone system daemon.

$ sudo apt-get install apt-mirror proftpd-basic

Install apt-mirror Proftpd

ProFTPD Configuration

3. Now it’s time to configure the apt-mirror server. Open and edit the /etc/apt/mirror.list file, adding your nearest mirror locations (Step 1) – optional if the default mirrors are fast enough or you’re not in a hurry – and choose the system path where packages should be downloaded. By default apt-mirror uses /var/spool/apt-mirror for the local cache, but in this tutorial we will change that by pointing the base_path directive to /opt/apt-mirror.

$ sudo nano /etc/apt/mirror.list

Configure apt-mirror Server.

You can also uncomment or add other source lists before the clean directive – including Debian sources – depending on which Ubuntu versions your clients use. You can add sources from 12.04 if you like, but be aware that adding more sources requires more free space.
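Putting the pieces together, a minimal /etc/apt/mirror.list for this setup might look like the sketch below. The mirror URL, release name (trusty) and thread count are example values; adjust them to your location and your clients’ Ubuntu versions:

```shell
############# config ##################
set base_path    /opt/apt-mirror      # local cache location chosen above
set nthreads     20                   # parallel download threads (example)
set _tilde 0
############# end config ##############

# Repositories to mirror (trusty shown as an example release)
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu trusty-security main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu trusty-updates main restricted universe multiverse

clean http://archive.ubuntu.com/ubuntu
```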

For Debian source lists visit Debian Wiki or Debian Sources List Generator.

4. All you need to do now is create the path directory and run the apt-mirror command to synchronize the official Ubuntu repositories with your local mirror.

$ sudo mkdir -p /opt/apt-mirror
$ sudo apt-mirror

Create apt-mirror Paths

As you can see, apt-mirror proceeds with indexing and downloading the archives, showing the total number of packages to download and their size. As you can imagine, 110–120 GB is large enough to take quite some time to download.

You can run the ls command to view the directory contents.

Verify apt-mirror Paths

Once the initial download is completed, future downloads will be small.

5. While apt-mirror downloads the packages, you can configure your ProFTPD server. First, create an anonymous configuration file for proftpd by running the following command.

$ sudo nano /etc/proftpd/conf.d/anonymous.conf

Then add the following content to the anonymous.conf file and restart the proftpd service.

<Anonymous ~ftp>
   User                    ftp
   Group                nogroup
   UserAlias         anonymous ftp
   RequireValidShell        off
#   MaxClients                   10
   <Directory *>
     <Limit WRITE>
       DenyAll
     </Limit>
   </Directory>
 </Anonymous>

Configure ProFTPD

6. The next step is to link the apt-mirror path to the proftpd path with a bind mount, by issuing the following command.

$ sudo mount --bind /opt/apt-mirror/mirror/archive.ubuntu.com/  /srv/ftp/

Mount apt-mirror to ProFTP Path

To verify it, run the mount command with no parameters or options.

$ mount

Verify Paths

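As an alternative to remounting by hand, the same bind mount can be made persistent with an /etc/fstab entry (paths as configured above; this is a commonly used equivalent, not what this guide configures below):

```shell
# /etc/fstab entry (a single line); equivalent to the mount --bind command above
/opt/apt-mirror/mirror/archive.ubuntu.com  /srv/ftp  none  bind  0  0
```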
7. The last step is to make sure the ProFTPD server starts automatically after a system reboot and that the mirror cache directory is automatically mounted on the FTP server path. To enable proftpd at boot, run the following command.

$ sudo update-rc.d proftpd enable

To automatically mount the apt-mirror cache on the proftpd path, open and edit the /etc/rc.local file.

$ sudo nano /etc/rc.local

Add the following lines before the exit 0 directive. The 5-second sleep delays the mount attempt until the system has settled.

sleep 5
mount --bind /opt/apt-mirror/mirror/archive.ubuntu.com/ /srv/ftp/

Auto Mount Apt Mirrors

If you also pull packages from Debian repositories, run the following commands and make sure the corresponding entries are added to the rc.local file above.

$ sudo mkdir /srv/ftp/debian
$ sudo mount --bind /opt/apt-mirror/mirror/ftp.us.debian.org/debian/ /srv/ftp/debian/

Debian Repository Setup

8. For daily apt-mirror synchronization, you can also create a scheduled job to run at 2 AM every day. Run the crontab command, select your preferred editor, then add the following line.

$ sudo crontab -e

Daily apt-mirror Synchronization

On the last line, add the following entry.

0  2  *  *  *  /usr/bin/apt-mirror >> /opt/apt-mirror/mirror/archive.ubuntu.com/ubuntu/apt-mirror.log

Add Cron Entry for Synchronization

Now, every day at 2 AM, your repository cache will synchronize with the official Ubuntu mirrors and append to a log file.
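For reference, the five leading crontab fields are minute, hour, day-of-month, month and day-of-week, so the entry above reads as 02:00 every day. A weekly variant (Sundays at 2 AM) would look like:

```shell
# m  h  dom mon dow  command   (0 in the dow field means Sunday)
0    2   *   *   0   /usr/bin/apt-mirror >> /opt/apt-mirror/mirror/archive.ubuntu.com/ubuntu/apt-mirror.log
```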

Step 2: Configure Clients

9. To configure local Ubuntu clients, edit /etc/apt/sources.list on the client computers to point to the IP address or hostname of the apt-mirror server – replacing the http protocol with ftp – then update the system.

deb ftp://192.168.1.13/ubuntu trusty universe
deb ftp://192.168.1.13/ubuntu trusty main restricted
deb ftp://192.168.1.13/ubuntu trusty-updates main restricted
## And so on…
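If you would rather not edit the entries by hand, a sed substitution can rewrite an existing sources.list. The sketch below works on a sample copy in /tmp; the mirror address 192.168.1.13 matches the example above, and on a real client the target file is /etc/apt/sources.list:

```shell
# Demonstrated on a sample file; on a real client the target is /etc/apt/sources.list.
cat > /tmp/sources.list <<'EOF'
deb http://us.archive.ubuntu.com/ubuntu trusty main restricted
deb http://us.archive.ubuntu.com/ubuntu trusty-updates main restricted
EOF
# Point every archive.ubuntu.com entry at the local mirror
# (192.168.1.13 is an example; substitute your apt-mirror server's address).
sed -i 's|http://[a-z.]*archive\.ubuntu\.com|ftp://192.168.1.13|g' /tmp/sources.list
cat /tmp/sources.list
```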

Configure Clients

10. To view the repositories, you can open a browser and point it to your server’s IP address or domain name using the FTP protocol.

View Local Repositories

The same setup also applies to Debian clients and servers; the only changes needed are the Debian mirror address and sources list.

Also, if you install a fresh Ubuntu or Debian system, provide your local mirror manually with the FTP protocol when the installer asks which repository to use.

The great thing about having your own local mirror repository is that you’re always current, and your local clients don’t have to connect to the Internet to install updates or software.

How To Install Nginx, MariaDB 10, PHP 7 (LEMP Stack) in 16.10/16.04

The LEMP stack is an acronym for a group of packages (Linux OS, Nginx web server, MySQL/MariaDB database and the PHP server-side dynamic programming language) which are used to deploy dynamic web applications and web pages.

This tutorial will guide you through installing a LEMP stack with MariaDB 10, PHP 7 and HTTP/2.0 support for Nginx on Ubuntu 16.10 and Ubuntu 16.04 server/desktop editions.

Requirements

  1. Installation of Ubuntu 16.04 Server Edition [the instructions also work on Ubuntu 16.10]

Step 1: Install the Nginx Web Server

1. Nginx is a modern, resource-efficient web server used to serve web pages to visitors on the Internet. We’ll start by installing the Nginx web server from Ubuntu’s official repositories using the apt command line.

$ sudo apt-get install nginx

Install Nginx on Ubuntu 16.04

2. Next, issue the netstat and systemctl commands to confirm that Nginx is started and bound on port 80.

$ netstat -tlpn

Check Nginx Network Port Connection

$ sudo systemctl status nginx.service

Check Nginx Service Status

Once you have confirmation that the server is started, you can open a browser and navigate to your server’s IP address or DNS record using the HTTP protocol to visit the Nginx default web page.

http://IP-Address

Verify Nginx Webpage

Step 2: Enable Nginx HTTP/2.0 Protocol

3. The HTTP/2.0 protocol, which is built in by default in the latest release of the Nginx binaries on Ubuntu 16.04, works only in conjunction with SSL and promises a huge speed improvement in loading SSL web pages.

To enable the protocol in Nginx on Ubuntu 16.04, first navigate to the Nginx available-sites configuration directory and back up the default configuration file by issuing the commands below.

$ cd /etc/nginx/sites-available/
$ sudo mv default default.backup

Backup Nginx Sites Configuration File

4. Then, using a text editor, create a new default page with the instructions below:

server {
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;

        root /var/www/html;

        index index.html index.htm index.php;

        server_name 192.168.1.13;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
        ssl_dhparam  /etc/nginx/ssl/dhparam.pem;
        ssl_session_cache shared:SSL:20m;
        ssl_session_timeout 180m;
        resolver 8.8.8.8 8.8.4.4;
        add_header Strict-Transport-Security "max-age=31536000" always;
        # append "; includeSubDomains" to the value above to cover subdomains


        location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

        location ~ /\.ht {
                deny all;
        }

}

server {
       listen         80;
       listen    [::]:80;
       server_name    192.168.1.13;
       return         301 https://$server_name$request_uri;
}

Enable Nginx HTTP 2 Protocol

The above configuration snippet enables the use of HTTP/2.0 by adding the http2 parameter to all SSL listen directives.

The last part of the excerpt, enclosed in a separate server directive, redirects all non-SSL traffic to the SSL/TLS default host. Also, replace the server_name directive to match your own IP address or DNS record (preferably an FQDN).

5. Once you have finished editing the Nginx default configuration file with the above settings, generate and list the SSL certificate file and key by executing the commands below.

Fill in the certificate with your own custom settings and pay attention to the Common Name setting: it must match the DNS FQDN record or the server IP address that will be used to access the web page.

$ sudo mkdir /etc/nginx/ssl
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
$ ls /etc/nginx/ssl/
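The prompts can also be skipped by passing the subject on the command line. The subject values below are placeholders, and the CN must match the name or IP clients will use; the sketch writes to /tmp so it can be tried without root, whereas on the server you would keep /etc/nginx/ssl/ and sudo:

```shell
# Non-interactive variant of the same command (subject values are placeholders).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/nginx.key -out /tmp/nginx.crt \
    -subj "/C=US/ST=State/L=City/O=Example/CN=192.168.1.13"
ls -l /tmp/nginx.key /tmp/nginx.crt
```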

Generate SSL Certificate and Key for Nginx

6. Also, create a strong DH cipher, which is referenced in the above configuration file on the ssl_dhparam instruction line, by issuing the below command:

$ sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048

Create Diffie-Hellman Key

7. Once the Diffie-Hellman key has been created, verify that the Nginx configuration file is correctly written and can be applied by the Nginx web server, then restart the daemon to reflect the changes, by running the commands below.

$ sudo nginx -t
$ sudo systemctl restart nginx.service

Check Nginx Configuration

8. To test whether Nginx uses the HTTP/2.0 protocol, issue the command below. The presence of h2 among the advertised protocols confirms that Nginx has been successfully configured to use HTTP/2.0. All modern, up-to-date browsers support this protocol by default.

$ openssl s_client -connect localhost:443 -nextprotoneg ''

Test Nginx HTTP 2.0 Protocol

Step 3: Install PHP 7 Interpreter

Nginx can be used with the PHP dynamic processing language interpreter to generate dynamic web content, with the help of the FastCGI process manager obtained by installing the php-fpm binary package from Ubuntu’s official repositories.

9. To grab PHP 7.0 and the additional packages that allow PHP to communicate with the Nginx web server, issue the command below on your server console:

$ sudo apt install php7.0 php7.0-fpm 

Install PHP 7 and PHP-FPM for Nginx

10. Once the PHP 7.0 interpreter has been successfully installed on your machine, start and check the php7.0-fpm daemon by issuing the commands below:

$ sudo systemctl start php7.0-fpm
$ sudo systemctl status php7.0-fpm

Start and Verify php-fpm Service

11. The current Nginx configuration file is already set up to use the PHP FastCGI process manager to serve dynamic content.

The server block that enables Nginx to use the PHP interpreter is shown in the excerpt below, so no further modifications of the default Nginx configuration file are required.

location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

Below is a screenshot of the instructions you need to uncomment and modify in the case of an original Nginx default configuration file.

Enable PHP FastCGI for Nginx

12. To test the Nginx web server’s relation with the PHP FastCGI process manager, create an info.php test file by issuing the command below, then verify the settings by visiting the file at http://IP_or_domain/info.php.

$ sudo su -c 'echo "<?php phpinfo(); ?>" |tee /var/www/html/info.php'
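Equivalently, the file can be written without the nested root shell. The sketch below targets /tmp so it can be tried as a regular user; on the server the target is /var/www/html/info.php, written via sudo tee:

```shell
# Write the PHP test file; tee echoes the content and writes the file.
echo "<?php phpinfo(); ?>" | tee /tmp/info.php
```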

Create PHP Info File

Verify PHP FastCGI Info

Also check whether the HTTP/2.0 protocol is advertised by the server by locating the $_SERVER['SERVER_PROTOCOL'] line in the PHP Variables block, as illustrated in the screenshot below.

Check HTTP 2.0 Protocol Info

13. To install extra PHP 7.0 modules, use the apt search php7.0 command to find a module and install it.

Also consider installing the following PHP modules, which can come in handy if you are planning to install WordPress or another CMS.

$ sudo apt install php7.0-mcrypt php7.0-mbstring

Install PHP 7 Modules

14. To register the extra PHP modules, just restart the PHP-FPM daemon by issuing the command below.

$ sudo systemctl restart php7.0-fpm.service

Step 4: Install MariaDB Database

15. Finally, to complete our LEMP stack, we need the MariaDB database component to store and manage website data.

Install the MariaDB database management system by running the command below, then restart the PHP-FPM service so that PHP’s MySQL module can access the database.

$ sudo apt install mariadb-server mariadb-client php7.0-mysql
$ sudo systemctl restart php7.0-fpm.service

Install MariaDB for Nginx

16. To secure the MariaDB installation, run the security script provided by the binary package from the Ubuntu repositories. It will ask you to set a root password, remove anonymous users, disable remote root login and remove the test database.

Run the script by issuing the below command and answer all questions with yes. Use the below screenshot as a guide.

$ sudo mysql_secure_installation

Secure MariaDB Installation for Nginx

17. To configure MariaDB so that ordinary users can access the database without sudo privileges, go to the MySQL command-line interface with root privileges and run the commands below in the MySQL interpreter:

$ sudo mysql 
MariaDB> use mysql;
MariaDB> update user set plugin='' where User='root';
MariaDB> flush privileges;
MariaDB> exit

MariaDB User Permissions

Finally, log in to the MariaDB database and run an arbitrary command without sudo by executing the command below:

$ mysql -u root -p -e 'show databases'

Check MariaDB Databases

That’s all! You now have a LEMP stack configured on your Ubuntu 16.10 or Ubuntu 16.04 server, allowing you to deploy complex dynamic web applications that interact with databases.

How to Install LAMP with PHP 7 and MariaDB 10 on Ubuntu 16.10

In this article, we will go through the various steps to install the constituent packages of the LAMP stack, with PHP 7 and MariaDB 10, on Ubuntu 16.10 Server and Desktop editions.

As you may already know, the LAMP (Linux, Apache, MySQL/MariaDB, PHP) stack is an assortment of leading open-source web development software packages.

This web platform is made up of a web server, a database management system and a server-side scripting language, and is suitable for building dynamic websites and a wide range of web applications. It can be used in a testing or production environment to support small-scale to very large web-based projects.

One of the common uses of the LAMP stack is running content management systems (CMSs) such as WordPress, Joomla or Drupal and many others.

Requirements

  1. Ubuntu 16.10 Installation Guide

Step 1: Install Apache on Ubuntu 16.10

1. The first step is to install the Apache web server from the default Ubuntu official repositories by typing one of the following commands in the terminal:

$ sudo apt install apache2
OR
$ sudo apt-get install apache2

Install Apache on Ubuntu 16.10

2. After the Apache web server is successfully installed, confirm that the daemon is running and which ports it binds to (by default Apache listens on port 80) by running the commands below:

$ sudo systemctl status apache2.service 
$ sudo netstat -tlpn

Check Apache Status and Port

3. You can also confirm the Apache web server via a web browser by typing the server’s IP address using the HTTP protocol. A default Apache web page should appear in the browser, similar to the screenshot below:

http://your_server_IP_address

Verify Apache Web Server

4. If you want HTTPS support to secure your web pages, you can enable Apache’s SSL module and confirm the port by issuing the following commands:

$ sudo a2enmod ssl 
$ sudo a2ensite default-ssl.conf 
$ sudo systemctl restart apache2.service
$ sudo netstat -tlpn

Enable Apache SSL HTTPS Support on Ubuntu 16.10

5. Now confirm Apache SSL support using the HTTPS secure protocol by typing the address below into a web browser:

https://your_server_IP_address

You will get the following error page; this is because Apache is configured to run with a self-signed certificate. Just accept the certificate and proceed to bypass the error, and the web page should be displayed securely.

Apache Self-Signed Certificate Error

Apache HTTPS Support Enabled

6. Next, enable the Apache web server to start at boot time using the following command.

$ sudo systemctl enable apache2

Step 2: Install PHP 7 on Ubuntu 16.10

7. To install the most recent version of PHP 7, which is built to run with speed enhancements on Linux machines, first search for any existing PHP modules by running the command below:

$ sudo apt search php7.0

APT Search PHP 7 Modules

8. Once you have identified the PHP 7 modules you need, use the apt command to install them so that PHP can run scripts in conjunction with the Apache web server.

$ sudo apt install php7.0 libapache2-mod-php7.0 php7.0-mysql php7.0-xml php7.0-gd

Install PHP 7 with PHP Modules

9. After PHP 7 and its required modules are installed and configured on your server, run the php -v command to see the current release version of PHP.

$ php -v

Check Installed PHP Version

10. To further test PHP 7 and its module configuration, create an info.php file in the Apache web root directory /var/www/html/.

$ sudo nano /var/www/html/info.php

Add the following lines of code to the info.php file.

<?php 
phpinfo();
?>

Restart the Apache service to apply the changes.

$ sudo systemctl restart apache2

Open your web browser and type the following URL to check the PHP configuration.

https://your_server_IP_address/info.php 

Check PHP Configuration

11. If you want to install additional PHP modules, use the apt command and press the [TAB] key after the php7.0 string; the bash autocomplete feature will automatically show you all available PHP 7 modules.

$ sudo apt install php7.0[TAB]

List All Available PHP 7 Modules

Step 3: Install MariaDB 10 in Ubuntu 16.10

12. Now it’s time to install the latest version of MariaDB, along with the PHP modules needed to access the database from the Apache-PHP interface.

$ sudo apt install php7.0-mysql mariadb-server mariadb-client

Install MariaDB in Ubuntu 16.10

13. Once MariaDB has been installed, you need to secure the installation using the security script, which will set a root password, revoke anonymous access, disable remote root login and remove the test database.

$ sudo mysql_secure_installation

Secure MariaDB Installation in Ubuntu 16.10

14. To give normal system users access to the MariaDB database without using sudo privileges, log in to the MySQL prompt as root and run the commands below:

$ sudo mysql 
MariaDB> use mysql;
MariaDB> update user set plugin='' where User='root';
MariaDB> flush privileges;
MariaDB> exit

To learn more about MariaDB basic usage, you should read our series: MariaDB for Beginners

15. Then restart the MySQL service and try to log in to the database without sudo, as shown.

$ sudo systemctl restart mysql.service
$ mysql -u root -p

16. Optionally, if you want to administer MariaDB from a web browser, install PhpMyAdmin.

$ sudo apt install php-gettext phpmyadmin

During the PhpMyAdmin installation, select the apache2 web server, choose No for configuring phpmyadmin with dbconfig-common, and add a strong password for the web interface.

17. After PhpMyAdmin has been installed, you can access its web interface at the URL below.

https://your_server_IP_address/phpmyadmin/ 

PhpMyAdmin on Ubuntu 16.10

If you want to secure your PhpMyAdmin web interface, go through our article: 4 Useful Tips to Secure PhpMyAdmin Web Interface

That’s all! You now have a complete LAMP stack installed and running on Ubuntu 16.10, enabling you to deploy dynamic websites or applications on your Ubuntu server.

ngxtop – Monitor Nginx Log Files in Real Time in Linux

ngxtop is a free, open-source, simple, flexible, fully configurable and easy-to-use real-time top-like monitoring tool for the Nginx server. It gathers data by parsing the Nginx access log (default location /var/log/nginx/access.log) and displays useful metrics of your Nginx server, helping you keep an eye on your web server in real time. It also allows you to parse Apache logs from a remote server.

How to Install and Use Ngxtop in Linux

To install ngxtop, you first need to install PIP in Linux; once you have pip on your system, you can install ngxtop using the following command.

$ sudo pip install ngxtop

Monitor Nginx Server Requests

Now that you have ngxtop installed, the easiest way to run it is without any arguments. By default this parses /var/log/nginx/access.log and runs in follow mode (watching for new lines as they are written to the access log).

$ sudo ngxtop
Sample Output
running for 411 seconds, 64332 records processed: 156.60 req/sec

Summary:
|   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|---------+------------------+-------+-------+-------+-------|
|   64332 |         2775.251 | 61262 |  2994 |    71 |     5 |

Detailed:
| request_path                             |   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|------------------------------------------+---------+------------------+-------+-------+-------+-------|
| /abc/xyz/xxxx                            |   20946 |          434.693 | 20935 |     0 |    11 |     0 |
| /xxxxx.json                              |    5633 |         1483.723 |  5633 |     0 |     0 |     0 |
| /xxxxx/xxx/xxxxxxxxxxxxx                 |    3629 |         6835.499 |  3626 |     0 |     3 |     0 |
| /xxxxx/xxx/xxxxxxxx                      |    3627 |        15971.885 |  3623 |     0 |     4 |     0 |
| /xxxxx/xxx/xxxxxxx                       |    3624 |         7830.236 |  3621 |     0 |     3 |     0 |
| /static/js/minified/utils.min.js         |    3031 |         1781.155 |  2104 |   927 |     0 |     0 |
| /static/js/minified/xxxxxxx.min.v1.js    |    2889 |         2210.235 |  2068 |   821 |     0 |     0 |
| /static/tracking/js/xxxxxxxx.js          |    2594 |         1325.681 |  1927 |   667 |     0 |     0 |
| /xxxxx/xxx.html                          |    2521 |          573.597 |  2520 |     0 |     1 |     0 |
| /xxxxx/xxxx.json                         |    1840 |          800.542 |  1839 |     0 |     1 |     0 |

To quit, press [Ctrl + C].

Parse Different Access Log

You can parse a different access log, for instance the log for a particular website or web app, using the -l flag as shown.

$ sudo ngxtop -l /var/log/nginx/site1/access.log

List Top Source IPs of Clients

The following command will list the top source IPs of clients accessing the site.

$ sudo ngxtop remote_addr -l  /var/log/nginx/site1/access.log
Sample Output
running for 20 seconds, 3215 records processed: 159.62 req/sec

top remote_addr
| remote_addr     |   count |
|-----------------+---------|
| 118.173.177.161 |      20 |
| 110.78.145.3    |      16 |
| 171.7.153.7     |      16 |
| 180.183.67.155  |      16 |
| 183.89.65.9     |      16 |
| 202.28.182.5    |      16 |
| 1.47.170.12     |      15 |
| 119.46.184.2    |      15 |
| 125.26.135.219  |      15 |
| 125.26.213.203  |      15 |
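A per-IP tally like the one above can also be approximated with standard tools; assuming a combined-format log where the client address is the first field (the sample file below is made up), you can count requests per source IP as follows:

```shell
# Create a tiny sample access log (made-up lines, combined format).
cat > /tmp/site1_access.log <<'EOF'
203.0.113.5 - - [10/Oct/2023:14:00:01 +0000] "GET / HTTP/1.1" 200 612
203.0.113.5 - - [10/Oct/2023:14:00:02 +0000] "GET /a HTTP/1.1" 200 100
198.51.100.7 - - [10/Oct/2023:14:00:03 +0000] "GET /b HTTP/1.1" 200 250
EOF

# Count requests per source IP, most active first.
awk '{ print $1 }' /tmp/site1_access.log | sort | uniq -c | sort -rn
```

ngxtop does the same grouping live, updating the counts as new requests arrive.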

Use Particular Log Format

To use the log format specified in a log_format directive, employ the -f option as shown.

$ sudo ngxtop -f main -l /var/log/nginx/site1/access.log
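The name passed to -f has to match a log_format defined in your nginx configuration; for reference, a typical definition of the commonly used main format (shown here as an illustration, your own definition may differ) looks like this:

```
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
```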

Parse Apache Log From Remote Server

To parse an Apache log file from a remote server in common format, use a command similar to the following (specify your username and the remote server's IP).

$ ssh user@remote_server tail -f /var/log/apache2/access.log | ngxtop -f common
Sample Output
running for 20 seconds, 1068 records processed: 53.01 req/sec

Summary:
|   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|---------+------------------+-------+-------+-------+-------|
|    1068 |        28026.763 |  1029 |    20 |    19 |     0 |

Detailed:
| request_path                             |   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|------------------------------------------+---------+------------------+-------+-------+-------+-------|
| /xxxxxxxxxx                              |     199 |        55150.402 |   199 |     0 |     0 |     0 |
| /xxxxxxxx/xxxxx                          |     167 |        47591.826 |   167 |     0 |     0 |     0 |
| /xxxxxxxxxxxxx/xxxxxx                    |      25 |         7432.200 |    25 |     0 |     0 |     0 |
| /xxxx/xxxxx/x/xxxxxxxxxxxxx/xxxxxxx      |      22 |          698.727 |    22 |     0 |     0 |     0 |
| /xxxx/xxxxx/x/xxxxxxxxxxxxx/xxxxxx       |      19 |         7431.632 |    19 |     0 |     0 |     0 |
| /xxxxx/xxxxx/                            |      18 |         7840.889 |    18 |     0 |     0 |     0 |
| /xxxxxxxx/xxxxxxxxxxxxxxxxx              |      15 |         7356.000 |    15 |     0 |     0 |     0 |
| /xxxxxxxxxxx/xxxxxxxx                    |      15 |         9978.800 |    15 |     0 |     0 |     0 |
| /xxxxx/                                  |      14 |            0.000 |     0 |    14 |     0 |     0 |
| /xxxxxxxxxx/xxxxxxxx/xxxxx               |      13 |        20530.154 |    13 |     0 |     0 |     0 |

For more usage options, view the ngxtop help message using the following command.

$ ngxtop -h  

ngxtop Github repository: https://github.com/lebinh/ngxtop

That’s it for now! In this article, we have explained how to install and use ngxtop on Linux systems. If you have any questions, or extra thoughts to add to this guide, use the comment form below. In addition, if you have come across any similar tools, let us know as well and we will be grateful.


Sysdig – A Powerful System Monitoring and Troubleshooting Tool for Linux

Sysdig is an open-source, cross-platform, powerful and flexible system monitoring and troubleshooting tool for Linux. It also works on Windows and Mac OSX, but with limited functionality, and can be used for system analysis, inspection and debugging.

Normally, you would employ a mix of various Linux performance monitoring and troubleshooting tools, including the ones listed below, to perform monitoring and debugging tasks:

  1. strace – discover system calls and signals to a process.
  2. tcpdump – raw network traffic monitoring.
  3. netstat – network connections monitoring.
  4. htop – real time process monitoring.
  5. iftop – real time network bandwidth monitoring.
  6. lsof – view which files are opened by which process.

However, sysdig integrates what all the above tools and many more offer in a single, simple program, with amazing container support on top. It enables you to capture, save, filter and examine the real behavior (stream of events) of Linux systems as well as containers.

It comes with a command line interface and a powerful interactive UI (csysdig) which allows you to watch system activity in real time, or perform a trace dump and save it for later analysis.

Sysdig Features:

  • It is fast, stable, easy to use and comprehensively documented.
  • Comes with native support for container technologies, including Docker and LXC.
  • It is scriptable in Lua; offers chisels (lightweight Lua scripts) for processing captured system events.
  • Supports useful filtering of output.
  • Supports system and application tracing.
  • It can be integrated with Ansible, Puppet and Logstash.
  • Enables advanced log analysis.
  • It also offers Linux server attack (forensics) analysis features for ethical hackers, and lots more.

In this article, we will show how to install sysdig on a Linux system, and use it with basic examples of system analysis, monitoring and troubleshooting.

How To Install Sysdig in Linux

Installing the sysdig package is as easy as running the command below, which will check all the requirements; if everything is in place, it will download and install the package from the Draios APT/YUM repository.

# curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | bash 
OR
$ curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

After installing it, you need to run sysdig as root because it requires access to critical areas such as the /proc filesystem and the /dev/sysdig* devices, and needs to auto-load the sysdig-probe kernel module (in case it is not loaded already); otherwise, use the sudo command.

The most basic example is running it without any arguments; this will enable you to view your Linux system's stream of events, updated in real time:

$ sudo sysdig

Watch Linux System Events

The above output (raw data) perhaps does not make a lot of sense to you; for a more useful output, run csysdig:

$ sudo csysdig 

Monitor Linux System Events

Note: To get the real feel of this tool, you need to use sysdig itself, which produces raw data from a running Linux system as we saw before; this requires you to understand how to use filters and chisels.

But if you want a painless way of using sysdig, continue with csysdig.

Understanding Sysdig Chisels and Filters

Sysdig chisels are minimal Lua scripts for examining the sysdig event stream to carry out useful system troubleshooting actions and more. The command below will help you view all available chisels:

$ sudo sysdig -cl

The screenshot shows a sample list of chisels under different categories.

View Sysdig Chisels

If you want to find out more information about a particular chisel, use the -i flag:

$ sudo sysdig -i topprocs_cpu

View Sysdig Chisel Info

Sysdig filters add more power to the kind of output you can obtain from event streams; they allow you to customize the output, and you should specify them at the end of a command line.

The simplest and most common filter is a basic “class.field=value” check; you can also combine chisels with filters for even more powerful customizations.

To view a list of available field classes, fields and their descriptions, type:

$ sudo sysdig -l

View Sysdig Field Classes

Creating Linux System Trace File

To dump sysdig output into a file for later analysis, use the -w flag. The -s option specifies the number of bytes of data to capture for each system event:

$ sudo sysdig -s 3000 -w trace.scap

You can read the trace dump file back using the -r flag, optionally appending a filter; in this example, we filter events for the mongod process:

$ sudo sysdig -r trace.scap
$ sudo sysdig -r trace.scap proc.name=mongod

Create MongoDB Trace File

Monitoring Linux Processes

To list system processes, type:

$ sudo sysdig -c ps

Monitor Linux Processes

Monitor Processes by CPU Usage

To watch top processes by CPU usage percentage, run this command:

$ sudo sysdig -c topprocs_cpu

Monitor Processes by CPU Usage

Monitoring Network Connections and I/O

To view system network connections, run:

$ sudo sysdig -c netstat

Monitor Network Connections

The following command will help you list top network connections by total bytes:

$ sudo sysdig -c topconns

Next, you can also list top processes by network I/O as follows:

$ sudo sysdig -c topprocs_net    

Monitoring System File I/O

You can output the data read and written by processes on the system as below:

$ sudo sysdig -c echo_fds

Monitor System IO

To list top processes by (read + write) disk bytes, use:

$ sudo sysdig -c topprocs_file   

Troubleshooting Linux System Performance

To keep an eye on system bottlenecks (slow system calls), execute this command:

$ sudo sysdig -c bottlenecks

Troubleshoot Linux Performance

Track Execution Time of a Process

To track the execution time of a process, you can run this command and dump the trace in a file:

$ sudo sysdig -w extime.scap -c proc_exec_time 

Track Process Execution Time

Then use a filter to zero in on the details of a particular process (postgres in this example) as follows:

$ sudo sysdig -r extime.scap proc.name=postgres

Discover Slow Network I/O

This simple command will help you detect slow network I/O:

$ sudo sysdig -c netlower     

Watching Log File Entries

The command below helps you display every message written to syslog. If you are interested in log entries for a specific process, create a trace dump and filter it accordingly as shown before:

$ sudo sysdig -c spy_syslog      

You can print any data written by any process to a log file as follows:

$ sudo sysdig -c spy_logs   

Monitoring HTTP Server Requests

If you have an HTTP server such as Apache or Nginx running on your system, you can look through the server's request log with this command:

$ sudo sysdig -c httplog    
$ sudo sysdig -c httptop   [Print Top HTTP Requests] 

Monitor HTTP Requests

Display Login Shells and Interactive User Activity

The command below will enable you to view all the login shell IDs:

$ sudo sysdig -c list_login_shells

Last but not least, you can show interactive activity of system users like so:

$ sudo sysdig -c spy_users

Monitor User Activity

For more usage information and examples, read the sysdig and csysdig man pages:

$ man sysdig 
$ man csysdig

Reference: https://www.sysdig.org/

Also check these useful Linux performance monitoring tools:

    1. BCC – Dynamic Tracing Tools for Linux Performance Monitoring, Networking and More
    2. pyDash – A Web Based Linux Performance Monitoring Tool
    3. Perf – A Performance Monitoring and Analysis Tool for Linux
    4. Collectl: An Advanced All-in-One Performance Monitoring Tool for Linux
    5. Netdata – A Real-Time Performance Monitoring Tool for Linux Systems
Conclusion

Sysdig brings together functionalities from numerous command line tools into one remarkable interface, allowing you to dig deep into your Linux system events to gather data and save it for later analysis, and it offers incredible container support.

To ask any questions or share any thoughts about this tool, use the feedback form below.


16 Useful Bandwidth Monitoring Tools to Analyze Network Usage in Linux

Are you having problems monitoring your Linux network bandwidth usage? Do you need help? It’s important that you are able to visualize what is happening in your network in order to understand and resolve whatever is causing network slowness or simply to keep an eye on your network.

Read Also: 20 Command Line Tools to Monitor Linux Performance

In this article, we will review 16 useful bandwidth monitoring tools to analyze network usage on a Linux system.

If you are looking to manage, troubleshoot or debug your Network, then read our article – A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

The tools listed below are all open source and can help you answer questions such as “why is the network so slow today?”. This article includes a mix of small tools for monitoring bandwidth on a single Linux machine and complete monitoring solutions capable of handling anything from a few hosts on a LAN (Local Area Network) to multiple hosts across a WAN (Wide Area Network).

1. vnStat – A Network Traffic Monitor

VnStat is a fully-featured, command line-based program to monitor Linux network traffic and bandwidth utilization in real-time, on Linux and BSD systems.

Vnstat Network Traffic Monitor Tool

One advantage it has over similar tools is that it logs network traffic and bandwidth usage statistics for later analysis – this is its default behavior. You can view these logs even after system reboots.

Install VnStat in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install vnstat

# apt install vnstat   [On Debian/Ubuntu]

2. iftop – Displays Bandwidth Usage

iftop is a simple, easy to use, real-time top-like command line based network bandwidth monitoring tool, used to get a quick overview of network activities on an interface. It displays bandwidth usage, with updates averaged over 2, 10 and 40 second intervals.

Iftop Display Bandwidth Usage

Install iftop in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install iftop

# apt install iftop   [On Debian/Ubuntu]

3. nload – Displays Network Usage

nload is another simple, easy to use command-line tool for monitoring network traffic and bandwidth usage in real time. It uses graphs to help you monitor inbound and outbound traffic. In addition, it also displays information such as the total amount of transferred data and min/max network usage.

nload – Monitor Network Usage

Install nload in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install nload

# apt install nload   [On Debian/Ubuntu]

4. NetHogs – Monitor Network Traffic Bandwidth

NetHogs is a tiny top-like, text-based tool to monitor real time network traffic bandwidth usage by each process or application running on a Linux system. It simply offers real time statistics of your network bandwidth usage on a per-process basis.

NetHogs – Monitor Network Usage Per User

Install NetHogs in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install nethogs

# apt install nethogs       [On Debian/Ubuntu]

5. bmon – Bandwidth Monitor and Rate Estimator

bmon is another straightforward command line tool for monitoring network bandwidth utilization and estimating rates in Linux. It captures network statistics and visualizes them in a human friendly format so that you can keep an eye on your system.

Bmon – Bandwidth Monitor and Rate Estimator

Install Bmon in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install bmon

# apt install bmon          [On Debian/Ubuntu]

6. Darkstat – Captures Network Traffic

Darkstat is a small, simple, cross-platform, real-time, efficient web-based network traffic analyzer. It is a network statistics monitoring tool that works by capturing network traffic, computing usage statistics, and serving the reports over HTTP in a graphical format. You can also use it via the command line to get the same results.

Darkstat – Captures Network Traffic

Install Darkstat in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install darkstat

# apt install darkstat      [On Debian/Ubuntu]

7. IPTraf – An IP Network Monitor

IPTraf is an easy to use, ncurses-based and configurable tool for monitoring incoming and outgoing network traffic passing through an interface. It is useful for IP traffic monitoring, and viewing general interface statistics, detailed interface statistics and so much more.

IPTraf – Network Statistics Utility

Install IPTraf in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install iptraf

# apt install iptraf        [On Debian/Ubuntu]

8. CBM – (Color Bandwidth Meter)

CBM is a tiny command line utility for displaying current network traffic on all connected devices in colored output in Ubuntu Linux and its derivatives such as Linux Mint, Lubuntu and many others. It shows each connected network interface, bytes received, bytes transmitted and total bytes, allowing you to monitor network bandwidth.

CBM – Monitor Network LAN Usage

Install Color Bandwidth Meter in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install cbm

# apt install cbm           [On Debian/Ubuntu]
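Utilities like cbm ultimately read the kernel's per-interface counters exposed under /sys/class/net; as a minimal sketch (Linux-only, interface names and numbers will vary per system), you can inspect the same raw byte counts directly:

```shell
# Print cumulative received/transmitted bytes for each network interface,
# straight from the kernel's statistics files.
for dev in /sys/class/net/*; do
  printf '%s RX=%s TX=%s bytes\n' "$(basename "$dev")" \
    "$(cat "$dev/statistics/rx_bytes")" "$(cat "$dev/statistics/tx_bytes")"
done
```

Tools such as cbm and vnstat sample these counters periodically and display the difference as a rate.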

9. Iperf/Iperf3 – Network Bandwidth Measurement Tool

Iperf/Iperf3 is a powerful tool for measuring network throughput over protocols such as TCP, UDP and SCTP. It is primarily built to help in tuning TCP connections over a particular path, thus useful for testing and monitoring the maximum achievable bandwidth on IP networks (supports both IPv4 and IPv6). It requires a server and a client to perform tests (which reports the bandwidth, loss, and other useful network performance parameters).

Iperf3 – Network Performance and Tuning

Install Iperf3 in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install iperf3

# apt install iperf3        [On Debian/Ubuntu]

10. Netperf – Network Bandwidth Testing

Netperf is similar to iperf, used for testing network performance. It can help in monitoring network bandwidth in Linux by measuring data transfer using either TCP or UDP. It also supports measurements via the Berkeley Sockets interface, DLPI, Unix Domain Sockets and many other interfaces. You need a server and a client to run tests.

Netperf – Network Bandwidth Testing

For installation instructions, check out the project's GitHub page.

11. SARG – Squid Analysis Report Generator

SARG is a Squid log file analyzer and internet bandwidth monitoring tool. It produces useful HTML reports with information including, but not limited to, IP addresses and total bandwidth usage. It is a handy tool for monitoring internet bandwidth utilization by individual machines on a single network.

Sarg – Squid Network Analysis Report Generator

For installation instructions and usage, check out our article – How to Install SARG to Monitor Squid Internet Bandwidth Usage.

12. Monitorix – System and Network Monitoring Tool

Monitorix is a lightweight system resources and network monitoring application, designed for small Linux/Unix servers, which also comes with amazing support for embedded devices.

It helps you monitor network traffic and usage statistics from an unlimited number of network devices. It supports IPv4 and IPv6 connections, includes packet traffic and traffic error graphs, and supports up to 9 qdiscs per network interface.

Monitorix – System and Network Monitoring Tool

Install Monitorix in Linux

# yum install epel-release  [On RHEL/CentOS]
# yum install monitorix

# apt install monitorix     [On Debian/Ubuntu]

13. Cacti – Network Monitoring and Graphing Tool

Cacti is a fully functional, web-based network graphing PHP application with an intuitive, easy to use interface. It uses a MySQL database to store collected network performance data, which is used to produce customized graphs. It is a frontend to RRDTool, useful for monitoring small to complex networks with thousands of devices.

Cacti – Network Monitoring and Graphing Tool

For installation instructions and usage, check out our article – How to Install Cacti – A Network Monitoring and Graphing Tool.

14. Observium – Network Monitoring Platform

Observium is a fully-featured network monitoring platform with an elegant, powerful, robust yet simple and intuitive interface. It supports a number of platforms including Linux, Windows, FreeBSD, Cisco, HP, Dell and many others, and includes auto-detection of devices. It helps users gather network metrics and offers intuitive graphing of device metrics from collected performance data.

Observium – Network Monitoring Platform

For installation instructions and usage, check out our article – How to Install Observium – A Complete Network Management and Monitoring System.

15. Zabbix – Application and Network Monitoring Tool

Zabbix is a feature-rich, commonly used network monitoring platform, designed in a server-client model, to monitor networks, servers and applications in real time. It collects different types of data that are used for visual representation of network performance or load metrics of the monitored devices.

It is capable of working with well known networking protocols such as HTTP, FTP, SMTP, IMAP and many more, without the need to install additional software on the monitored devices.

Zabbix – Monitoring Solution for Linux

For installation instructions and usage, check out our article – How to Install Zabbix – A Complete Network Monitoring Solution for Linux.

16. Nagios – Monitors Systems, Networks and Infrastructure

Nagios is a robust, powerful, feature-rich and widely used monitoring software. It allows you to monitor local and remote network devices and their services from a single window.

It offers bandwidth monitoring of network devices such as switches and routers via SNMP, thus enabling you to easily find over-utilized ports and pinpoint possible network abusers.

Read Also: 13 Linux Network Configuration and Troubleshooting Commands

In addition, Nagios also helps you to keep an eye on per-port bandwidth utilization and errors, and supports fast detection of network outages and protocol failures.

Nagios – IT Infrastructure Monitoring Tool

For installation instructions and usage, check out our article – How to Install Nagios – A Complete IT Infrastructure Monitoring Solution for Linux.

Summary

In this article, we have reviewed a number of useful network bandwidth and system monitoring tools for Linux. If we have missed any monitoring tool in this list, do share with us in the comment form below.


Psensor – A Graphical Hardware Temperature Monitoring Tool for Linux

Psensor is a GTK+ (Widget Toolkit for creating Graphical User Interfaces) based application. It is one of the simplest applications to monitor hardware temperature and plot a real-time graph from the obtained data for quick review.

Psensor – Linux Hardware Temperature Monitoring

Features of Psensor

  1. Shows the temperature of the motherboard, CPU, GPU (Nvidia) and hard disk drives.
  2. Shows the CPU fan speed.
  3. Psensor is capable of showing the temperature and fan speed of a remote server.
  4. Shows CPU usage as well.
  5. In fact, Psensor will detect any supported hardware and report its temperature as text and on a graph, automatically.
  6. All the temperatures are plotted in one graph.
  7. Alarms and alerts ensure you don't miss critical system hardware temperature and fan speed issues.
  8. Easy to configure. Easy to use.

Dependencies

  1. lm-sensors and hddtemp : Psensor depends upon these two packages to get reports about temperature and fan speed.
  2. psensor-server : An optional package, required if you want to gather temperature and fan speed information from a remote server.

Installation of Psensor in Linux

1. As mentioned above, Psensor depends on the lm-sensors and hddtemp packages, and these two packages must be installed on the system in order to install Psensor.

Both packages are available in the official repositories of most standard Linux distributions, but on RedHat/CentOS based systems, you need to install and enable the epel-release repository to get them.

On Debian, Ubuntu and Mint

# apt-get install lm-sensors hddtemp

On RedHat, CentOS and Fedora

# yum install epel-release 
# yum install lm_sensors lm_sensors-devel hddtemp

Note: If you are using Fedora 22, replace yum with dnf in above command.

2. Once these two dependencies are installed on the system, you can install Psensor on Debian-like systems using the following command.

# apt-get install psensor

Unfortunately, on RedHat-like systems, Psensor isn't available from the default system repositories, and you need to compile it from source as shown below.

# yum install gcc gtk3-devel GConf2-devel cppcheck libatasmart-devel libcurl-devel json-c-devel libmicrohttpd-devel help2man libnotify-devel libgtop2-devel make 

Next, download the most recent stable Psensor source tarball (i.e. version 1.1.3) and compile it using the following commands.

# wget http://wpitchoune.net/psensor/files/psensor-1.1.3.tar.gz 
# tar zxvf psensor-1.1.3.tar.gz 
# cd psensor-1.1.3/ 
# ./configure 
# make 
# make install

3. Install Psensor Server (optional). It is required only if you want to see the temperature and fan speed of a remote server.

# apt-get install psensor-server

Note: The Psensor Server package is only available on Debian-like systems; there aren't any binary or source packages available for RedHat systems.

Testing and Usage of Psensor

4. This is an optional but recommended step. Run sensors-detect as root to detect the hardware sensors on your system. Accept the default option 'Yes' each time, unless you know what you are doing.

# sensors-detect

Detect Sensors

5. Again, an optional but recommended step. Run sensors as root to display the temperature of various hardware devices. All this data will be used by Psensor.

# sensors

Check Temperature Hardware

6. Run Psensor from the desktop application menu to get the graphical view.

Temperature Hardware Monitoring

Check-mark all the sensors to plot the graph. You may notice the color codes.

Plot Graphs of Hardware Temperature

Customize Psensor

7. Go to Menu Psensor → Preferences → Interface. Here you will find options for interface-related customization, the temperature unit and the sensor table position.

Psensor Interface Customization

8. Under Menu Psensor → Preferences → Startup, you can configure launch/hide at startup and restore window position and size.

Control Psensor

9. Under the Graph tab (Psensor → Preferences → Graph), you may configure the foreground/background color, monitoring duration, update interval, etc.

Psensor Graph Customization

10. You may configure Sensors Settings under (Psensor → Preferences → Sensors).

Sensors Settings

11. The last tab (Psensor → Preferences → Providers) provides you with Enable/Disable configuration for all the sensors.

Psensor Configuration Control

You may do sensor Preferences under (Psensor → Sensor Preferences).

Give Sensor Name

Give Sensor Color

Set Sensor Threshold

Enable Sensor Indicator

Conclusion

Psensor is a very useful tool which lets you see a gray area of system monitoring that is often overlooked: hardware temperature monitoring. Overheating hardware may damage that particular component, other hardware around it, or even crash the whole system.

And that is not just from a financial perspective: think of the value of the data that might be lost, and the cost and time it would take to rebuild the system. Hence it is always a good idea to have a tool like Psensor at hand to avoid any such risk.

Installation on Debian-like systems is pretty simple. For CentOS and similar systems, installation is a bit trickier.


How to Create a Centralized Log Server with Rsyslog in CentOS/RHEL 7

In order to identify or troubleshoot a problem on a CentOS 7 or RHEL 7 server, a system administrator must be able to view the events that happened on the system in a specific period of time, from the log files stored on the system in the /var/log directory.

A syslog server on a Linux machine can act as a central monitoring point on a network, to which all servers, network devices, routers, switches and most of their internal services that generate logs, whether related to a specific internal issue or just informative messages, can send their logs.

On a CentOS/RHEL 7 system, the Rsyslog daemon is the main preinstalled log server, followed by the Systemd Journal Daemon (journald).

The Rsyslog server is built as a client/server service and can play both roles simultaneously. It can run as a server and collect all logs transmitted by other devices on the network, or it can run as a client, sending all internal system events to a remote syslog server.

When rsyslog is configured as a client, the logs can be stored locally in files on the local filesystem, sent to a remote syslog server instead of being written locally, or both: written to local files and sent to a remote syslog server at the same time.

A syslog server processes each log message using the following scheme:

type (facility).priority (severity)   destination (where to send the log)

A. The facility or type of data is represented by the internal system process that generates the message. In Linux, the internal processes (facilities) that generate logs are standardized as follows:

  • auth = messages generated by authentication processes (login).
  • cron = messages generated by scheduled processes (crontab).
  • daemon = messages generated by daemons (internal services).
  • kernel = messages generated by the Linux Kernel itself.
  • mail = messages generated by a mail server.
  • syslog = messages generated by the rsyslog daemon itself.
  • lpr = messages generated by local printers or a print server.
  • local0 – local7 = custom messages defined by an administrator (local7 is usually assigned for Cisco or Windows).

B. The priority (severity) levels are also standardized. Each priority is assigned a standard abbreviation and a number, as described below; 0 (emerg) is the highest severity and 7 (debug) is the lowest.

  • emerg = Emergency – 0
  • alert = Alerts – 1
  • crit = Critical – 2
  • err = Errors – 3
  • warn = Warnings – 4
  • notice = Notification – 5
  • info = Information – 6
  • debug = Debugging – 7

Special Rsyslog keywords:

  • * = all facilities or priorities
  • none = excludes a facility from a rule, e.g. mail.none

C. The third part of the syslog schema is represented by the destination directive. The Rsyslog daemon can write a log message to a file on the local filesystem (mostly a file in the /var/log/ directory), pipe it to another local process, send it to a local user console (stdout), forward it to a remote syslog server via the TCP/UDP protocol, or even discard it to /dev/null.
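Putting the three parts together, a few illustrative selector lines follow; the destinations shown are conventional examples (the file paths and the 192.168.1.10 address are placeholders to adapt to your own setup):

```
# facility.priority    destination
authpriv.*             /var/log/secure        # authentication messages to a local file
mail.info              /var/log/maillog       # mail messages of priority info and above
*.info;mail.none       /var/log/messages      # everything at info and above, excluding mail
cron.*                 @192.168.1.10:514      # forward cron logs to a remote server over UDP
kern.crit              @@192.168.1.10:514     # forward critical kernel messages over TCP
*.emerg                :omusrmsg:*            # emergencies to all logged-in users
```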

In order to configure CentOS/RHEL 7 as a central log server, first check and ensure that the /var partition, where all the log files are recorded, is large enough (a few GB minimum) to store all the log files that will be sent by other devices. It's a good decision to use a separate drive (LVM, RAID) to mount the /var/log/ directory.

Requirements

  1. CentOS 7.3 Installation Procedure
  2. RHEL 7.3 Installation Procedure

How to Configure Rsyslog in CentOS/RHEL 7 Server

1. By default, Rsyslog service is automatically installed and should be running in CentOS/RHEL 7. In order to check if the daemon is started in the system, issue the following command with root privileges.

# systemctl status rsyslog.service

Check Rsyslog Service


If the service is not running by default, execute the below command in order to start rsyslog daemon.

# systemctl start rsyslog.service

2. If the rsyslog package is not installed on the system that you intend to use as a centralized logging server, issue the following command to install the rsyslog package.

# yum install rsyslog

3. The first step in configuring the rsyslog daemon as a centralized log server, so that it can receive log messages from external clients, is to open and edit its main configuration file, /etc/rsyslog.conf, using your favorite text editor, as presented below.

# vi /etc/rsyslog.conf

In the rsyslog main configuration file, search for and uncomment the following lines (remove the # sign at the beginning of each line) in order to enable UDP reception on port 514. UDP is the standard protocol used for log transmission by Rsyslog.

$ModLoad imudp 
$UDPServerRun 514
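
If you manage several servers, the same edit can be scripted. The snippet below is a hedged sketch that assumes the stock commented directives (#$ModLoad imudp and #$UDPServerRun 514) are present in the file; it is shown operating on a scratch copy rather than on /etc/rsyslog.conf directly:

```shell
#!/bin/sh
# Sketch: uncomment the two UDP reception directives so rsyslog
# listens on 514/UDP. Assumes the stock CentOS 7 comment style.
enable_udp_reception() {
    # $1 = path to an rsyslog configuration file
    sed -i -e 's/^#\(\$ModLoad imudp\)/\1/' \
           -e 's/^#\(\$UDPServerRun 514\)/\1/' "$1"
}

# Try it on a scratch copy before touching /etc/rsyslog.conf:
tmp=$(mktemp)
printf '#$ModLoad imudp\n#$UDPServerRun 514\n' > "$tmp"
enable_udp_reception "$tmp"
cat "$tmp"   # both directives are now uncommented
rm -f "$tmp"
```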

Configure Rsyslog Server


4. The UDP protocol does not have the TCP overhead, which makes it faster at transmitting data than the TCP protocol. On the other hand, the UDP protocol does not guarantee reliable delivery of the transmitted data.

However, if you need to use the TCP protocol for log reception, you must search for and uncomment the following lines in the /etc/rsyslog.conf file in order to configure the Rsyslog daemon to bind and listen on a TCP socket on port 514. TCP and UDP listening sockets for reception can be configured on a Rsyslog server simultaneously.

$ModLoad imtcp 
$InputTCPServerRun 514 

5. In the next step, with the file still open, create a new template that will be used for received remote messages. This template instructs the local Rsyslog server where to save the messages sent by syslog network clients. The template must be added before the beginning of the GLOBAL DIRECTIVES block, as illustrated in the excerpt below.

$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log" 
*.*  ?RemoteLogs
& ~

Create Rsyslog Template


The above $template RemoteLogs directive instructs the Rsyslog daemon to collect and write all received log messages to distinct files, based on the client machine name and the remote client facility (application) that generated the messages, as defined by the properties present in the template: %HOSTNAME% and %PROGRAMNAME%.

All these log files will be written to the local filesystem, each file named after the client machine’s hostname and stored in the /var/log/ directory.

The & ~ redirect rule instructs the local Rsyslog server to stop processing the received log messages further, so they are not also written to the server’s own local log files.

The RemoteLogs name is an arbitrary name given to this template directive. You can use whatever name you can find best suited for your template.
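
To make the expansion concrete, here is a small illustration (plain shell, not rsyslog code) of the file path the RemoteLogs template would produce for one message; the hostname and program name are made-up example values:

```shell
#!/bin/sh
# Illustration only: rsyslog fills in %HOSTNAME% and %PROGRAMNAME% per
# message; the values below are hypothetical examples.
HOSTNAME="webserver01"
PROGRAMNAME="sshd"
echo "/var/log/${HOSTNAME}/${PROGRAMNAME}.log"
# -> /var/log/webserver01/sshd.log
```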

In order to write all received messages from clients in a single log file named after the IP Address of the remote client, without filtering the facility that generated the message, use the below excerpt.

$template FromIp,"/var/log/%FROMHOST-IP%.log" 
*.*  ?FromIp
& ~

Another example: a template named “TmplAuth“, used to log all messages arriving with the authpriv facility.

$template TmplAuth,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log" 
authpriv.*   ?TmplAuth

Below is an excerpt from a template definition on an Rsyslog 7 server:

template(name="TmplMsg" type="string"
         string="/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
        )

The above template excerpt can also be written as:

template(name="TmplMsg" type="list") {
    constant(value="/var/log/remote/msg/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" securepath="replace")
    constant(value=".log")
}

To write complex Rsyslog templates, read the Rsyslog configuration file manual by issuing the man rsyslog.conf command, or consult the Rsyslog online documentation.

6. After you’ve edited the Rsyslog configuration file with your own settings as explained above, restart the Rsyslog daemon in order to apply changes by issuing the following command:

# systemctl restart rsyslog.service

7. By now, the Rsyslog server should be configured to act as a centralized log server and record messages from syslog clients. To verify the Rsyslog network sockets, run the netstat command with root privileges and use grep to filter for the rsyslog string.

# netstat -tulpn | grep rsyslog 

Verify Rsyslog Network Socket


8. If you have SELinux enabled in CentOS/RHEL 7, issue the following commands to configure SELinux to allow rsyslog traffic, depending on the network socket type (if a port is already defined in the policy, modify it with -m instead of adding it with -a).

# semanage port -a -t syslogd_port_t -p udp 514
# semanage port -a -t syslogd_port_t -p tcp 514 

9. If the firewall is enabled and active, run the below command in order to add the necessary rules for opening rsyslog ports in Firewalld.

# firewall-cmd --permanent --add-port=514/tcp
# firewall-cmd --permanent --add-port=514/udp
# firewall-cmd --reload

That’s all! Rsyslog is now configured in server mode and can centralize logs from remote clients. In the next article, we will see how to configure an Rsyslog client on a CentOS/RHEL 7 server.

Using an Rsyslog server as a central monitoring point for remote log messages, you can inspect log files and observe the clients’ health status, or debug a client’s issues more easily when systems crash or are under some kind of attack.


How to Setup Rsyslog Client to Send Logs to Rsyslog Server in CentOS 7

Log management is one of the most critical components of a network infrastructure. Log messages are constantly generated by numerous pieces of system software, such as utilities, applications, daemons, network-related services, the kernel, physical devices and so on.

Log files prove useful for troubleshooting Linux system issues, monitoring the system, and reviewing a system’s security strength and problems.

Rsyslog is an Open Source logging program, which is the most popular logging mechanism in a huge number of Linux distributions. It’s also the default logging service in CentOS 7 or RHEL 7.

The Rsyslog daemon in CentOS can be configured to run as a server in order to collect log messages from multiple network devices. These devices act as clients and are configured to transmit their logs to an rsyslog server.

However, the Rsyslog service can also be configured and started in client mode. This setup instructs the rsyslog daemon to forward log messages to a remote Rsyslog server using the TCP or UDP transport protocols. The Rsyslog service can also run as a client and a server at the same time.

In this tutorial we’ll describe how to set up a CentOS/RHEL 7 Rsyslog daemon to send log messages to a remote Rsyslog server. This setup also ensures that your machine’s disk space is preserved for storing other data.

The place where almost all log files are written by default in CentOS is the /var system path. It’s also advisable to always create a separate partition for the /var directory, one which can be dynamically grown, so as not to exhaust the / (root) partition.

An Rsyslog client sends its log messages in plain text unless specified otherwise. You should therefore not set up an Rsyslog client to transmit log messages over the Internet, or over networks that are not under your complete control.

Requirements

  1. CentOS 7.3 Installation Procedure
  2. RHEL 7.3 Installation Procedure
  3. Configure a Rsyslog Server in CentOS/RHEL 7

Step 1: Verify Rsyslog Installation

1. By default, the Rsyslog daemon is already installed and running in a CentOS 7 system. In order to verify if rsyslog service is present in the system, issue the following commands.

# rpm -qa | grep rsyslog
# rsyslogd -v

Check Rsyslog Installation


2. If the Rsyslog package is not installed in CentOS, execute the below command to install the service.

# yum install rsyslog

Step 2: Configure Rsyslog Service as Client

3. In order to force the Rsyslog daemon installed on a CentOS 7 system to act as a log client and route all locally generated log messages to a remote Rsyslog server, modify the rsyslog configuration file as follows:

First open the main configuration file for editing.

# vi /etc/rsyslog.conf

Then append the following line at the end of the file, as illustrated in the excerpt below.

*.*  @192.168.10.254:514

In the above line, make sure you replace the IP address or FQDN of the remote rsyslog server accordingly. The line instructs the Rsyslog daemon to send all log messages, regardless of facility or severity, to the host with the IP 192.168.10.254, on port 514 via UDP.

Configure Rsyslog Client


4. If the remote log server is configured to listen only on TCP connections or you want to use a reliable transport network protocol, such as TCP, add another @ character in front of the remote host as shown in the below example:

*.*  @@logs.domain.lan:514

Rsyslog also supports some special characters, such as = or !, which can be prefixed to priority levels: the equal sign means “this priority only”, while the exclamation mark means “exclude this priority and the ones more severe than it”.

Some samples of Rsyslog priority level qualifiers in CentOS 7:

  • kern.info = kernel logs with info priority and higher.
  • kern.=info = only kernel messages with info priority.
  • kern.info;kern.!err = only kernel messages with info, notice, and warning priorities.
  • kern.debug;kern.!=warning = all kernel priorities except warning.
  • kern.* = all kernel priorities messages.
  • kern.none = don’t log any related kernel facility messages regardless of the priority.
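
The numeric ordering behind these qualifiers can be sketched in plain shell (this is an illustration of the matching rule, not rsyslog code): a plain facility.priority selector accepts the named priority and everything more severe, i.e. everything with a lower severity number.

```shell
#!/bin/sh
# Illustration: map standard syslog severity names to their numbers
# (lower number = more severe), then apply the selector rule.
severity_num() {
    case "$1" in
        emerg) echo 0 ;; alert) echo 1 ;; crit) echo 2 ;; err) echo 3 ;;
        warning) echo 4 ;; notice) echo 5 ;; info) echo 6 ;; debug) echo 7 ;;
        *) echo 8 ;;   # unknown name: matches nothing
    esac
}

# Does a message with priority $2 match a plain selector priority $1?
selector_matches() {
    [ "$(severity_num "$2")" -le "$(severity_num "$1")" ]
}

selector_matches info warning && echo "kern.info matches a warning message"
selector_matches info debug   || echo "kern.info does not match a debug message"
```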

For instance, assuming you want to send only messages from a specific facility to a remote log server, such as all mail-related messages regardless of the priority level, add the below line to the rsyslog configuration file:

mail.* @192.168.10.254:514 

5. Finally, restart the Rsyslog service so the daemon picks up the new configuration, by running the below command:

# systemctl restart rsyslog.service

6. If for some reason the Rsyslog daemon is not enabled at boot time, issue the below command to enable the service:

# systemctl enable rsyslog.service
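
Before restarting rsyslog, it can be useful to confirm that the forwarding rule actually landed in the configuration file. The helper below is a hypothetical sketch (the function name and the scratch-file demo are illustrative, not part of rsyslog); it matches both the single-@ UDP and double-@ TCP forms:

```shell
#!/bin/sh
# Sketch: check whether an rsyslog config file contains a catch-all
# forwarding rule of the form "*.* @host:port" or "*.* @@host:port".
has_forward_rule() {
    # $1 = path to rsyslog config, $2 = remote host (IP or FQDN)
    grep -qE "^\*\.\*[[:space:]]+@{1,2}$2:[0-9]+" "$1"
}

# Demo on a scratch file instead of the live /etc/rsyslog.conf:
tmp=$(mktemp)
printf '*.* @192.168.10.254:514\n' > "$tmp"
if has_forward_rule "$tmp" "192.168.10.254"; then
    echo "forwarding rule present"
fi
rm -f "$tmp"
```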

Step 3: Send Apache and Nginx Logs to a Remote Log Server

7. The Apache HTTP server can be configured to send log messages to a remote syslog server by adding the following line to its main configuration file, as illustrated in the below example.

# vi /etc/httpd/conf/httpd.conf

On Apache main conf file add the below line.

CustomLog "| /bin/sh -c '/usr/bin/tee -a /var/log/httpd/httpd-access.log | /usr/bin/logger -thttpd -plocal1.notice'" combined

This line forces the HTTP daemon to write log messages to its own log file on the filesystem, while also piping them to the logger utility, which sends them to the remote syslog server marked as coming from the local1 facility.

8. If you want to also direct Apache error log messages to a remote syslog server, add a new rule like the one presented in the above example, but make sure to replace the name of the httpd log file and change the severity level to match the error priority, as shown in the following sample:

ErrorLog "|/bin/sh -c '/usr/bin/tee -a /var/log/httpd/httpd-error.log | /usr/bin/logger -thttpd -plocal1.err'"

9. Once you’ve added the above lines, you need to restart Apache daemon to apply changes, by issuing the following command:

# systemctl restart httpd.service                 

10. As of version 1.7.1, the Nginx web server has built-in capability to log its messages directly to a remote syslog server, by adding the following lines to an nginx configuration file.

error_log syslog:server=192.168.1.10:514,facility=local7,tag=nginx,severity=error;
access_log syslog:server=192.168.10.254:514,facility=local7,tag=nginx,severity=info main;

For an IPv6 server, use the following syntax format to enclose the IPv6 address.

access_log syslog:server=[7101:dc7::9]:514,facility=local7,tag=nginx,severity=info;

11. On the remote Rsyslog server, make the following change to the rsyslog configuration file in order to store the logs sent by the Apache web server. The rule below writes all messages arriving with the local1 facility to a dedicated file (the file name is an example; adjust it to your layout).

local1.* /var/log/httpd-remote.log

That’s all! You have successfully configured the Rsyslog daemon to run in client mode, and you have also instructed the Apache HTTP server or Nginx to forward their log messages to a remote syslog server.

In case your system crashes, you should be able to investigate the problem by inspecting the log files stored on the remote syslog server.

