How to Set Up a High-Availability Load Balancer with ‘HAProxy’ to Control Web Server Traffic

HAProxy stands for High Availability Proxy. It is a free and open-source application written in the C programming language. HAProxy is used as a TCP/HTTP load balancer and for proxy solutions. Its most common use is to distribute the workload across multiple servers (e.g., web servers, database servers), thus improving the overall performance and reliability of the server environment.

This highly efficient and fast application is used by many of the world’s most reputed organizations, including but not limited to Twitter, Reddit, GitHub and Amazon. It is available for the Linux, BSD, Solaris and AIX platforms.

Install HAProxy Load Balancer in Linux

In this tutorial, we will discuss the process of setting up a high-availability load balancer using HAProxy to control the traffic of HTTP-based applications (web servers) by distributing requests across multiple servers.

For this article, we’re using the most recent stable release of HAProxy, version 1.5.10, released on December 31st, 2014. We’re also using CentOS 6.5 for this setup, but the instructions given below also work on CentOS/RHEL/Fedora and Ubuntu/Debian distributions.

My Environment Setup

Here, our HAProxy load-balancer server has the hostname websrv.tecmintlocal.com with IP address 192.168.0.125.

HAProxy Server Setup
Operating System	:	CentOS 6.5
IP Address		: 	192.168.0.125
Hostname		: 	websrv.tecmintlocal.com
Client Web Servers Setup

The other four machines are up and running with web servers such as Apache.

Web Server #1 :	CentOS 6.5 [IP: 192.168.0.121] - [hostname: web1srv.tecmintlocal.com]
Web Server #2 :	CentOS 6.5 [IP: 192.168.0.122] - [hostname: web2srv.tecmintlocal.com]
Web Server #3 :	CentOS 6.5 [IP: 192.168.0.123] - [hostname: web3srv.tecmintlocal.com]
Web Server #4 :	CentOS 6.5 [IP: 192.168.0.124] - [hostname: web4srv.tecmintlocal.com]

Step 1: Installing Apache on Client Machines

1. First, we have to install Apache on all four servers and serve a site from them. To install Apache on all four servers, use one of the following commands.

# yum install httpd		[On RedHat based Systems]
# apt-get install apache2	[On Debian based Systems]

2. After installing the Apache web server on all four client machines, you can verify that Apache is running on any of the servers by accessing it via its IP address in a browser.

http://192.168.0.121

Check Apache Status

Step 2: Installing HAProxy Server

3. In most of today’s modern Linux distributions, HAProxy can be easily installed from the default base repository using the default package manager, yum or apt-get.

For example, to install HAProxy on RHEL/CentOS/Fedora and Debian/Ubuntu versions, run the following command. Here I’ve included the openssl package too, because we’re going to set up HAProxy with SSL and non-SSL support.

# yum install haproxy openssl-devel	[On RedHat based Systems]
# apt-get install haproxy		[On Debian based Systems]

Note: On Debian Wheezy 7.0, we need to enable the backports repository by adding a new file backports.list under the “/etc/apt/sources.list.d/” directory with the following content.

# echo "deb http://cdn.debian.net/debian wheezy-backports main" >> /etc/apt/sources.list.d/backports.list

Next, update the repository database and install HAProxy.

# apt-get update
# apt-get install haproxy -t wheezy-backports

Step 3: Configure HAProxy Logs

4. Next, we need to enable the logging feature in HAProxy for future debugging. Open the main HAProxy configuration file ‘/etc/haproxy/haproxy.cfg‘ with the editor of your choice.

# vim /etc/haproxy/haproxy.cfg

Next, follow the distro-specific instructions to configure the logging feature in HAProxy.

On RHEL/CentOS/Fedora

Under #Global settings, enable the following line.

log         127.0.0.1 local2

On Ubuntu/Debian

Under #Global settings, replace the following lines,

log /dev/log        local0
log /dev/log        local1 notice 

With,

log         127.0.0.1 local2

Enable HAProxy Logging

5. Next, we need to enable UDP syslog reception in the ‘/etc/rsyslog.conf‘ configuration file to write separate log files for HAProxy under the /var/log directory. Open your ‘rsyslog.conf‘ file with the editor of your choice.

# vim /etc/rsyslog.conf

Uncomment ModLoad and UDPServerRun. Here, our server will listen on port 514 to collect the logs into syslog.

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

Configure HAProxy Logging

6. Next, we need to create a separate file, ‘haproxy.conf‘, under the ‘/etc/rsyslog.d/‘ directory to configure separate log files.

# vim /etc/rsyslog.d/haproxy.conf

Append the following line to the newly created file.

local2.*	/var/log/haproxy.log

HAProxy Logs

Finally, restart the rsyslog service to update the new changes.

# service rsyslog restart

Step 4: Configuring HAProxy Global Settings

7. Now we need to set default variables for HAProxy in ‘/etc/haproxy/haproxy.cfg‘. The changes need to be made under the defaults section as follows; values such as the timeouts for queue, connect, client and server, and the maximum number of connections, need to be defined here.

I suggest you go through the HAProxy man pages and tweak these values as per your requirements.

#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    20
    timeout queue           86400
    timeout connect         86400
    timeout client          86400
    timeout server          86400
    timeout http-keep-alive 30
    timeout check           20
    maxconn                 50000
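A note on these values: HAProxy timeout values given without a unit suffix are interpreted as milliseconds, so ‘timeout connect 86400’ above means 86.4 seconds rather than a day. When tuning, unit suffixes (s, m, h) make the intent explicit. The fragment below is an illustrative sketch with explicit units, not the article's exact configuration:

```
defaults
    mode                    http
    log                     global
    option                  httplog
    timeout http-request    20s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 30s
    timeout check           20s
    maxconn                 50000
```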

HAProxy Default Settings

8. Then we need to define the front-end and back-end sections for the balancer in the ‘/etc/haproxy/haproxy.cfg‘ configuration file as shown below. Make sure to replace the IP addresses, hostnames and HAProxy login credentials as per your requirements.

frontend LB
   bind 192.168.0.125:80
   reqadd X-Forwarded-Proto:\ http
   default_backend LB

backend LB
   mode http
   stats enable
   stats hide-version
   stats uri /stats
   stats realm Haproxy\ Statistics
   stats auth haproxy:redhat		# Credentials for HAProxy Statistic report page.
   balance roundrobin			# Load balancing will work in round-robin process.
   option httpchk
   option  httpclose
   option forwardfor
   cookie LB insert
   server web1-srv 192.168.0.121:80 cookie web1-srv check		# backend server.
   server web2-srv 192.168.0.122:80 cookie web2-srv check		# backend server.
   server web3-srv 192.168.0.123:80 cookie web3-srv check		# backend server.
   server web4-srv 192.168.0.124:80 check backup			# backup fail-over server; if the three servers above fail, this one is activated.
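By itself, ‘option httpchk’ above makes HAProxy probe each backend with an OPTIONS request to /. If your application exposes a dedicated health-check file, you can point the check at it and tune how often servers are probed. The snippet below is a sketch; the /check.txt URI and the inter/fall values are illustrative, not part of the original setup:

```
   option httpchk GET /check.txt HTTP/1.0
   server web1-srv 192.168.0.121:80 cookie web1-srv check inter 2000 fall 3
```

With ‘inter 2000 fall 3’, a server is marked down after three consecutive failed checks, run every 2 seconds.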

HAProxy Global Configuration

9. After adding the above settings, our load balancer can be accessed at ‘http://192.168.0.125/stats‘ with HTTP authentication, using the login name ‘haproxy‘ and password ‘redhat‘ as defined in the above settings; you can replace them with your own credentials.

10. After you’re done with the configuration, make sure to restart HAProxy and make it persistent at system startup on RedHat-based systems.

# service haproxy restart
# chkconfig haproxy on
# chkconfig --list haproxy

Start HAProxy

Ubuntu/Debian users need to set the “ENABLED” option to “1” in the ‘/etc/default/haproxy‘ file.

ENABLED=1

Step 5: Verify HAProxy Load Balancer

11. Now it’s time to access our load balancer’s URL/IP and verify whether the site loads. Let’s put one HTML file on all four servers: create a file index.html in the document root directory of each of the four web servers and add the following content to it.

<html>
<head>
  <title>Tecmint HAProxy Test Page</title>
</head>

<body>
<!-- Main content -->
<h1>My HAProxy Test Page</h1>

<p>Welcome to the HAProxy test page!

<p>There should be more here, but I don't know
what to write :p.

<address>Made 11 January 2015<br>
  by Babin Lonston.</address>

</body>
</html>

12. After creating the ‘index.html‘ file, try to access the site and see whether you can access the copied HTML file.

http://192.168.0.125/

Verify HAProxy Load Balancer

Site has been successfully accessed.

Step 6: Verify Statistic of Load Balancer

13. To view the HAProxy statistics page, use the following link. When asked for a username and password, provide haproxy/redhat.

http://192.168.0.125/stats
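Behind the prompt, this is plain HTTP basic authentication: the client base64-encodes ‘user:password’ into an Authorization header. A quick sketch of what is sent for the credentials above (curl's -u option does this encoding for you):

```shell
# HTTP basic auth base64-encodes "user:password" into the
# Authorization header; this shows the encoded credentials.
printf 'haproxy:redhat' | base64    # → aGFwcm94eTpyZWRoYXQ=

# A client could then fetch the stats page non-interactively with:
#   curl -u haproxy:redhat http://192.168.0.125/stats
```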

HAProxy Statistics Login

HAProxy Statistics

Step 7: Enabling SSL in HAProxy

14. To enable SSL in HAProxy, you need to install the mod_ssl package in order to create an SSL certificate for HAProxy.

On RHEL/CentOS/Fedora

To install mod_ssl, run the following command.

# yum install mod_ssl -y

On Ubuntu/Debian

By default, SSL support comes standard with the Apache package on Ubuntu/Debian. We just need to enable it.

# a2enmod ssl

After you’ve enabled SSL, restart the Apache server for the change to be recognized.

# service apache2 restart

15. After restarting, navigate to the SSL directory and create the SSL certificate using the following commands.

# cd /etc/ssl/
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/tecmint.key -out /etc/ssl/tecmint.crt
# cat tecmint.crt tecmint.key > tecmint.pem
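If you prefer to create the certificate non-interactively (for example, in a provisioning script), openssl's -subj option supplies the subject fields up front. This is a sketch run from the same directory; the subject values are placeholders to replace with your own details:

```shell
# Generate a self-signed certificate and key without prompts;
# the -subj fields below are placeholder values.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tecmint.key -out tecmint.crt \
    -subj "/C=IN/ST=State/L=City/O=Tecmint/CN=websrv.tecmintlocal.com"

# HAProxy expects the certificate and key concatenated in one PEM file.
cat tecmint.crt tecmint.key > tecmint.pem
```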

Create SSL for HAProxy

SSL Certificate for HAProxy

16. Open the HAProxy configuration file and add the SSL front-end section as shown below.

# vim /etc/haproxy/haproxy.cfg 

Add the following configuration as frontend.

frontend LBS
   bind 192.168.0.125:443 ssl crt /etc/ssl/tecmint.pem
   reqadd X-Forwarded-Proto:\ https
   default_backend LB

17. Next, add the following redirect rule to the backend configuration.

redirect scheme https if !{ ssl_fc }

Enable SSL on HAProxy

18. After making above changes, make sure to restart the haproxy service.

# service haproxy restart

While restarting, if you get the warning below, you can fix it by adding a parameter to the Global section of haproxy.cfg.

SSL HAProxy Error

tune.ssl.default-dh-param 2048
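For reference, tune.ssl.default-dh-param belongs in the global section at the top of haproxy.cfg. A sketch of where it fits (the log line matches the logging setup from Step 3; the maxconn value is illustrative):

```
global
    log         127.0.0.1 local2
    maxconn     50000
    tune.ssl.default-dh-param 2048
```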

19. After restarting, try to access the site at 192.168.0.125; it will now be forwarded to https.

http://192.168.0.125

Verify SSL HAProxy

SSL Enabled HAProxy

20. Next, verify the haproxy.log under ‘/var/log/‘ directory.

# tail -f /var/log/haproxy.log

Check HAProxy Logs

Step 8: Open HAProxy Ports on Firewall

21. Open the ports for the web service and the log reception UDP port using the rules below.

On CentOS/RHEL 6

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i eth0 -p udp --dport 514 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
On CentOS/RHEL 7 and Fedora 21

# firewall-cmd --permanent --zone=public --add-port=514/udp
# firewall-cmd --permanent --zone=public --add-port=80/tcp
# firewall-cmd --permanent --zone=public --add-port=443/tcp
# firewall-cmd --reload
On Debian/Ubuntu

Add the following lines to ‘/etc/iptables.up.rules‘ to enable the ports on the firewall.

-A INPUT -p udp --dport 514 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

Conclusion

In this article, we’ve installed Apache on four servers and shared a website among them to reduce the traffic load. I hope this article helps you set up a load balancer for web servers using HAProxy and make your applications more stable and available.

If you have any questions regarding the article, feel free to post your comments or suggestions; I will be happy to help you out in whatever way I can.


How to Check Which Apache Modules are Enabled/Loaded in Linux

In this guide, we will briefly talk about the Apache web server front-end and how to list or check which Apache modules have been enabled on your server.

Apache is built on the principle of modularity; this enables web server administrators to add different modules to extend its primary functionality and enhance Apache's performance as well.

Suggested Read: 5 Tips to Boost the Performance of Your Apache Web Server

Some of the common Apache modules include:

  1. mod_ssl – which offers HTTPS for Apache.
  2. mod_rewrite – which allows matching URL patterns with regular expressions and performing transparent redirects using .htaccess tricks, or applying an HTTP status code response.
  3. mod_security – which helps you protect Apache against brute-force or DDoS attacks.
  4. mod_status – which allows you to monitor Apache web server load and page statistics.

In Linux, the apachectl or apache2ctl command is used to control the Apache HTTP Server interface; it is a front-end to Apache.

You can display the usage information for apache2ctl as below:

$ apache2ctl help
OR
$ apachectl help
apachectl help
Usage: /usr/sbin/httpd [-D name] [-d directory] [-f file]
                       [-C "directive"] [-c "directive"]
                       [-k start|restart|graceful|graceful-stop|stop]
                       [-v] [-V] [-h] [-l] [-L] [-t] [-S]
Options:
  -D name            : define a name for use in <IfDefine name> directives
  -d directory       : specify an alternate initial ServerRoot
  -f file            : specify an alternate ServerConfigFile
  -C "directive"     : process directive before reading config files
  -c "directive"     : process directive after reading config files
  -e level           : show startup errors of level (see LogLevel)
  -E file            : log startup errors to file
  -v                 : show version number
  -V                 : show compile settings
  -h                 : list available command line options (this page)
  -l                 : list compiled in modules
  -L                 : list available configuration directives
  -t -D DUMP_VHOSTS  : show parsed settings (currently only vhost settings)
  -S                 : a synonym for -t -D DUMP_VHOSTS
  -t -D DUMP_MODULES : show all loaded modules 
  -M                 : a synonym for -t -D DUMP_MODULES
  -t                 : run syntax check for config files

apache2ctl can function in two possible modes, a SysV init mode and a pass-through mode. In the SysV init mode, apache2ctl takes simple, one-word commands in the form below:

$ apachectl command
OR
$ apache2ctl command

For instance, to start Apache and check its status, run these two commands with root user privileges, employing the sudo command in case you are a normal user:

$ sudo apache2ctl start
$ sudo apache2ctl status
Check Apache Status
tecmint@TecMint ~ $ sudo apache2ctl start
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
httpd (pid 1456) already running
tecmint@TecMint ~ $ sudo apache2ctl status
Apache Server Status for localhost (via 127.0.0.1)

Server Version: Apache/2.4.18 (Ubuntu)
Server MPM: prefork
Server Built: 2016-07-14T12:32:26

-------------------------------------------------------------------------------

Current Time: Tuesday, 15-Nov-2016 11:47:28 IST
Restart Time: Tuesday, 15-Nov-2016 10:21:46 IST
Parent Server Config. Generation: 2
Parent Server MPM Generation: 1
Server uptime: 1 hour 25 minutes 41 seconds
Server load: 0.97 0.94 0.77
Total accesses: 2 - Total Traffic: 3 kB
CPU Usage: u0 s0 cu0 cs0
.000389 requests/sec - 0 B/second - 1536 B/request
1 requests currently being processed, 4 idle workers

__W__...........................................................
................................................................
......................

Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process

And when operating in pass-through mode, apache2ctl can take all the Apache arguments in the following syntax:

$ apachectl [apache-argument]
$ apache2ctl [apache-argument]

All the Apache-arguments can be listed as follows:

$ apache2 help    [On Debian based systems]
$ httpd help      [On RHEL based systems]

Check Enabled Apache Modules

Therefore, in order to check which modules are enabled on your Apache web server, run the applicable command below for your distribution, where -t -D DUMP_MODULES is an Apache argument to show all enabled/loaded modules:

---------------  On Debian based systems --------------- 
$ apache2ctl -t -D DUMP_MODULES   
OR 
$ apache2ctl -M
---------------  On RHEL based systems --------------- 
$ apachectl -t -D DUMP_MODULES   
OR 
$ httpd -M
List Apache Enabled Loaded Modules
[root@tecmint httpd]# apachectl -M
Loaded Modules:
 core_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 auth_basic_module (shared)
 auth_digest_module (shared)
 authn_file_module (shared)
 authn_alias_module (shared)
 authn_anon_module (shared)
 authn_dbm_module (shared)
 authn_default_module (shared)
 authz_host_module (shared)
 authz_user_module (shared)
 authz_owner_module (shared)
 authz_groupfile_module (shared)
 authz_dbm_module (shared)
 authz_default_module (shared)
 ldap_module (shared)
 authnz_ldap_module (shared)
 include_module (shared)
....
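In a shell script, you can test for one specific module by filtering this output. The helper below is a sketch (not from the article) that parses apachectl -M style output; it is demonstrated on a captured sample so it can run anywhere:

```shell
# is_mod_enabled: succeed if the named Apache module appears in
# `apachectl -M`-style output passed as the first argument.
is_mod_enabled() {
    # $1 = module list text, $2 = module name (without the _module suffix)
    printf '%s\n' "$1" | grep -q "^ *${2}_module"
}

# Sample captured from `apachectl -M` for demonstration.
sample=' core_module (static)
 so_module (static)
 ldap_module (shared)'

if is_mod_enabled "$sample" ldap; then
    echo "ldap module is enabled"   # prints "ldap module is enabled"
fi
```

On a live server you would feed it real output, e.g. is_mod_enabled "$(apachectl -M 2>/dev/null)" ssl.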

That’s all! In this simple tutorial, we explained how to use the Apache front-end tools to list enabled/loaded Apache modules. Keep in mind that you can get in touch using the feedback form below to send us your questions or comments concerning this guide.


Apache Virtual Hosting: IP Based and Name Based Virtual Hosts in RHEL/CentOS/Fedora

As we are all aware, Apache is a very powerful, highly flexible and configurable web server for *nix OS. Here in this tutorial, we are going to discuss one more feature of Apache which allows us to host more than one website on a single Linux machine. Implementing virtual hosting with the Apache web server can help you save the costs you are investing in server maintenance and administration.

Don’t Miss: NGINX Name-based and IP-based Virtual Hosting (Server Blocks)

Apache Virtual Hosting in Linux

The concept of shared web hosting and reseller web hosting is based on this very facility of Apache.

Types of Virtual Host

There are two types of virtual hosting available with Apache.

Name Based Virtual Hosting

With name-based virtual hosting you can host several domains/websites on a single machine with a single IP. All domains on that server will share a single IP. It’s easier to configure than IP-based virtual hosting: you only need to configure DNS for the domain to map it to its correct IP address, and then configure Apache to recognize it by its domain name.

Name Based Virtual Hosting

IP Based Virtual Hosting

With IP-based virtual hosting, you can assign a separate IP for each domain on a single server; these IPs can be attached to the server with a single NIC card as well as with multiple NICs.

IP Based Virtual Hosting

Let’s set up name-based virtual hosting and IP-based virtual hosting in RHEL, CentOS and Fedora.

Testing Environment
  1. OS – CentOS 6.5
  2. Application – Apache Web Server
  3. IP Address – 192.168.0.100
  4. IP Address – 192.168.0.101
  5. Domain – www.example1.com
  6. Domain – www.example2.com

How to Setup IP Based and Name Based Apache Virtual Hosts

Before setting up virtual hosting with Apache, your system must have the Apache web server software installed. If not, install it using the default package installer called yum.

[root@tecmint ~]# yum install httpd

Setup Name Based Virtual Host

But before creating a virtual host, you need to create a directory where you will keep all your website’s files. So, create directories for these two virtual hosts under the /var/www/html folder. Please remember /var/www/html will be your default DocumentRoot in the Apache virtual host configuration.

[root@tecmint ~]# mkdir /var/www/html/example1.com/
[root@tecmint ~]# mkdir /var/www/html/example2.com/

To set up name-based virtual hosting, you need to tell Apache which IP you will be using to receive Apache requests for all the websites or domain names. We can do this with the NameVirtualHost directive. Open the main Apache configuration file with the vi editor.

[root@tecmint ~]# vi /etc/httpd/conf/httpd.conf

Search for NameVirtualHost and uncomment this line by removing the # sign in front of it.

NameVirtualHost

Next, add the IP address (and optionally the port) on which you want to receive Apache requests. After the changes, the line should look like this:

NameVirtualHost 192.168.0.100:80

Now it’s time to set up the virtual host sections for your domains; move to the bottom of the file by pressing Shift + G. Here in this example, we are setting up virtual host sections for two domains:

  1. www.example1.com
  2. www.example2.com

Add the following two virtual host directives at the bottom of the file. Save and close the file.

<VirtualHost 192.168.0.100:80>
    ServerAdmin webmaster@example1.com
    DocumentRoot /var/www/html/example1.com
    ServerName www.example1.com
    ErrorLog logs/www.example1.com-error_log
    CustomLog logs/www.example1.com-access_log common
</VirtualHost>

<VirtualHost 192.168.0.100:80>
    ServerAdmin webmaster@example2.com
    DocumentRoot /var/www/html/example2.com
    ServerName www.example2.com
    ErrorLog logs/www.example2.com-error_log
    CustomLog logs/www.example2.com-access_log common
</VirtualHost>

You are free to add as many directives as you want in your domain’s virtual host section. When you are done with the changes in the httpd.conf file, please check the syntax of the file with the following command.

[root@tecmint ~]# httpd -t

Syntax OK

It is recommended to check the syntax of the file after making changes and before restarting the web server, because if the syntax is wrong, Apache will refuse to work with some errors and your existing web server may go down for a while. If the syntax is OK, please restart your web server and add it to chkconfig to make your web server start in runlevels 3 and 5 at boot time.

[root@tecmint ~]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
[root@tecmint ~]# chkconfig --level 35 httpd on

Now it’s time to create a test page called index.html and add some content to the file, so we will have something to check when the IP calls the virtual host.

[root@tecmint ~]# vi /var/www/html/example1.com/index.html
<html>
  <head>
    <title>www.example1.com</title>
  </head>
  <body>
    <h1>Hello, Welcome to www.example1.com.</h1>
  </body>
</html>
[root@tecmint ~]# vi /var/www/html/example2.com/index.html
<html>
  <head>
    <title>www.example2.com</title>
  </head>
  <body>
    <h1>Hello, Welcome to www.example2.com.</h1>
  </body>
</html>
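If you have more than a couple of virtual hosts, the same test pages can be generated in a loop. A minimal sketch; DOCROOT defaults to a scratch directory here so it is safe to try, and you would set it to /var/www/html on the real server:

```shell
# Generate a minimal index.html for each virtual host.
# DOCROOT is a variable so the sketch can be tried anywhere;
# point it at /var/www/html on the real server.
DOCROOT=${DOCROOT:-/tmp/www}
for domain in example1.com example2.com; do
    mkdir -p "$DOCROOT/$domain"
    cat > "$DOCROOT/$domain/index.html" <<EOF
<html>
  <head><title>www.$domain</title></head>
  <body><h1>Hello, Welcome to www.$domain.</h1></body>
</html>
EOF
done
```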

Once you’re done with it, you can test the setup by accessing both the domains in a browser.

http://www.example1.com
http://www.example2.com
Preview: www.example1.com

Virtual Hosting: www.example1.com

Preview: www.example2.com

Virtual Hosting: www.example2.com

Setup IP Based Virtual Hosting in Linux

To set up IP-based virtual hosting, you must have more than one IP address/port assigned to your server or your Linux machine.

It can be on a single NIC card, for example: eth0:1, eth0:2, eth0:3 and so forth. Multiple NIC cards can also be attached. If you don’t know how to create multiple IPs on a single NIC, follow the guide below, which will help you in creating them.

  1. Create Multiple IP Addresses to One Single Network Interface

The purpose of implementing IP-based virtual hosting is to assign a dedicated IP for each domain, and that particular IP will not be used by any other domain.

This kind of setup is required when a website runs with an SSL certificate (mod_ssl) or on different ports and IPs. You can also run multiple instances of Apache on a single machine. To check the IPs attached to your server, use the ifconfig command.

[root@tecmint ~]# ifconfig
Sample Output
 
eth0      Link encap:Ethernet  HWaddr 08:00:27:4C:EB:CE  
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe4c:ebce/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17550 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15120 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16565983 (15.7 MiB)  TX bytes:2409604 (2.2 MiB)

eth0:1    Link encap:Ethernet  HWaddr 08:00:27:4C:EB:CE  
          inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1775 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1775 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3416104 (3.2 MiB)  TX bytes:3416104 (3.2 MiB)

As you can see in the above output, two IPs, 192.168.0.100 (eth0) and 192.168.0.101 (eth0:1), are attached to the server; both IPs are assigned to the same physical network device (eth0).

Now, to assign a specific IP/port to receive HTTP requests, simply change the Listen directive in the httpd.conf file.

[root@tecmint ~]# vi /etc/httpd/conf/httpd.conf

Search for the word “Listen”; you will find a section with a short description of the Listen directive. In that section, comment out the original line and write your own directive below it.

# Listen 80

Listen 192.168.0.100:80
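One caveat worth noting: with only ‘Listen 192.168.0.100:80’, Apache will not accept connections on 192.168.0.101 at all, so the second virtual host below would be unreachable. For a two-IP setup, the file most likely needs a Listen line per address (or simply ‘Listen 80’ to bind all addresses):

```
Listen 192.168.0.100:80
Listen 192.168.0.101:80
```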

Now, create virtual host sections for both domains. Go to the bottom of the file and add the following virtual host directives.

<VirtualHost 192.168.0.100:80>
    ServerAdmin webmaster@example1.com
    DocumentRoot /var/www/html/example1.com
    ServerName www.example1.com
    ErrorLog logs/www.example1.com-error_log
    TransferLog logs/www.example1.com-access_log
</VirtualHost>

<VirtualHost 192.168.0.101:80>
    ServerAdmin webmaster@example2.com
    DocumentRoot /var/www/html/example2.com
    ServerName www.example2.com
    ErrorLog logs/www.example2.com-error_log
    TransferLog logs/www.example2.com-access_log
</VirtualHost>

Now, since you have modified the main Apache configuration file, you need to restart the httpd service as shown below.

[root@tecmint ~]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]

Test your IP based Virtual hosting setup by accessing the URLs on web browser as shown below.

http://www.example1.com
http://www.example2.com

That’s all with Apache virtual hosts for today. If you’re looking to secure and harden your Apache configuration, then read our article that guides you through it.

  1. 13 Apache Web Server Security and Hardening Tips

Reference Links

Apache Virtual Host Documentation

I’ll come again with some other Apache tips and tricks in my future articles; till then, stay geeky and connected to Tecmint.com. Do not forget to leave your suggestions about the article in our comment section below.


Setting Up Web Servers Load Balancing Using ‘POUND’ on RHEL/CentOS

POUND is a load balancing program developed by the ITSECURITY Company. It is a lightweight open-source reverse proxy tool which can be used as a web server load balancer to distribute load among several servers. POUND gives the end user several advantages which are very convenient and do the job right:

  1. Supports virtual hosts.
  2. Configurable.
  3. When a backend server fails or recovers from a failure, it detects this automatically and bases its load balancing decisions on that.
  4. It rejects incorrect requests.
  5. No specific browser or web server is required.

Let’s have a look at how to get this done.

First of all, you will need a scenario for a better understanding of getting this done. So I will use a scenario with two web servers and one gateway server, where the gateway needs to balance the requests coming to it across the web servers.

Pound Gateway Server : 172.16.1.222
Web Server 01 : 172.16.1.204
Web Server 02 : 192.168.1.161

Pound Web Server Load Balancer

Step1: Install Pound Load Balancer on Gateway Server

1. The easiest way to install Pound is by using pre-compiled RPM packages; you can find RPMs for RedHat-based distributions at:

  1. http://www.invoca.ch/pub/packages/pound/

Alternatively, Pound can be easily installed from the EPEL repository as shown below.

# yum install epel-release
# yum install Pound

After Pound is installed, you can verify whether it is installed by issuing this command.

# rpm -qa | grep Pound

Install Pound Load Balancer

2. Secondly, you need two web servers to balance the load between, and make sure you have clear identifiers on them in order to test that the pound configuration works fine.

Here I have two servers bearing IP addresses 172.16.1.204 and 192.168.1.161.

For ease of use, I have used Python's SimpleHTTPServer to create an instant web server on both servers. Read about Python's SimpleHTTPServer.

In my scenario, I have webserver01 running on 172.16.1.204 on port 8888 and webserver02 running on 192.168.1.161 on port 5555.

Pound Webserver 1

Pound Webserver 2

Step 2: Configure Pound Load Balancer

3. Now it’s time to get the configuration done. Once you have installed pound successfully, it creates pound’s config file in /etc, namely pound.cfg.

We have to edit the server and backend details in order to balance the load among the web servers. Go to /etc and open the pound.cfg file for editing.

# vi /etc/pound.cfg

Make the changes as suggested below.

ListenHTTP
    Address 172.16.1.222
    Port 80
End

ListenHTTPS
    Address 172.16.1.222
    Port    443
    Cert    "/etc/pki/tls/certs/pound.pem"
End

Service
    BackEnd
        Address 172.16.1.204
        Port    8888
    End

    BackEnd
        Address 192.168.1.161
        Port    5555
    End
End

This is how my pound.cfg file looks.

Configure Pound Load Balancer

Under the “ListenHTTP” and “ListenHTTPS” tags, you have to enter the IP address of the server on which you have installed POUND.

By default, a server handles HTTP requests through port 80 and HTTPS requests through port 443. Under the “Service” tag, you can add any number of sub-tags called “BackEnd”. BackEnd tags bear the IP addresses and the port numbers which the web servers are running on.

Now save the file after editing it correctly and restart the POUND service by issuing one of the commands below.

# /etc/init.d/pound restart 
OR
# service pound restart
OR
# systemctl restart pound.service

Start Pound Load Balancer

4. Now it’s time to check. Open two web browsers to check whether our configuration works fine. In the address bar, type your POUND gateway’s IP address and see what appears.

The first request should load webserver01, and the second request, from the other web browser, should load webserver02.

Check Pound Load Balancing

Furthermore, consider a scenario where you have two webservers to load balance, and one of them performs well while the other does not.

When balancing the load between them, you will have to consider which server should carry more weight: obviously, the one with the better performance specs.

To balance the load like that, you just have to add a single parameter inside the pound.cfg file. Let’s have a look at it.

Suppose 192.168.1.161:5555 is the better server, so you want to direct more of the request flow to it. Under the “BackEnd” tag configured for the 192.168.1.161 server, add the “Priority” parameter before the End tag.

Look at below example.

Pound Load Balancing Priority
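As a sketch, the Service section might then look like this (the Priority value 7 is an illustrative assumption; any value from 1-9 works):

```
Service
    BackEnd
        Address 172.16.1.204
        Port    8888
    End

    BackEnd
        Address 192.168.1.161
        Port    5555
        Priority 7
    End
End
```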

The range we can use for the “Priority” parameter is 1-9. If we do not define it, a default value of 5 is assigned and the load is balanced equally.

If we do define a Priority number, POUND will send requests to the server with the higher priority number more often. So in this case, 192.168.1.161:5555 will be loaded more often than the server 172.16.1.204:8888.

Step 3: Planning for Emergency Breakdowns

5. Emergency Tag: This tag defines a server to be used in case all the backend servers are dead. You can add it before the last End tag of pound.cfg as follows.

Emergency
    Address 192.168.5.10
    Port    8080
End

6. POUND always keeps track of which backend servers are alive and which are not. We can define after how many seconds POUND should check the backend servers by adding the “Alive” parameter to pound.cfg.

For example, you can use “Alive 30” to set it to 30 seconds. POUND will temporarily disable backend servers which are not responding; a non-responding server may be dead or simply unable to establish a connection at that moment.

POUND re-checks a disabled backend server after every time period you have defined in the pound.cfg file, and if the server can establish a connection again, POUND puts it back to work.
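A sketch of how this might appear in the config (placement assumed: Alive is a global directive, so it goes near the top of /etc/pound.cfg, before the ListenHTTP section):

```
# Check dead backends every 30 seconds
Alive 30
```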

7. The POUND daemon is managed with the poundctl command. With it, we don’t need to edit the pound.cfg file; we can manage listeners, services, backend servers, sessions, etc. via a single command.

Syntax: poundctl -c /path/to/socket [-L/-l] [-S/-s] [-B/-b] [-N/-n] [-H] [-X]
  1. -c defines path to your socket.
  2. -L / -l defines the listener of your architecture.
  3. -S / -s defines the service.
  4. -B / -b defines the backend servers.

See poundctl man pages for more information.

Hope you enjoy this hack and discover more options regarding this. Feel free to comment below for any suggestions and ideas. Keep connected with Tecmint for handy and latest How To’s.

Read Also: Installing XR Crossroads Load Balancer for Web Servers

Source

Setting Up ‘XR’ (Crossroads) Load Balancer for Web Servers on RHEL/CentOS

Crossroads is a service-independent, open source load-balancing and fail-over utility for Linux and TCP-based services. It can be used for HTTP, HTTPS, SSH, SMTP, DNS, etc. It is also a multi-threaded utility which uses only one memory space, which helps increase performance when balancing load.

Let’s have a look at how XR works. XR sits between network clients and a nest of servers, and dispatches client requests to the servers, balancing the load.

If a server is down, XR forwards the next client request to the next server in line, so the client experiences no downtime. Have a look at the below diagram to understand what kind of a situation we are going to handle with XR.

Install XR Crossroads Load Balancer

There are two web-servers and one gateway server, on which we install and set up XR to receive client requests and distribute them among the web-servers.

XR Crossroads Gateway Server : 172.16.1.204
Web Server 01 : 172.16.1.222
Web Server 02 : 192.168.1.161

In the above scenario, my gateway server (i.e. XR Crossroads) bears the IP address 172.16.1.204, webserver01 is 172.16.1.222 and listens through port 8888, and webserver02 is 192.168.1.161 and listens through port 5555.

Now all I need is to balance the load of all the requests received by the XR gateway from the internet and distribute them among the two web-servers.

Step 1: Install XR Crossroads Load Balancer on Gateway Server

1. Unfortunately, there aren’t any binary RPM packages available for Crossroads; the only way to install XR Crossroads is from the source tarball.

To compile XR, you must have a C++ compiler and the GNU make utilities installed on the system in order to continue the installation error-free.

# yum install gcc gcc-c++ make

Next, download the source tarball by going to their official site (https://crossroads.e-tunity.com), and grab the archived package (i.e. crossroads-stable.tar.gz).

Alternatively, you may use the wget utility as follows to download the package, extract it in any location (e.g. /usr/src/), go to the unpacked directory and issue the “make install” command.

# wget https://crossroads.e-tunity.com/downloads/crossroads-stable.tar.gz
# tar -xvf crossroads-stable.tar.gz
# cd crossroads-2.74/
# make install

Install XR Crossroads Load Balancer

After the installation finishes, the binaries are created under /usr/sbin/ and the XR configuration file within /etc, namely “xrctl.xml”.

2. As the last prerequisite, you need two web-servers. For ease of use, I have created two python SimpleHTTPServer instances.

To see how to setup a python SimpleHTTPServer, read our article at Create Two Web Servers Easily Using SimpleHTTPServer.

As I said, we’re using two web-servers: webserver01 running on 172.16.1.222 through port 8888 and webserver02 running on 192.168.1.161 through port 5555.

XR WebServer 01

XR WebServer 02

Step 2: Configure XR Crossroads Load Balancer

3. All requisites are in place. Now what we have to do is configure the xrctl.xml file to distribute the load, received by the XR server from the internet, among the web-servers.

Now open xrctl.xml file with vi/vim editor.

# vim /etc/xrctl.xml

and make the changes as suggested below.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system>
<uselogger>true</uselogger>
<logdir>/tmp</logdir>
</system>
<service>
<name>Tecmint</name>
<server>
<address>172.16.1.204:8080</address>
<type>tcp</type>
<webinterface>0:8010</webinterface>
<verbose>yes</verbose>
<clientreadtimeout>0</clientreadtimeout>
<clientwritetimeout>0</clientwritetimeout>
<backendreadtimeout>0</backendreadtimeout>
<backendwritetimeout>0</backendwritetimeout>
</server>
<backend>
<address>172.16.1.222:8888</address>
</backend>
<backend>
<address>192.168.1.161:5555</address>
</backend>
</service>
</configuration>

Configure XR Crossroads Load Balancer

Here, you can see a very basic XR configuration done within xrctl.xml. I have defined the XR server address, the backend servers and their ports, and XR’s web-interface port.

4. Now you need to start the XR daemon by issuing below commands.

# xrctl start
# xrctl status

Start XR Crossroads

5. Okay great. Now it’s time to check whether the configs are working fine. Open two web browsers and enter the IP address of the XR server with port and see the output.

Verify Web Server Load Balancing

Fantastic. It works fine. Now it’s time to play with XR.

6. Now it’s time to login into XR Crossroads dashboard and see the port we’ve configured for web-interface. Enter your XR server’s IP address with the port number for web-interface you have configured in xrctl.xml.

http://172.16.1.204:8010

XR Crossroads Dashboard

This is what it looks like. It’s user-friendly and easy to understand and use. In the top right corner it shows how many connections each backend server has received, along with additional details about incoming requests. You can even set the load weight each server should bear, the maximum number of connections, load average, etc.

The best part is, you can actually do this even without configuring xrctl.xml. The only thing you have to do is issue a command with the following syntax and it will get the job done.

# xr --verbose --server tcp:172.16.1.204:8080 --backend 172.16.1.222:8888 --backend 192.168.1.161:5555

Explanation of above syntax in detail:

  1. --verbose shows what happens as the command executes.
  2. --server defines the XR server you have installed the package on.
  3. --backend defines the webservers you need to balance the traffic to.
  4. tcp defines that it uses TCP services.

For more details, about documentations and configuration of CROSSROADS, please visit their official site at: https://crossroads.e-tunity.com/.

XR Crossroads enables many ways to enhance your server performance, protect against downtime, and make your admin tasks easier and handier. Hope you enjoyed the guide and feel free to comment below with suggestions and clarifications.

Read Also: Installing Pound Load Balancer to Control Web Server Load

Source

How to Sync Two Apache Web Servers/Websites Using Rsync

There are many tutorials available on the web to mirror or take a backup of your web files with different methods. I am creating this article for my future reference, and here I’ll be using a very simple and versatile Linux command to create a backup of your website. This tutorial will help you sync data between your two web servers with “Rsync“.

Sync Two Apache Web Servers

The purpose of creating a mirror of your web server with Rsync is that if your main web server fails, your backup server can take over to reduce the downtime of your website. This way of creating a web server backup is very good and effective for small and medium-sized web businesses.

Advantages of Syncing Web Servers

The main advantages of creating a web server backup with rsync are as follows:

  1. Rsync syncs only those bytes and blocks of data that have changed.
  2. Rsync has the ability to check and delete those files and directories at backup server that have been deleted from the main web server.
  3. It takes care of permissions, ownerships and special attributes while copying data remotely.
  4. It also supports SSH protocol to transfer data in an encrypted manner so that you will be assured that all data is safe.
  5. Rsync uses compression and decompression method while transferring data which consumes less bandwidth.

How To Sync Two Apache Web Servers

Let’s proceed with setting up rsync to create a mirror of your web server. Here, I’ll be using two servers.

Main Server
  1. IP Address: 192.168.0.100
  2. Hostname: webserver.example.com
Backup Server
  1. IP Address: 192.168.0.101
  2. Hostname: backup.example.com

Step 1: Install Rsync Tool

Here, in this case, web server data of webserver.example.com will be mirrored on backup.example.com. To do so, we first need to install Rsync on both servers with the help of the following commands.

[root@tecmint]# yum install rsync        [On Red Hat based systems]
[root@tecmint]# apt-get install rsync    [On Debian based systems]

Step 2: Create a User to run Rsync

We could set up rsync with the root user, but for security reasons, you can create an unprivileged user on the main webserver, i.e. webserver.example.com, to run rsync.

[root@tecmint]# useradd tecmint
[root@tecmint]# passwd tecmint

Here I have created a user “tecmint” and assigned a password to user.

Step 3: Test Rsync Setup

It’s time to test your rsync setup on your backup server (i.e. backup.example.com) and to do so, please type following command.

[root@backup www]# rsync -avzhe ssh tecmint@webserver.example.com:/var/www/ /var/www
Sample Output
tecmint@webserver.example.com's password:

receiving incremental file list
sent 128 bytes  received 32.67K bytes  5.96K bytes/sec
total size is 12.78M  speedup is 389.70

You can see that your rsync is now working absolutely fine and syncing data. I have used “/var/www” for the transfer; you can change the folder location according to your needs.
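Before pointing rsync at a live server, you can also rehearse with -n (--dry-run), which lists what would be transferred without copying anything. A local sketch with throwaway paths:

```shell
mkdir -p /tmp/dryrun/src /tmp/dryrun/dst
echo "page" > /tmp/dryrun/src/page.html

# -n previews the transfer; nothing is actually written to dst:
rsync -avn /tmp/dryrun/src/ /tmp/dryrun/dst/
```

The same flag works with the remote form, e.g. rsync -avzhn -e ssh tecmint@webserver.example.com:/var/www/ /var/www.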

Step 4: Automate Sync with SSH Passwordless Login

Now we are done with the rsync setup, and it’s time to set up a cron job for rsync. As we are going to use rsync over the SSH protocol, ssh will ask for authentication, and since we can’t provide a password from cron, it will not work. In order for cron to work smoothly, we need to set up passwordless ssh logins for rsync.

Here in this example, I am doing it as root to preserve file ownerships as well, you can do it for alternative users too.

First, we’ll generate a public and private key with the following command on the backup server (i.e. backup.example.com).

[root@backup]# ssh-keygen -t rsa -b 2048

When you enter this command, please don’t provide a passphrase; just press Enter for an empty passphrase, so that the rsync cron job will not need any password for syncing data.

Sample Output
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
9a:33:a9:5d:f4:e1:41:26:57:d0:9a:68:5b:37:9c:23 root@backup.exmple.com
The key's randomart image is:
+--[ RSA 2048]----+
|          .o.    |
|           ..    |
|        ..++ .   |
|        o=E *    |
|       .Sooo o   |
|       =.o o     |
|      * . o      |
|     o +         |
|    . .          |
+-----------------+
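If you prefer to skip the prompts entirely, ssh-keygen can also be run non-interactively: -N "" sets the empty passphrase and -f sets the output path (a throwaway path is used here for illustration):

```shell
# Generate a 2048-bit RSA key pair with an empty passphrase, no prompts:
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/demo_rsa -q

ls -l /tmp/demo_rsa /tmp/demo_rsa.pub
```

On the real backup server the -f argument would be /root/.ssh/id_rsa.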

Now our public and private keys have been generated, and we have to share the public key with the main server so that the main web server will recognize this backup machine and allow it to log in without asking for any password while syncing data.

[root@backup html]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@webserver.example.com

Now try logging into the machine with “ssh root@webserver.example.com“, and check in ~/.ssh/authorized_keys.

[root@backup html]# ssh root@webserver.example.com

Now, we are done with sharing keys. To learn more in-depth about SSH passwordless login, you can read our article on it.

  1. SSH Passwordless Login in 5 Easy Steps

Step 5: Schedule Cron To Automate Sync

Let’s set up a cron job for this. To do so, open the crontab file with the following command.

[root@backup ~]# crontab -e

It will open your crontab file for editing with your default editor. Here, in this example, I am writing a cron job to sync the data every 5 minutes.

*/5        *        *        *        *   rsync -avzhe ssh root@webserver.example.com:/var/www/ /var/www/

The above cron job and rsync command simply sync “/var/www/” from the main web server to the backup server every 5 minutes. You can change the time and folder location according to your needs. To be more creative and customized with the Rsync and Cron commands, you can check out our more detailed articles at:

  1. 10 Rsync Commands to Sync Files/Folders in Linux
  2. 11 Cron Scheduling Examples in Linux

Source

GoAccess (A Real-Time Apache and Nginx) Web Server Log Analyzer

GoAccess is an interactive and real-time web server log analyzer program that quickly analyzes and displays web server logs. It is open source and runs from the command line on Unix/Linux operating systems. It provides brief and beneficial HTTP (web server) statistics reports for Linux administrators on the fly. It also takes care of both the Apache and Nginx web server log formats.

GoAccess parses and analyzes the given web server log in several supported formats, including CLF (Common Log Format), the W3C format (IIS) and Apache virtual hosts, and then outputs the data to the terminal.

GoAccess Features

It has the following features.

  1. General Statistics, bandwidth etc.
  2. Top Visitors, Visitors Time Distribution, Referring Sites & URLs and 404 or Not Found.
  3. Hosts, Reverse DNS, IP Location.
  4. Operating Systems, Browsers and Spiders.
  5. HTTP Status Codes
  6. Geo Location – Continent/Country/City
  7. Metrics per Virtual Host
  8. Support for HTTP/2 & IPv6
  9. Ability to output JSON and CSV
  10. Incremental log processing and support for large datasets + data persistence
  11. Different Color Schemes

How Do I Install GoAccess?

Presently, the most recent version of GoAccess, 0.9.8, is not available from the default system package repositories, so to install the latest stable version, you need to manually download and compile it from source code under Linux systems as shown:

Install GoAccess from Source

# yum install ncurses-devel glib2-devel geoip-devel
# cd /usr/src
# wget http://tar.goaccess.io/goaccess-0.9.8.tar.gz
# tar zxvf goaccess-0.9.8.tar.gz
# cd goaccess-0.9.8/
# ./configure
# make; make install

Install GoAccess Using Package Manager

The easiest and preferred way to install GoAccess on Linux is using the default package manager of your respective Linux distribution.

Note: As I said above, not all distributions will have the most recent version of GoAccess available in the system default repositories.

On RedHat, CentOS and Fedora
# yum install goaccess
# dnf install goaccess    [From Fedora 23+ versions]
On Debian and Ubuntu Systems

GoAccess utility is available since Debian Squeeze 6 and Ubuntu 11.04. To install just run the following command on the terminal.

# apt-get install goaccess

Note: The above command will not always provide you the most latest version. To get the latest stable version of GoAccess, add the official GoAccess Debian & Ubuntu repository as shown:

$ echo "deb http://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list
$ wget -O - http://deb.goaccess.io/gnugpg.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install goaccess

How Do I Use GoAccess?

Once goaccess is installed, executing the ‘goaccess‘ command without any arguments will list the help menu.

# goaccess
Sample Output
GoAccess - 0.9.8

Usage: goaccess [ options ... ] -f log_file [-c][-M][-H][-q][-d][...]
The following options can also be supplied to the command:

Log & Date Format Options

  --log-format=<logformat>        - Specify log format. Inner quotes need to
                                    be escaped, or use single quotes.
  --date-format=<dateformat>      - Specify log date format. e.g.,
                                    %d/%b/%Y
  --time-format=<timeformat>      - Specify log time format. e.g.,
                                    %H:%M:%S

User Interface Options

  -c --config-dialog              - Prompt log/date/time configuration
                                    window.
  -i --hl-header                  - Color highlight active panel.
  -m --with-mouse                 - Enable mouse support on main dashboard.
  --color=<fg:bg[attrs, PANEL]>   - Specify custom colors. See manpage for
                                    more details and options.
  --color-scheme=<1|2>            - Color schemes: 1 => Grey, 2 => Green.
  --html-report-title=<title>     - Set HTML report page title and header.
  --no-color                      - Disable colored output.
  --no-column-names               - Don't write column names in term
                                    output.
  --no-csv-summary                - Disable summary metrics on the CSV
                                    output.
  --no-progress                   - Disable progress metrics.
  --no-tab-scroll                 - Disable scrolling through panels on TAB.

File Options

  -f --log-file=<filename>        - Path to input log file.
  -l --debug-file=<filename>      - Send all debug messages to the specified
                                    file.
  -p --config-file=<filename>     - Custom configuration file.
  --invalid-requests=<filename>   - Log invalid requests to the specified
                                    file.
  --no-global-config              - Don't load global configuration
                                    file.
.....

The easiest way to get web server statistics is to use the ‘-f‘ flag with the input log file name, as shown below. The below commands will give you the general statistics of your web server logs.

# goaccess -f /var/log/httpd/tecmint.com
# goaccess -f /var/log/nginx/tecmint.com

The above command gives you a complete overview of web server metrics by showing summaries of various reports as panels in a single scrollable view, as shown.

Apache Logs Overview

View Web Server Apache Logs

Apache Logs by Operating System – Overview

View Apache Logs By Operating System

Apache Logs by Visitor Bandwidth – Overview

View Apache Visitor Bandwidth Usage

Apache Logs by Web Browser – Overview

View Apache Usage based on Browsers

How do I generate Apache HTML report?

To generate an HTML report of your Apache webserver logs, just run goaccess against your web log file and redirect the output to an HTML file.

# goaccess -f /var/log/httpd/access_log > reports.html

GoAccess: Monitor Apache Logs Using Web Browser

For more information and usage please visit http://goaccess.io/.

Source

httpstat – A Curl Statistics Tool to Check Website Performance

httpstat is a Python script that presents curl statistics in a fascinating and well-defined way. It is a single file, compatible with Python 3, and requires no additional software (dependencies) to be installed on a user’s system.

It is fundamentally a wrapper around the cURL tool, meaning that you can use several valid cURL options after a URL, excluding the options -w, -D, -o, -s, and -S, which are already employed by httpstat.

httpstat Curl Statistics Tool

You can see in the above image an ASCII table displaying how long each phase took. For me the most important step is “server processing”; if this number is high, then you need to tune your server to speed up your website.

For website or server tuning you can check our articles here:

  1. 5 Tips to Tune Performance of Apache Web Server
  2. Speed Up Apache and Nginx Performance Upto 10x
  3. How to Boost Nginx Performance Using Gzip Module
  4. 15 Tips to Tune MySQL/MariaDB Performance

Grab httpstat to check out your website speed using the following installation instructions and usage.

Install httpstat in Linux Systems

You can install httpstat utility using two possible methods:

1. Get it directly from its Github repo using the wget command as follows:

$ wget -c https://raw.githubusercontent.com/reorx/httpstat/master/httpstat.py

2. Using pip (this method allows httpstat to be installed on your system as a command) like so:

$ sudo pip install httpstat

Note: Make sure the pip package is installed on the system; if not, install it using your distribution package manager, yum or apt.

How to Use httpstat in Linux

httpstat is used according to the way you installed it. If you directly downloaded it, run it using the following syntax from within the download directory:

$ python httpstat.py url cURL_options 

In case you used pip to install it, you can execute it as a command in the form below:

$ httpstat url cURL_options  

To view the help page for httpstat, issue the command below:

$ python httpstat.py --help
OR
$ httpstat --help
httpstat help
Usage: httpstat URL [CURL_OPTIONS]
       httpstat -h | --help
       httpstat --version

Arguments:
  URL     url to request, could be with or without `http(s)://` prefix

Options:
  CURL_OPTIONS  any curl supported options, except for -w -D -o -S -s,
                which are already used internally.
  -h --help     show this screen.
  --version     show version.

Environments:
  HTTPSTAT_SHOW_BODY    Set to `true` to show response body in the output,
                        note that body length is limited to 1023 bytes, will be
                        truncated if exceeds. Default is `false`.
  HTTPSTAT_SHOW_IP      By default httpstat shows remote and local IP/port address.
                        Set to `false` to disable this feature. Default is `true`.
  HTTPSTAT_SHOW_SPEED   Set to `true` to show download and upload speed.
                        Default is `false`.
  HTTPSTAT_SAVE_BODY    By default httpstat stores body in a tmp file,
                        set to `false` to disable this feature. Default is `true`
  HTTPSTAT_CURL_BIN     Indicate the curl bin path to use. Default is `curl`
                        from current shell $PATH.
  HTTPSTAT_DEBUG        Set to `true` to see debugging logs. Default is `false`

From the output of the help command above, you can see that httpstat has a collection of useful environmental variables that influence its behavior.

To use them, simply export the variables with the appropriate value in the .bashrc or .zshrc file.

For instance:

export  HTTPSTAT_SHOW_IP=false
export  HTTPSTAT_SHOW_SPEED=true
export  HTTPSTAT_SAVE_BODY=false
export  HTTPSTAT_DEBUG=true

Once you are done adding them, save the file and run the command below for the changes to take effect:

$ source  ~/.bashrc

You can as well specify the cURL binary path to use; the default is curl from the current shell’s $PATH environment variable.

Below are a few examples showing how httpstat works.

$ python httpstat.py google.com
OR
$ httpstat google.com

httpstat – Showing Website Statistics

In the next command:

  1. The -X command flag specifies a custom request method to use while communicating with the HTTP server.
  2. --data-urlencode posts data (a=b in this case) with URL-encoding turned on.
  3. -v enables a verbose mode.

$ python httpstat.py httpbin.org/post -X POST --data-urlencode "a=b" -v

httpstat – Custom Post Request

You can look through the cURL man page for more useful and advanced options or visit the httpstat Github repository: https://github.com/reorx/httpstat

In this article, we have covered a useful tool for monitoring cURL statistics in a simple and clear way. If you know of any such tools out there, do not hesitate to let us know, and you can as well ask a question or make a comment about this article or httpstat via the feedback section below.

Source

iftop – A Real Time Linux Network Bandwidth Monitoring Tool

In our earlier article, we reviewed the usage of the TOP command and its parameters. In this article we have come up with another excellent program called Interface TOP (iftop), a real-time console-based network bandwidth monitoring tool.

It shows a quick overview of network activities on an interface. Iftop shows a real-time updated list of network bandwidth usage averaged over 2, 10 and 40 seconds. In this post we are going to see the installation and how to use iftop with examples in Linux.

Requirements:

  1. libpcap : library for capturing live network data.
  2. libncurses : a programming library that provides an API for building text-based interfaces in a terminal-independent way.

Install libpcap and libncurses

First start by installing libpcap and libncurses libraries using your Linux distribution package manager as shown.

$ sudo apt install libpcap0.8 libpcap0.8-dev libncurses5 libncurses5-dev  [On Debian/Ubuntu]
# yum  -y install libpcap libpcap-devel ncurses ncurses-devel             [On CentOS/RHEL]
# dnf  -y install libpcap libpcap-devel ncurses ncurses-devel             [On Fedora 22+]

Download and Install iftop

Iftop is available in the official software repositories of Debian/Ubuntu Linux, you can install it using apt command as shown.

$ sudo apt install iftop

On RHEL/CentOS, you need to enable the EPEL repository, and then install it as follows.

# yum install epel-release
# yum install  iftop

On Fedora distribution, iftop is also available from the default system repositories to install using the following command.

# dnf install iftop

On other Linux distributions, you can download the iftop source package using the wget command and compile it from source as shown.

# wget http://www.ex-parrot.com/pdw/iftop/download/iftop-0.17.tar.gz
# tar -zxvf iftop-0.17.tar.gz
# cd iftop-0.17
# ./configure
# make
# make install

Basic usage of Iftop

Once the installation is done, go to your console and run the iftop command without any arguments to view the bandwidth usage of the default interface, as shown in the screenshot below.

$ sudo iftop

Sample output of the iftop command, showing the bandwidth of the default interface:

Monitor Linux Network Bandwidth Real Time

Monitor Linux Network Interface

First run the following ifconfig command or ip command to find all attached network interfaces on your Linux system.

$ sudo ifconfig
OR
$ sudo ip addr show

Then use the -i flag to specify the interface you want to monitor. For example, the command below is used to monitor bandwidth on the wireless interface of the test computer.

$ sudo iftop -i wlp2s0

Monitor Linux Wifi Network Bandwidth

To disable hostname lookups, use the -n flag.

$ sudo iftop -n -i eth0

To turn on port display, use the -P switch.

$ sudo iftop -P -i eth0

Iftop Options and Usage

While running iftop you can use keys like S and D to toggle the display of source and destination ports. Please do run man iftop if you want to explore more options and tricks. Press ‘q‘ to quit.

In this article, we’ve shown how to install and use iftop, a network interface monitoring tool in Linux. If you want to know more about iftop, please visit the iftop website. Kindly share it and send your comments through our comment box below.

Source

How to Install and Configure ‘Collectd’ and ‘Collectd-Web’ to Monitor Server Resources in Linux

Collectd-web is a web front-end monitoring tool based on RRDtool (Round-Robin Database Tool), which interprets and graphically outputs the data collected by the Collectd service on Linux systems.

The Collectd service comes by default with a huge collection of available plug-ins in its default configuration file, some of them already activated once you have installed the software package.

The Collectd-web CGI scripts, which interpret the data and generate the graphical HTML statistics pages, can simply be executed by the Apache CGI gateway with minimal configuration required on the Apache web server side.

However, the graphical web interface with the generated statistics can also be served by the standalone web server offered by the Python CGIHTTPServer script that ships with the main Git repository.

This tutorial will cover the installation process of Collectd service and Collectd-web interface on RHEL/CentOS/Fedora and Ubuntu/Debian based systems with the minimal configurations needed to be done in order to run the services and to enable a Collectd service plug-in.

Please go through the following articles of collectd series.

Part 1Install and Configure ‘Collectd’ and ‘Collectd-Web’ to Monitor Linux Resources

Step 1: – Install Collectd Service

1. Basically, the Collectd daemon’s task is to gather and store data statistics on the system it runs on. The Collectd package can be downloaded and installed from the default Debian-based distribution repositories by issuing the following command:

On Ubuntu/Debian
# apt-get install collectd			[On Debian based Systems]

Install Collectd on Debian/Ubuntu

On RHEL/CentOS 6.x/5.x

On older RedHat based systems like CentOS/Fedora, you first need to enable the epel repository on your system, and then you can install the collectd package from the epel repository.

# yum install collectd
On RHEL/CentOS 7.x

On latest version of RHEL/CentOS 7.x, you can install and enable epel repository from default yum repos as shown below.

# yum install epel-release
# yum install collectd

Install Collectd on CentOS/RHEL/Fedora

Note: For Fedora users, there is no need to enable any third-party repositories; a simple yum will get the collectd package from the default yum repositories.

2. Once the package is installed on your system, run the below command in order to start the service.

# service collectd start			[On Debian and RHEL/CentOS 6.x/5.x Systems]
# systemctl start collectd.service              [On RHEL/CentOS 7.x Systems]
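Once started, you can verify that the daemon is actually collecting data: by default collectd writes one RRD file per metric under /var/lib/collectd (a packaging default; the path differs if DataDir is set in collectd.conf). A small sketch:

```shell
#!/bin/sh
# Sanity check: count the .rrd files collectd has written so far.
# DATADIR is the common packaging default -- adjust it if your
# collectd.conf sets a different DataDir.
DATADIR=${DATADIR:-/var/lib/collectd}

count_rrd() {
    # count .rrd files anywhere below the given directory
    find "$1" -name '*.rrd' 2>/dev/null | wc -l
}

echo "found $(count_rrd "$DATADIR") RRD files under $DATADIR"
```

A growing count over time confirms the daemon is gathering metrics.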

Step 2: Install Collectd-Web and Dependencies

3. Before importing the Collectd-web Git repository, first make sure that the git package and the following required dependencies are installed on your machine:

----------------- On Debian / Ubuntu systems -----------------
# apt-get install git
# apt-get install librrds-perl libjson-perl libhtml-parser-perl

Install Git on Debian/Ubuntu

----------------- On RedHat/CentOS/Fedora based systems -----------------
# yum install git
# yum install rrdtool rrdtool-devel rrdtool-perl perl-HTML-Parser perl-JSON

Install Git and Dependencies

Step 3: Import Collectd-Web Git Repository and Modify Standalone Python Server

4. Next, change to a directory where you want to keep the project (this tutorial uses /usr/local/), then run the following command to clone the Collectd-web Git repository:

# cd /usr/local/
# git clone https://github.com/httpdss/collectd-web.git

Git Clone Collectd-Web

5. Once the repository has been cloned, enter the collectd-web directory and list its contents to identify the standalone Python server script (runserver.py), which will be modified in the next step. Also, add execute permission to the CGI script cgi-bin/graphdefs.cgi:

# cd collectd-web/
# ls
# chmod +x cgi-bin/graphdefs.cgi

Set Execute Permission

6. The Collectd-web standalone Python server script is configured by default to bind only to the loopback address (127.0.0.1).

To access the Collectd-web interface from a remote browser, edit the runserver.py script and change the 127.0.0.1 address to 0.0.0.0, so that the server binds to all network interfaces.

If you want to bind to one specific interface only, use that interface's IP address instead (not advisable if the address is dynamically allocated by a DHCP server). Use the screenshot below as a reference for how the final runserver.py script should look:

# nano runserver.py

Configure Collectd-web

If you want to use a port other than 8888, modify the value of the PORT variable.
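If you prefer to script this edit instead of opening nano, a sed one-liner can rewrite the bind address in place. This is a sketch: it assumes runserver.py contains the literal string 127.0.0.1, so inspect the file first if your copy differs.

```shell
# Replace the loopback bind address with 0.0.0.0; -i.bak keeps a backup copy
sed -i.bak 's/127\.0\.0\.1/0.0.0.0/' runserver.py

# Confirm the substitution took effect
grep -n '0\.0\.0\.0' runserver.py
```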

Step 4: Run Python CGI Standalone Server and Browse Collectd-web Interface

7. After you have changed the IP address binding in the standalone Python server script, start the server in the background by issuing the following command:

# ./runserver.py &

Alternatively, you can invoke the Python interpreter explicitly to start the server:

# python runserver.py &

Start Collect-Web Server

8. To view the Collectd-web interface and the statistics for your host, open a browser and point it to your server's IP address on port 8888 over HTTP.

By default, after clicking the hostname displayed in the Hosts list, you will see a number of graphs for CPU, disk usage, network traffic, RAM, processes, and other system resources.

http://192.168.1.211:8888
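Before opening a browser you can check reachability from the command line with curl (192.168.1.211 is the example address used above; substitute your server's IP):

```shell
# Print only the HTTP status code; 200 means the standalone server answers
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.211:8888/
```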

Access Collect-Web Panel

Linux Disk Monitoring

9. To stop the standalone Python server, issue the command below, or press Ctrl+c in its terminal:

# killall python
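Note that killall python terminates every Python process on the machine, not just the Collectd-web server. A narrower alternative is to match only the server script by its command line:

```shell
# Kill only processes whose command line contains runserver.py
pkill -f 'runserver\.py'
```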

Step 5: Create a Custom Bash Script to Manage the Standalone Python Server

10. To manage the standalone Python CGI server script more easily (start, stop, and view its status), create the following collectd-server Bash script in a directory on the system executable path:

# nano /usr/local/bin/collectd-server

Add the following excerpt to collectd-server file.

#!/bin/bash
# Simple manager for the Collectd-web standalone Python server.
# Keep PORT in sync with the PORT variable in runserver.py.
PORT="8888"

case $1 in
    start)
        cd /usr/local/collectd-web/
        python runserver.py 2> /tmp/collectd.log &
        sleep 1
        stat=$(netstat -tlpn 2>/dev/null | grep ":$PORT" | grep "python")
        if [ -n "$stat" ]; then
            echo -e "Server is running:\n$stat"
        else
            echo "Server has stopped"
        fi
        ;;
    stop)
        # the [r] in the pattern keeps grep from matching its own process
        pid=$(ps ax | grep "[r]unserver.py" | awk '{print $1}')
        [ -n "$pid" ] && kill -9 $pid 2>/dev/null
        stat=$(netstat -tlpn 2>/dev/null | grep ":$PORT" | grep "python")
        if [ -n "$stat" ]; then
            echo -e "Server is still running:\n$stat"
        else
            echo "Server has stopped"
        fi
        ;;
    status)
        stat=$(netstat -tlpn 2>/dev/null | grep ":$PORT" | grep "python")
        if [ -n "$stat" ]; then
            echo -e "Server is running:\n$stat"
        else
            echo "Server is stopped"
        fi
        ;;
    *)
        echo "Use $0 start|stop|status"
        ;;
esac

If you changed the PORT variable in the runserver.py script, make sure to update the PORT variable in this Bash script accordingly.

11. Once you have created the collectd-server script, add execute permission so you can run it. You can then manage the Collectd-web server much like a system service by issuing the following commands:

# chmod +x /usr/local/bin/collectd-server
# collectd-server start 
# collectd-server status
# collectd-server stop

Collectd Server Script

Step 6: Enable a Collectd Daemon Plug-in

12. To activate a plug-in for the collectd service, open its main configuration file, located at /etc/collectd/collectd.conf, and uncomment (remove the # sign in front of) the LoadPlugin line for the plug-in you want to activate.

Once the LoadPlugin statement has been uncommented, search further through the file and locate the configuration block of the same name, which holds the settings the plug-in needs to run.

As an example, here is how to activate the collectd Apache plug-in. First open the main collectd configuration file for editing:

# nano /etc/collectd/collectd.conf

A. Press Ctrl+w to open nano's search prompt at the bottom of the terminal and type apache. Once the LoadPlugin apache statement has been found, remove the # comment sign to uncomment it, as illustrated in the screenshot below.

Enable Collectd Apache Plugin
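If you prefer not to edit the file interactively, the same change can be scripted with sed (shown here for the apache plug-in; substitute another plug-in name as needed). The path follows this tutorial's Debian layout; on RHEL/CentOS the file is typically /etc/collectd.conf.

```shell
# Remove the leading '#' from the LoadPlugin apache line only
sed -i 's/^#\(LoadPlugin apache\)/\1/' /etc/collectd/collectd.conf

# Confirm: the line should now appear without the '#'
grep -n '^LoadPlugin apache' /etc/collectd/collectd.conf
```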

B. Next, press Ctrl+w to search again (apache should still appear in the search field) and press Enter to find the plug-in's configuration block.

Once the apache plug-in configuration is located (it looks similar to Apache web server directives), uncomment the following lines so that the final configuration resembles this:

<Plugin apache>
        <Instance "example.lan">
                URL "http://localhost/server-status?auto"
#               User "www-user"
#               Password "secret"
#               VerifyPeer false
#               VerifyHost false
#               CACert "/etc/ssl/ca.crt"
#               Server "apache"
        </Instance>
#
#       <Instance "bar">
#               URL "http://some.domain.tld/status?auto"
#               Host "some.domain.tld"
#               Server "lighttpd"
#       </Instance>
</Plugin>

Enable Apache Configuration for Collectd

Note: Replace the <Instance "example.lan"> string with your server's hostname.
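Keep in mind that the apache plug-in polls http://localhost/server-status?auto, which only works if Apache's mod_status module is enabled and a /server-status location is configured. On Debian/Ubuntu that could look like the sketch below (the Require local directive is Apache 2.4 syntax; on Apache 2.2 use Order deny,allow with Allow from 127.0.0.1 instead, and on RHEL/CentOS add the Location block to httpd.conf directly):

```shell
# Enable mod_status and restrict the status page to local requests
a2enmod status
cat > /etc/apache2/conf-available/status-local.conf <<'EOF'
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF
a2enconf status-local
service apache2 reload
```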

C. After you finish editing the file, save it (Ctrl+o) and close it (Ctrl+x), then restart the collectd daemon to apply the changes, and make sure the Collectd-web server is running. Clear your browser cache and reload the page to view the statistics collected so far for the Apache web server.

# service collectd restart
# /usr/local/bin/collectd-server start

Apache Monitoring

To enable other plug-ins please visit Collectd Wiki page.

Step 7: Enable Collectd Daemon and Collectd-web Server System-Wide

13. To start the Collectd-web server automatically at boot via the Bash script, open the /etc/rc.local file for editing and add the following line before the exit 0 statement:

/usr/local/bin/collectd-server start

Enable Collectd Systemwide

If you're not using the collectd-server Bash script to manage the Python server, add the following line to rc.local instead:

cd /usr/local/collectd-web/ && python runserver.py 2> /tmp/collectd.log &
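On some distributions /etc/rc.local is not executable by default, in which case nothing in it runs at boot; it does no harm to make sure:

```shell
# rc.local must be executable for init to run it at boot
chmod +x /etc/rc.local
```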

Then, enable both system services by issuing the following commands:

------------------ On Debian / Ubuntu ------------------
# update-rc.d collectd enable
# update-rc.d rc.local enable

Optionally, an alternate method to enable these services at boot time would be with the help of the sysv-rc-conf package:

------------------ On Debian / Ubuntu ------------------
# sysv-rc-conf collectd on
# sysv-rc-conf rc.local on
------------------ On RHEL/CentOS 6.x/5.x and Fedora 12-19 ------------------
# chkconfig collectd on
# chkconfig --level 5 collectd on
------------------ On RHEL/CentOS 7.x and Fedora 20 onwards ------------------
# systemctl enable collectd

That's all! The collectd daemon and Collectd-web server prove to be excellent monitoring tools for Linux servers, with minimal impact on system resources. They generate and display interesting graphical statistics about a machine's workload; the only drawback so far is that the statistics are not displayed in real time without refreshing the browser.
