Redirect a Website URL from One Server to Different Server in Apache

As promised in our previous two articles (Perform Internal Redirection with mod_rewrite and Show Custom Content Based on Browser), in this post we will explain how to perform a redirection to a resource that has been moved from one server to a different server, using Apache's mod_rewrite module.

Suppose you are redesigning your company’s Intranet site. You have decided to store the content and styling (HTML files, JavaScript, and CSS) on one server and the documentation on another – perhaps a more robust one.

Suggested Read: 5 Tips to Boost the Performance of Your Apache Web Server

However, you want this change to be transparent to your users so that they are still able to access the docs at the usual URL.

In the following example, a file named assets.pdf has been moved from /var/www/html in 192.168.0.100 (hostname: web) to the same location in 192.168.0.101 (hostname: web2).

In order for users to access this file when they browse to 192.168.0.100/assets.pdf, open Apache’s configuration file on 192.168.0.100 and add the following rewrite rule (or you can also add the following rule to your .htaccess file):

RewriteRule "^(/assets\.pdf$)" "http://192.168.0.101$1"  [R,L]

where $1 is a placeholder for anything that matches the regular expression inside parentheses.
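
If you later need to redirect an entire directory that has moved, rather than a single file, the same capture-and-append approach applies. A minimal sketch (the /docs/ path below is only an illustrative assumption, not part of the example above):

RewriteRule "^(/docs/.*)$" "http://192.168.0.101$1"  [R,L]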

Now save changes, don’t forget to restart Apache, and let’s see what happens when we attempt to access assets.pdf by browsing to 192.168.0.100/assets.pdf:

Suggested Read: 25 Useful ‘.htaccess’ Tricks for Websites

In the log check below we can see that the request that was made for assets.pdf on 192.168.0.100 was actually handled by 192.168.0.101.

# tail -n 1 /var/log/apache2/access.log

Check Apache Logs

In this article we have discussed how to perform a redirection to a resource that has been moved to a different server. To wrap up, I’d strongly suggest you take a look at the mod_rewrite guide and Apache redirect guide for future reference.

As always, feel free to use the comment form below if you have any concerns about this article. We look forward to hearing from you!

Source

How to Use Python ‘SimpleHTTPServer’ to Create Webserver or Serve Files Instantly

SimpleHTTPServer is a Python module which allows you to instantly create a web server or serve your files in a snap. The main advantage of SimpleHTTPServer is that you don’t need to install anything: the Python interpreter comes pre-installed by default on almost all Linux distributions.

You can also use SimpleHTTPServer as a file sharing method. You just have to run the module in the directory where your shareable files are located. I will show you several demonstrations in this article using various options.

Step 1: Check for Python Installation

1. Check whether python is installed on your server or not by issuing the command below.

# python -V

OR

# python  --version

It will show you the version of the python interpreter you’ve got and it will give you an error message if it is not installed.

Check Python Version

2. You’re lucky if it was there by default; less work for you. If it was not installed by any chance, install it using the steps below.

If you have a SUSE distribution, type yast in the terminal –> Go to Software Management –> Type 'python' without quotes –> select the python interpreter –> press the space key to select it –> and then install it.

Simple as that. For that, you need to have the SUSE ISO mounted and configured as a repo in YaST, or you can simply install python from the web.

Install Python on Suse

If you’re using a different operating system such as RHEL, CentOS, Debian, Ubuntu or another Linux distribution, you can just install python using yum or apt.

In my case I use SLES 11 SP3, and the python interpreter comes installed by default on it. In most cases you won’t have to worry about installing the python interpreter on your server.

Step 2: Create a Test Directory and Enable SimpleHTTPServer

3. Create a test directory where you won’t mess with system files. In my case I have a partition called /x01; I created a directory called tecmint in it and added some test files for testing.

Create Testing Directory

4. Your prerequisites are ready now. All you have to do is try python’s SimpleHTTPServer module by issuing the command below within your test directory (in my case, /x01/tecmint/).

# python -m SimpleHTTPServer

Enable SimpleHTTPServer

5. After starting SimpleHTTPServer successfully, it will serve files through port number 8000. You just have to open up a web browser and enter ip_address:port_number (in my case it’s 192.168.5.67:8000).

SimpleHTTPServer Directory Listing

6. Now click on the 'tecmint' link to browse the files and directories of the tecmint directory; see the screen below for reference.

Browse Directory Files

7. SimpleHTTPServer serves your files successfully. You can see what happened in the terminal where you executed the command, after you have accessed your server through the web browser.

Python SimpleHTTPServer Status

Step 3: Changing SimpleHTTPServer Port

8. By default python’s SimpleHTTPServer serves files and directories through port 8000, but you can define a different port number (Here I am using port 9999) as you desire with the python command as shown below.

# python -m SimpleHTTPServer 9999

Change SimpleHTTPServer Port

Directory Listing on Different Port
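
Note that newer distributions ship only Python 3, where the SimpleHTTPServer module has been merged into http.server; the equivalent command there is:

# python3 -m http.server 9999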

Step 4: Serve Files from Different Location

9. Now that you have tried it, you might like to serve your files from a specific location without actually going to that path.

As an example, if you are in your home directory and you want to serve the files in the /x01/tecmint/ directory without cd-ing into /x01/tecmint, let’s see how we can do this.

# pushd /x01/tecmint/; python -m SimpleHTTPServer 9999; popd;

Serve Files from Location

Directory Listing on Different Port
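
If your system has Python 3.7 or later, the http.server module also accepts a --directory option, which avoids the pushd/popd trick altogether (a sketch using the same test directory):

# python3 -m http.server 9999 --directory /x01/tecmint/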

Step 5: Serve HTML Files

10. If there is an index.html file located in your serving location, the python interpreter will automatically detect it and serve the HTML file instead of the directory listing.

Let’s have a look at it. In my case I put a simple HTML snippet in a file named index.html and place it in /x01/tecmint/.

<html>
<head><title>TECMINT</title></head>
<body text="blue"><H1>
Hi all. SimpleHTTPServer works fine.
</H1>
<p><a href="https://www.tecmint.com">Visit TECMINT</a></p>
</body>
</html>

Create Index File

Now save it and run SimpleHTTPServer on /x01/tecmint and go to the location from a web browser.

# pushd /x01/tecmint/; python -m SimpleHTTPServer 9999; popd;

Enable Index Page

Serving Index Page

Very simple and handy. You can serve your files or your own HTML code in a snap, and the best thing is that you don’t have to worry about installing anything at all. In a scenario where you want to share a file with someone, you don’t have to copy the file to a shared location or make your directories shareable.

Just run SimpleHTTPServer on it and it is done. There are a few things you have to keep in mind when using this python module. While it serves files it runs in the terminal and prints out what happens there. When you access it from a browser or download a file from it, it shows the IP address that accessed it, the file that was downloaded, and so on. Very handy, isn’t it?

If you want to stop serving, you will have to stop the running module by pressing ctrl+c. So now you know how to use python’s SimpleHTTPServer module as a quick solution to serve your files. Comments below with suggestions and new findings are always welcome and help enhance future articles.

Reference Links

SimpleHTTPServer Docs

Source

3 Ways to Check Apache Server Status and Uptime in Linux

Apache is the world’s most popular, cross-platform HTTP web server that is commonly used on Linux and Unix platforms to deploy and run web applications or websites. Importantly, it’s easy to install and has a simple configuration as well.

Read Also: How to Hide Apache Version Number and Other Sensitive Info

In this article, we will show how to check Apache web server uptime on a Linux system using different methods/commands explained below.

1. Systemctl Utility

Systemctl is a utility for controlling the systemd system and service manager; it is used to start, restart and stop services, and beyond. The systemctl status sub-command, as the name states, is used to view the status of a service; you can use it for the above purpose like so:

$ sudo systemctl status apache2	  #Debian/Ubuntu 
# systemctl status httpd	  #RHEL/CentOS/Fedora 

Check Apache Status Using Systemctl
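
Since we are interested in uptime, you can also ask systemd directly when the unit last entered the active state; a quick sketch (adjust the unit name to your distribution):

$ sudo systemctl show apache2 --property=ActiveEnterTimestamp	  #Debian/Ubuntu
# systemctl show httpd --property=ActiveEnterTimestamp	  #RHEL/CentOS/Fedora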

2. Apachectl Utility

Apachectl is a control interface for the Apache HTTP server. This method requires the mod_status module (which displays information about how the server is performing, including its uptime) to be installed and enabled, which is the default setting.

On Debian/Ubuntu

The server-status component is enabled by default using the file /etc/apache2/mods-enabled/status.conf.

$ sudo vi /etc/apache2/mods-enabled/status.conf

Apache Mod_Status Configuration

On RHEL/CentOS

To enable server-status component, create a file below.

# vi /etc/httpd/conf.d/server-status.conf

and add the following configuration.

<Location "/server-status">
    SetHandler server-status
    #Require  host  localhost		#uncomment to only allow requests from localhost 
</Location>

Save the file and close it. Then restart the web server.

# systemctl restart httpd

If you are primarily using a terminal, then you also need a command line web browser such as lynx or links.

$ sudo apt install lynx		#Debian/Ubuntu
# yum install links		#RHEL/CentOS

Then run the command below to check the Apache service uptime:

$ apachectl status

Check Apache Status Using Apache2ctl

Alternatively, use the URL below to view the Apache web server status information from a graphical web browser:

http://localhost/server-status
OR
http://SERVER_IP/server-status
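
Alternatively, apachectl can dump the full status report (not just the summary) straight to the terminal; this also relies on a text browser such as lynx or links being installed:

$ apachectl fullstatus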

3. ps Utility

ps is a utility which shows information concerning a selection of the active processes running on a Linux system; you can use it with the grep command to check Apache service uptime as follows.

Here, the flag:

  • -e – enables selection of every process on the system.
  • -o – is used to specify output (comm – command, etime – process execution time and user – process owner).
# ps -eo comm,etime,user | grep apache2
# ps -eo comm,etime,user | grep root | grep apache2
OR
# ps -eo comm,etime,user | grep httpd
# ps -eo comm,etime,user | grep root | grep httpd

The sample output below shows that the apache2 service has been running for 4 hours, 10 minutes and 28 seconds (only consider the one started by root).

Check Apache Uptime
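
If you also want the exact date and time the process was started, ps can print the lstart field alongside the elapsed time (same process names as above):

# ps -eo comm,lstart,etime,user | grep apache2
# ps -eo comm,lstart,etime,user | grep httpd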

Lastly, check out more useful Apache web server guides:

    1. 13 Apache Web Server Security and Hardening Tips
    2. How to Check Which Apache Modules are Enabled/Loaded in Linux
    3. 5 Tips to Boost the Performance of Your Apache Web Server
    4. How to Password Protect Web Directories in Apache Using .htaccess File

In this article, we showed you three different ways to check Apache/HTTPD service uptime on a Linux system. If you have any questions or thoughts to share, do that via the comment section below.

Source

How to Monitor Apache Performance using Netdata on CentOS 7

Netdata is a free, open source, simple yet powerful and effective real-time system performance monitoring tool for Linux, FreeBSD and macOS. It supports various plugins for monitoring general server status, applications, web services such as the Apache or Nginx HTTP server, and much more.

Read Also: How to Monitor Nginx Performance Using Netdata on CentOS 7

In this article, we will explain how to monitor Apache HTTP server performance using Netdata performance monitoring tool on a CentOS 7 or RHEL 7 distribution. At the end of this article, you will be able to watch visualizations of requests, bandwidth, workers, and other Apache server metrics.

Requirements:

  1. CentOS 7 Server or RHEL 7 Server with Minimal Install.
  2. Apache HTTP server installation with mod_status module enabled.

Step 1: Install Apache on CentOS 7

1. First start by installing Apache HTTP server from the default software repositories using the YUM package manager.

# yum install httpd

2. After you have installed Apache web server, start it for the first time, check if it is up and running, and enable it to start automatically at system boot using following commands.

# systemctl start httpd
# systemctl enable httpd
# systemctl status httpd

3. If you are running a firewall for example firewalld, you need to open the ports 80 and 443 to allow web traffic to Apache via HTTP and HTTPS respectively, using the commands below.

# firewall-cmd --zone=public --permanent --add-port=80/tcp
# firewall-cmd --zone=public --permanent --add-port=443/tcp
# firewall-cmd --reload 

Step 2: Enable Mod_Status Module in Apache

4. In this step, you need to enable and configure the mod_status module in Apache; this is required by Netdata for gathering server status information and statistics.

Open the /etc/httpd/conf.modules.d/00-base.conf file using your favorite editor.

# vim /etc/httpd/conf.modules.d/00-base.conf

And ensure that the line below is uncommented to enable mod_status module, as shown in the screenshot.

Enable Mod_Status Module in Apache

5. Once you’ve enabled mod_status, next you need to create a server-status.conf configuration file for the Apache server status page.

# vim /etc/httpd/conf.d/server-status.conf

Add the following configuration inside the file.

<Location "/server-status">
    SetHandler server-status
    #Require host localhost           #uncomment to only allow requests from localhost 
</Location>

Save the file and close. Then restart the Apache HTTPD service.

# systemctl restart httpd

6. Next, you need to verify that the Apache server status and statistics page is working well by using a command-line web browser such as lynx as shown.

# yum install lynx
# lynx http://localhost/server-status   

Check Apache Server Status

Step 3: Install Netdata on CentOS 7

7. Fortunately, there is a kickstart shell script for painlessly installing netdata from its GitHub repository. This one-liner script downloads a second script which checks your Linux distribution and installs the required system packages for building netdata, then downloads the latest netdata source tree, builds it and installs it on your server.

You can start the kickstart script as shown; the all argument allows for installing the required packages for all netdata plugins, including the ones for the Apache HTTP server.

# bash <(curl -Ss https://my-netdata.io/kickstart.sh) all

Note that if you’re not administering your system as root, you will be prompted to enter your user password for the sudo command, and you will also be asked to confirm a number of actions by pressing [Enter].

Install Netdata on CentOS 7

8. Once the script has completed building and installing netdata, it will automatically start the netdata service via the systemd service manager and enable it to start at system boot.

Netdata Installation Summary

By default, netdata listens on port 19999, and you will access the web UI using this port. So, open port 19999 in the firewall to access the netdata web UI.

# firewall-cmd --permanent --add-port=19999/tcp
# firewall-cmd --reload 

Step 4: Configure Netdata to Monitor Apache Performance

9. The netdata configuration file for the Apache plugin is /etc/netdata/python.d/apache.conf. This file is written in YAML format; you can open it using your favorite editor.

# vim /etc/netdata/python.d/apache.conf

The default configuration is just enough to get you started with monitoring your Apache HTTP server.

Netdata Configuration for Apache
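
For reference, a job definition in this file essentially just points the plugin at the mod_status URL. A minimal sketch, assuming the default server-status location configured earlier (key names may vary slightly between netdata versions):

localhost:
  name : 'local'
  url  : 'http://localhost/server-status?auto'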

However, if you have read the documentation and made any changes to it, restart the netdata service for the changes to take effect.

# systemctl restart netdata 

Step 5: Monitor Apache Performance Using Netdata

10. Next, open a web browser and use the following URL to access the netdata web UI.

http://domain_name:19999
OR
http://SERVER_IP:19999

From the netdata dashboard, search for “Apache local” on the right hand side list of plugins, and click on it to start monitoring your Apache server. You will be able to watch visualizations of requests, bandwidth, workers, and other server statistics, as shown in the following screenshot.

Monitor Apache Performance Using Netdata

Netdata GitHub repository: https://github.com/firehol/netdata

That’s all! In this article, we’ve explained how to monitor Apache performance using Netdata on CentOS 7. If you have any questions or additional thoughts to share, please reach us via the comment form below.

Source

How to Enable NGINX Status Page

Nginx is a free open source, high-performance, reliable, scalable and fully extensible web server, load balancer and reverse proxy software. It has a simple and easy-to-understand configuration language. It also supports a multitude of modules both static (which have existed in Nginx since the first version) and dynamic (introduced in version 1.9.11).

One of the important modules in Nginx is the ngx_http_stub_status_module module which provides access to basic Nginx status information via a “status page”. It shows information such as total number of active client connections, those accepted, and those handled, total number of requests and number of reading, writing and waiting connections.

Read Also: Amplify – NGINX Monitoring Made Easy

On most Linux distributions, the Nginx version that ships comes with the ngx_http_stub_status_module enabled. You can check whether the module is already enabled or not using the following command.

# nginx -V 2>&1 | grep -o with-http_stub_status_module

Check Nginx Status Module

If you see --with-http_stub_status_module in the output, it means the status module is enabled. If the above command returns no output, you need to compile NGINX from source using --with-http_stub_status_module as a configuration parameter, as shown.

# wget http://nginx.org/download/nginx-1.13.12.tar.gz
# tar xfz nginx-1.13.12.tar.gz
# cd nginx-1.13.12/
# ./configure --with-http_stub_status_module
# make
# make install

After verifying the module, you will also need to enable stub_status module in the NGINX configuration file /etc/nginx/nginx.conf to set up a locally reachable URL (e.g., http://www.example.com/nginx_status) for the status page.

location /nginx_status {
 	stub_status;
 	allow 127.0.0.1;	#only allow requests from localhost
 	deny all;		#deny all other hosts	
 }

Enable Nginx Status Page

Make sure to replace 127.0.0.1 with your server’s IP address where needed, and also make sure that this page is accessible only to you.

After making configuration changes, make sure to check the nginx configuration for any errors and reload the nginx service for the recent changes to take effect, using the following commands.

# nginx -t
# nginx -s reload 

Check Nginx Configuration

After reloading the nginx server, you can now visit the Nginx status page at the URL below using the curl program to see your metrics.

# curl http://127.0.0.1/nginx_status
OR
# curl http://www.example.com/nginx_status

Check Nginx Status Page
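
The status page is a small block of plain-text counters; its output typically looks something like the following (the numbers here are purely illustrative):

Active connections: 2
server accepts handled requests
 16 16 31
Reading: 0 Writing: 1 Waiting: 1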

Important: The ngx_http_stub_status_module module has been superseded by the ngx_http_api_module module in Nginx 1.13.0.

Read Also: How to Enable PHP-FPM Status Page in Nginx

That’s all! In this article, we have shown how to enable the Nginx status page in Linux. Use the comment form below to ask any questions.

Source

How to Password Protect Web Directories in Nginx

Managers of web projects often need to protect their work one way or another. Often people ask how to password protect their website while it is still in development.

Nginx Password Protect Web Directory

In this tutorial, we are going to show you a simple but effective technique to password-protect a web directory when running Nginx as your web server.

In case you are using Apache web server, you can check our guide for password protecting a web directory:

  1. Password Protect Web Directories in Apache

Requirements

To complete the steps in this tutorial, you will need to have:

  • Nginx web server installed
  • Root access to the server

Step 1: Create User and Password

1. To password protect our web directory, we will need to create a file that will contain our username and encrypted password.

When using Apache, you can use the “htpasswd” utility. If you have that utility installed on your system, you can use this command to generate the password file:

# htpasswd -c /path/to/file/.htpasswd username

When running this command, you will be asked to set a password for the above user and after that the .htpasswd file will be created in the specified directory.

Create Nginx User Password File

2. If you don’t have that tool installed, you can create the .htpasswd file manually. The file should have the following syntax:

username:encrypted-password:comment

The username that you will use is up to you; choose whatever you like.

The more important part is the way that you will generate the password for that user.

Step 2: Generate Encrypted Password

3. To generate the password, use Perl’s integrated “crypt” function.

Here is an example of that command:

# perl -le 'print crypt("your-password", "salt-hash")'

A real life example:

# perl -le 'print crypt("#12Dfsaa$fa", "1xzcq")'

Generate Encrypted Password
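
If you prefer not to use Perl and the openssl command is available on your system, it can also produce a password hash that Nginx understands (a sketch; it will prompt you to type the password):

# openssl passwd -apr1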

Now open a file and put your username and the generated string in it, separated by a colon.

Here is how:

# vi /home/tecmint/.htpasswd

Put your username and password. In my case it looks like this:

tecmint:1xV2Rdw7Q6MK.

Save the file by hitting “Esc” followed by “:wq”.

Add Encrypted Password to htpasswd

Step 3: Update Nginx Configuration

4. Now open and edit the Nginx configuration file associated with the site you are working on. In our case we will use the default file at:

# vi /etc/nginx/conf.d/default.conf       [For CentOS based systems]
OR
# vi /etc/nginx/nginx.conf                [For CentOS based systems]


# vi /etc/nginx/sites-enabled/default     [For Debian based systems]

In our example, we will password protect the document root for nginx, which is: /usr/share/nginx/html.

5. Now add the following two lines inside the server or location block for the path you wish to protect.

auth_basic "Administrator Login";
auth_basic_user_file /home/tecmint/.htpasswd;

Password Protect Nginx Directory
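
For reference, a complete location block carrying these directives might look roughly like this (a sketch; the root path is an assumption based on the default document root mentioned above):

location / {
    root   /usr/share/nginx/html;
    auth_basic           "Administrator Login";
    auth_basic_user_file /home/tecmint/.htpasswd;
}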

Now save the file and restart Nginx with:

# systemctl restart nginx
OR
# service nginx restart

6. Now open your server’s IP address in your browser and you should be asked for a password:

Nginx Password Protect Login

That’s it! Your main web directory is now protected. When you want to remove the password protection from the site, simply remove the two lines that you just added to the Nginx configuration file, or use the following command to remove the added user from the password file.

# htpasswd -D /path/to/file/.htpasswd username

Source

How to Limit the Network Bandwidth Used by Applications in a Linux System with Trickle

Have you ever encountered a situation where one application dominated all your network bandwidth? If you have ever been in a situation where one application ate all your traffic, then you will value the role of the trickle bandwidth shaper application. Whether you are a system admin or just a Linux user, you need to learn how to control the upload and download speeds of applications to make sure that your network bandwidth is not burned by a single application.

Install Trickle Bandwidth Limit in Linux

What is Trickle?

Trickle is a network bandwidth shaper tool that allows us to manage the upload and download speeds of applications in order to prevent any single one of them from hogging all (or most) of the available bandwidth. In a few words, trickle lets you control the network traffic rate on a per-application basis, as opposed to per-user control, which is the classic example of bandwidth shaping in a client-server environment and is probably the setup we are more familiar with.

How Does Trickle Work?

In addition, trickle can help us to define priorities on a per-application basis, so that when overall limits have been set for the entire system, priority apps will still get more bandwidth automatically. To accomplish this task, trickle sets traffic limits to the way in which data is sent to, and received from, sockets using TCP connections. We must note that, other than the data transfer rates, trickle does not modify in any way the behavior of the process it is shaping at any given moment.

What Can’t Trickle do?

The only limitation, so to speak, is that trickle will not work with statically linked applications or binaries with the SUID or SGID bits set since it uses dynamic linking and loading to place itself between the shaped process and its associated network socket. Trickle then acts as a proxy between these two software components.

Since trickle does not require superuser privileges in order to run, users can set their own traffic limits. Since this may not be desirable, we will explore how to set overall limits that system users cannot exceed. In other words, users will still be able to manage their traffic rates, but always within the boundaries set by the system administrator.

In this article we will explain how to limit the network bandwidth used by applications on a Linux server with trickle. To generate the necessary traffic, we will use ncftpput and ncftpget (both tools are available by installing ncftp) on the client (CentOS 7 server – dev1: 192.168.0.17), and vsftpd on the server (Debian Wheezy 7.5 – dev2: 192.168.0.15) for demonstration purposes. The same instructions also work on RedHat, Fedora and Ubuntu based systems.

Prerequisites

1. For RHEL/CentOS 7/6, enable the EPEL repository. Extra Packages for Enterprise Linux (EPEL) is a repository of high-quality free and open-source software maintained by the Fedora project and is 100% compatible with its spinoffs, such as Red Hat Enterprise Linux and CentOS. Both trickle and ncftp are made available from this repository.

2. Install ncftp as follows:

# yum update && sudo yum install ncftp		[On RedHat based systems]
# aptitude update && aptitude install ncftp	[On Debian based systems]	

3. Set up an FTP server on a separate machine. Please note that although FTP is inherently insecure, it is still widely used in cases where security in uploading or downloading files is not needed. We are using it in this article to illustrate the benefits of trickle and because it shows the transfer rates on stdout on the client, and we will leave the discussion of whether it should or should not be used for another date and time :).

# yum update && yum install vsftpd 		[On RedHat based systems]
# aptitude update && aptitude install vsftpd 	[On Debian based systems]

Now, edit the /etc/vsftpd/vsftpd.conf file on the FTP server as follows:

anonymous_enable=NO
local_enable=YES
chroot_local_user=YES
allow_writeable_chroot=YES

After that, make sure to start vsftpd for your current session and to enable it for automatic start on future boots:

# systemctl start vsftpd 		[For systemd-based systems]
# systemctl enable vsftpd
# service vsftpd start 			[For init-based systems]
# chkconfig vsftpd on

4. If you chose to set up the FTP server in a CentOS/RHEL 7 droplet with SSH keys for remote access, you will need a password-protected user account with the appropriate directory and file permissions for uploading and downloading the desired content OUTSIDE root’s home directory.

You can then browse to your home directory by entering the following URL in your browser. A login window will pop up prompting you for a valid user account and password on the FTP server.

ftp://192.168.0.15

If the authentication succeeds, you will see the contents of your home directory. Later in this tutorial you will be able to refresh that page to display the files that have been uploaded during previous steps.

FTP Directory Tree

How to Install Trickle in Linux

1. Install trickle via yum or aptitude.

To ensure a successful installation, it is considered good practice to make sure the currently installed packages are up-to-date (using yum update) before installing the tool itself.

# yum -y update && yum install trickle 		        [On RedHat based systems]
# aptitude -y update && aptitude install trickle 	[On Debian based systems]

2. Verify whether trickle will work with the desired binary.

As we explained earlier, trickle will only work with binaries using dynamic, or shared, libraries. To verify whether we can use this tool with a certain application, we can use the well-known ldd utility, where ldd stands for list dynamic dependencies. Specifically, we will look for the presence of glibc (the GNU C library) in the list of dynamic dependencies of any given program because it is precisely that library which defines the system calls involved in communication through sockets.

Run the following command against a given binary to see if trickle can be used to shape its bandwidth:

# ldd $(which [binary]) | grep libc.so

For example,

# ldd $(which ncftp) | grep libc.so

whose output is:

# libc.so.6 => /lib64/libc.so.6 (0x00007efff2e6c000)

The string between brackets in the output may change from system to system and even between subsequent runs of the same command, since it represents the load address of the library in physical memory.

If the above command does not return any results, it means that the binary it was run against does not use libc, and thus trickle cannot be used as a bandwidth shaper in that case.

Learn How to Use Trickle

The most basic usage of trickle is in standalone mode. Using this approach, trickle is used to explicitly define the download and upload speeds of a given application. As we explained earlier, for the sake of brevity, we will use the same application for download and upload tests.

Running Trickle in Standalone Mode

We will compare the download and upload speeds with and without using trickle. The -d option indicates the download speed in KB/s, while the -u flag tells trickle to limit the upload speed by the same unit. In addition, we will use the -s flag, which specifies that trickle should run in standalone mode.

The basic syntax to run trickle in standalone mode is as follows:

# trickle -s -d [download rate in KB/s] -u [upload rate in KB/s]

In order to perform the following examples on your own, make sure to have trickle and ncftp installed on the client machine (192.168.0.17 in my case).

Example 1: Uploading a 2.8 MB PDF file with and without trickle.

We are using the freely-distributable Linux Fundamentals PDF file (available from here) for the following tests.

You can initially download this file to your current working directory with the following command:

# wget http://linux-training.be/files/books/LinuxFun.pdf 

The syntax to upload a file to our FTP server without trickle is as follows:

# ncftpput -u username -p password 192.168.0.15  /remote_directory local-filename 

Where /remote_directory is the path of the upload directory relative to username’s home, and local-filename is a file in your current working directory.

Specifically, without trickle we get a peak upload speed of 52.02 MB/s (please note that this is not the real average upload speed, but an instant starting peak), and the file gets uploaded almost instantly:

# ncftpput -u username -p password 192.168.0.15  /testdir LinuxFun.pdf 

Output:

LinuxFun.pdf:                                        	2.79 MB   52.02 MB/s

With trickle, we will limit the upload transfer rate at 5 KB/s. Before uploading the file for the second time, we need to delete it from the destination directory; otherwise, ncftp will inform us that the file at the destination directory is the same that we are trying to upload, and will not perform the transfer:

# rm /absolute/path/to/destination/directory/LinuxFun.pdf 

Then:

# trickle -s -u 5 ncftpput -u username -p password 111.111.111.111 /testdir LinuxFun.pdf 

Output:

LinuxFun.pdf:                                        	2.79 MB	4.94 kB/s

In the example above, we can see that the average upload speed dropped to ~5 KB/s.

Example 2: Downloading the same 2.8 MB PDF file with and without trickle

First, remember to delete the PDF from the original source directory:

# rm /absolute/path/to/source/directory/LinuxFun.pdf 

Please note that the following cases will download the remote file to the current directory in the client machine. This fact is indicated by the period (‘.‘) that appears after the IP address of the FTP server.

Without trickle:

# ncftpget -u username -p  password 111.111.111.111 . /testdir/LinuxFun.pdf 

Output:

LinuxFun.pdf:                                        	2.79 MB  260.53 MB/s

With trickle, limiting the download speed to 30 KB/s:

# trickle -s -d 30 ncftpget -u username -p password 111.111.111.111 . /testdir/LinuxFun.pdf 

Output:

LinuxFun.pdf:                                        	2.79 MB   17.76 kB/s

Running Trickle in Supervised (Managed) Mode

Trickle can also run in supervised (managed) mode, following a series of parameters defined in /etc/trickled.conf. This file defines how trickled (the daemon) behaves and manages trickle.

In addition, if we want to set global settings to be used, overall, by all applications, we will need to use the trickled command. This command runs the daemon and allows us to define download and upload limits that will be shared by all the applications run through trickle without us needing to specify limits each time.

For example, running:

# trickled -d 50 -u 10

Will cause the download and upload speeds of any application run through trickle to be limited to 50 KB/s and 10 KB/s, respectively.

Please note that you can check at any time whether trickled is running and with what arguments:

# ps -ef | grep trickled | grep -v grep

Output:

root 	16475 	1  0 Dec24 ?    	00:00:04 trickled -d 50 -u 10
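
The per-application priorities mentioned earlier are declared in /etc/trickled.conf. A minimal hypothetical sketch (the service names here are examples only; consult the trickled.conf man page for the exact syntax on your distribution):

[ssh]
Priority = 1

[ftp]
Priority = 8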

How to Get the Size of a Directory in Linux

When listing the contents of a directory using the ls command, you may have noticed that the size of the directories is almost always 4096 bytes (4 KB). That’s the size of space on the disk that is used to store the meta information for the directory, not what it contains.

The command you’ll want to use to get the actual size of a directory is du which is short for “disk usage”. We’ll show you how to use this command.

The du command displays the amount of file space used by the specified files or directories. If the specified path is a directory, du will summarize disk usage of each file and subdirectory in that directory. If no path is specified, du will report the disk usage of the current working directory.

If you run du without any options, it will display the disk usage of the specified directory and each of its subdirectories (in 1024-byte blocks by default).

In most cases, you would want to display only the space occupied by the directory in a human-readable format. For example, to get the total size of the /var directory, you would run the following command:

sudo du -sh /var

The output will look something like this.

85G	/var

Let’s explain the command and its arguments:

  • The command starts with sudo because most of the files and directories inside the /var directory are owned by the root user and are not readable by the regular users. If you omit sudo the du command will print “du: cannot read directory”.
  • -s – Display only the total size of the specified directory; do not display file size totals for subdirectories.
  • -h – Print sizes in a human-readable format.
  • /var – The path to the directory whose size you want to get.

What if you want to display the disk usage of the first-level subdirectories? You have two options. The first one is to use the asterisk symbol as shown below, which means “everything that doesn’t start with a period (.)”. The -c switch tells du to print a grand total of all sizes:

sudo du -shc /var/*
5.0G	/var/cache
24K	/var/db
4.0K	/var/empty
4.0K	/var/games
77G	/var/lib
4.0K	/var/local
0	/var/lock
3.3G	/var/log
0	/var/mail
4.0K	/var/opt
0	/var/run
196K	/var/spool
28K	/var/tmp
85G	total
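
Keep in mind that the asterisk does not match hidden entries. If you also want dot-directories counted, you can add a second glob for them (a bash sketch; errors from unmatched globs are discarded):

sudo du -shc /var/* /var/.[!.]* 2>/dev/null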

Another option is to use the --max-depth switch and specify the subdirectories level:

sudo du -h --max-depth=1 /var
77G	  /var/lib
24K	  /var/db
4.0K	/var/empty
4.0K	/var/local
4.0K	/var/opt
196K	/var/spool
4.0K	/var/games
3.3G	/var/log
5.0G	/var/cache
28K	/var/tmp
85G	/var
85G	total

By default, the du utility shows the disk space used by the directory or file. The “apparent size” of a file is how much data is actually in the file.

To find the apparent size of a directory use the --apparent-size switch.

sudo du -sh --apparent-size /var

When you transfer a directory via SCP, Rsync or SFTP, the amount of data that will be transferred over the network is the apparent size of the files. This is why the disk space used on the source, as displayed by du (without --apparent-size), will not be the same as the size on the target.

The du command can also be combined with other commands with pipes. For example, to print the 5 largest directories in the /var directory you would use:

sudo du -h /var/ | sort -rh | head -5

85G	/var/
77G	/var/lib
75G	/var/lib/libvirt/images
75G	/var/lib/libvirt
5.0G	/var/cache/pacman/pkg
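
A similar pipeline can rank individual files instead of directories by letting find feed du (a sketch):

sudo find /var -type f -exec du -h {} + | sort -rh | head -5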

In this tutorial, you learned how to get the size of a directory using the du command. If you have any question or remark, please leave a comment below.

Source

A Basic Guide to Linux Boot Process

As promised in our earlier post, in this post we are going to review the boot process of the Linux operating system and how it passes through the different booting stages. This article is written for readers who have just stepped into the Linux world. Understanding how Linux boots up is very important for troubleshooting effectively in case of system failure. When a system is switched on, after a few moments we get a login prompt; have you ever tried to find out which stages of the boot sequence were crossed and what happened behind the scenes while the system booted up?

Linux Boot Process

Power on

  1. BIOS (Basic Input Output System) is a software program that comes pre-built into the motherboard chipset.
  2. BIOS loads and scans for devices such as the Hard Disk, CD-ROM, RAM, etc.
  3. BIOS searches for the MBR (Master Boot Record: the 1st sector) of the primary hard drive, scans for the 1st stage loader (in our case the boot loader is GRUB or LILO) and hands over the responsibility to the MBR.
  4. The Boot PROM/FLASH/BIOS is capable of loading the MBR into RAM and executing it.

MBR (Master Boot Record)

  • 512 bytes of space –> MBR
  • The MBR contains the loader information of most operating systems, e.g. UNIX, Linux and Windows.
  • The MBR holds the small binary information of the 1st stage loader.
  • The MBR occupies the first physical sector of the first disk drive (512 bytes) and is not part of any partition.
  • Placed on the primary disk drive, in the first sector of the first cylinder (track 0, head 0); this whole area is generally reserved for boot programs.
  • The MBR contains a small executable program and a table specifying the primary partitions:
Boot Code (GRUB) 446 bytes
partition 1: 16 bytes
partition 2: 16 bytes
partition 3: 16 bytes
partition 4: 16 bytes
magic Number: 2 bytes
  1. The MBR also records which primary partition is ACTIVE.
  2. The BIOS hands control over to the first stage boot loader, which then scans the partition table and finds the second stage boot loader on the partition configured as bootable.

Boot Loader

  1. The second stage boot loader is loaded into RAM by the 1st stage loader. All this happens in milliseconds.
  2. The default stage 2 boot loader is GRUB (GRand Unified Boot loader) or LILO (Linux Loader).
  3. Once GRUB is loaded into RAM, it searches for the location of the kernel.
  4. GRUB scrutinizes the map file to find the kernel image, which is located under /boot, and loads it.
  5. GRUB loads the kernel (vmlinuz-version) from the /boot partition.

Trivia 1

GRUB sets up a RAMDISK for the initrd (a RAMDISK is reserved space carved out of RAM). In addition, it loads the initrd into RAM to prepare the kernel for loading itself and its dependent modules into memory, so that it can hand the system over to the “init” process.

In Linux, most of the drivers are pre-built as modules; the initial RAM drive (initrd.img) keeps the information about these additional modules. So, when the kernel boots, it creates the RAM drive and loads initrd.img along with its dependent modules.

GRUB reads /boot/grub/grub.conf & shows us a clean interface for selecting Operating System

Once the kernel has loaded its dependent modules, it hands control over to the “init” process. The kernel image contains a small, unpacked program that un-compresses the kernel and runs it.

Trivia 2

LILO needs to be written to the MBR in order to locate operating systems on the hard drive. Any modification made to /etc/lilo.conf must be written back to the MBR, but in GRUB‘s case there is no need to update the MBR, since it reads directly from the file /boot/grub/grub.conf.

After making changes in /etc/lilo.conf, we’ll have to update the MBR manually

# /sbin/lilo -v

Trivia 3

The GRUB second stage loader resides within the MBR and within /boot partition. Once GRUB is loaded into memory it becomes 2nd stage loader.

Trivia 4

The /initrd directory should not be removed; it is a temporary placeholder that gives the kernel quick access to the modules it needs to start the system. These modules include device drivers.

Kernel initialization highlights include:

  1. initialize CPU components, eg, MMU
  2. initialize the scheduler (PID 0)
  3. mount the root filesystem in rw mode
  4. fork off the init process (PID 1)

In essence, kernel initialization does two things:

  1. Start the core system of shared resource managers (RAM, processor and mass storage).
  2. Starts a single process, /sbin/init.

The init process (/sbin/init) is the very first process; it loads all the various daemons and mounts all the partitions listed under /etc/fstab.

About /etc/inittab

  1. The /sbin/init process reads the /etc/inittab file.
  2. Sets the default runlevel (the telinit command allows administrators to tell the init process to change its current runlevel; see the commands shown after this list).
  3. Calls /etc/rc.d/rc.sysinit and /etc/rc.d/rc x (where ‘x‘ is a runlevel).
  4. In the /etc/rc.d/rc5.d directory, files starting with the letter K are kill scripts and files starting with the letter S are startup scripts.
  5. Starts up the tty processes and xdm (the X display manager).
  6. Starts the user’s login screen.
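
To see which runlevel the system finally booted into, the runlevel and who commands are commonly available:

# runlevel
# who -r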

Source

How to Scan for Rootkits, backdoors and Exploits Using ‘Rootkit Hunter’ in Linux

Guys, if you are a regular reader of tecmint.com you will notice that this is our third article on security tools. In our previous two articles we gave you guidance on how to secure Apache and Linux systems from Malware, DOS and DDOS attacks using mod_security, mod_evasive and LMD (Linux Malware Detect).

Again we are here to introduce a new security tool called Rkhunter (Rootkit Hunter). This article will guide you through installing and configuring RKH (RootKit Hunter) on Linux systems from source code.

Rootkit Hunter – Scans Linux Systems for Rootkits, Backdoors and Local Exploits

What Is Rkhunter?

Rkhunter (Rootkit Hunter) is an open source Unix/Linux based scanner tool, released under the GPL, that scans for backdoors, rootkits and local exploits on your systems.

It scans for hidden files, wrong permissions set on binaries, suspicious strings in the kernel, etc. To know more about Rkhunter and its features visit http://www.rootkit.nl/.

Install Rootkit Hunter Scanner in Linux Systems

Step 1: Downloading Rkhunter

First download the latest stable version of the Rkhunter tool by going to http://www.rootkit.nl/projects/rootkit_hunter.html or use the following wget command to download it on your system.

# cd /tmp
# wget http://downloads.sourceforge.net/project/rkhunter/rkhunter/1.4.2/rkhunter-1.4.2.tar.gz

Step 2: Installing Rkhunter

Once you have downloaded the latest version, run the following commands as a root user to install it.

# tar -xvf rkhunter-1.4.2.tar.gz 
# cd rkhunter-1.4.2
# ./installer.sh --layout default --install
Sample Output
Checking system for:
 Rootkit Hunter installer files: found
 A web file download command: wget found
Starting installation:
 Checking installation directory "/usr/local": it exists and is writable.
 Checking installation directories:
  Directory /usr/local/share/doc/rkhunter-1.4.2: creating: OK
  Directory /usr/local/share/man/man8: exists and is writable.
  Directory /etc: exists and is writable.
  Directory /usr/local/bin: exists and is writable.
  Directory /usr/local/lib64: exists and is writable.
  Directory /var/lib: exists and is writable.
  Directory /usr/local/lib64/rkhunter/scripts: creating: OK
  Directory /var/lib/rkhunter/db: creating: OK
  Directory /var/lib/rkhunter/tmp: creating: OK
  Directory /var/lib/rkhunter/db/i18n: creating: OK
  Directory /var/lib/rkhunter/db/signatures: creating: OK
 Installing check_modules.pl: OK
 Installing filehashsha.pl: OK
 Installing stat.pl: OK
 Installing readlink.sh: OK
 Installing backdoorports.dat: OK
 Installing mirrors.dat: OK
 Installing programs_bad.dat: OK
 Installing suspscan.dat: OK
 Installing rkhunter.8: OK
 Installing ACKNOWLEDGMENTS: OK
 Installing CHANGELOG: OK
 Installing FAQ: OK
 Installing LICENSE: OK
 Installing README: OK
 Installing language support files: OK
 Installing ClamAV signatures: OK
 Installing rkhunter: OK
 Installing rkhunter.conf: OK
Installation complete

Step 3: Updating Rkhunter

Run the RKH updater to update the data files and fill the file properties database by running the following commands.

# /usr/local/bin/rkhunter --update
# /usr/local/bin/rkhunter --propupd
Sample Output
[ Rootkit Hunter version 1.4.2 ]

Checking rkhunter data files...
  Checking file mirrors.dat                                  [ No update ]
  Checking file programs_bad.dat                             [ Updated ]
  Checking file backdoorports.dat                            [ No update ]
  Checking file suspscan.dat                                 [ No update ]
  Checking file i18n/cn                                      [ No update ]
  Checking file i18n/de                                      [ No update ]
  Checking file i18n/en                                      [ No update ]
  Checking file i18n/tr                                      [ No update ]
  Checking file i18n/tr.utf8                                 [ No update ]
  Checking file i18n/zh                                      [ No update ]
  Checking file i18n/zh.utf8                                 [ No update ]

[ Rootkit Hunter version 1.4.2 ]
File created: searched for 174 files, found 137

Step 4: Setting Cronjob and Email Alerts

Create a file called rkhunter.sh under /etc/cron.daily/, which will scan your file system every day and send email notifications to your email address. Create the following file with the help of your favourite editor.

# vi /etc/cron.daily/rkhunter.sh

Add the following lines of code to it and replace “PutYourServerNameHere” with your “Server Name” and “your@email.com” with your “Email Id“.

#!/bin/sh
(
/usr/local/bin/rkhunter --versioncheck
/usr/local/bin/rkhunter --update
/usr/local/bin/rkhunter --cronjob --report-warnings-only
) | /bin/mail -s 'rkhunter Daily Run (PutYourServerNameHere)' your@email.com

Set execute permission on the file.

# chmod 755 /etc/cron.daily/rkhunter.sh
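
Before relying on cron, you can run the script once by hand to confirm that scanning and mail delivery actually work (this assumes the mail command is configured on your system):

# sh /etc/cron.daily/rkhunter.sh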

Step 5: Manual Scan and Usage

To scan the entire file system, run the Rkhunter as a root user.

# rkhunter --check
Sample Output
[ Rootkit Hunter version 1.4.2 ]

Checking system commands...

  Performing 'strings' command checks
    Checking 'strings' command                               [ OK ]

  Performing 'shared libraries' checks
    Checking for preloading variables                        [ None found ]
    Checking for preloaded libraries                         [ None found ]
    Checking LD_LIBRARY_PATH variable                        [ Not found ]

  Performing file properties checks
    Checking for prerequisites                               [ OK ]
    /usr/local/bin/rkhunter                                  [ OK ]
    /usr/sbin/adduser                                        [ OK ]
    /usr/sbin/chkconfig                                      [ OK ]
    /usr/sbin/chroot                                         [ OK ]
    /usr/sbin/depmod                                         [ OK ]
    /usr/sbin/fsck                                           [ OK ]
    /usr/sbin/fuser                                          [ OK ]
    /usr/sbin/groupadd                                       [ OK ]
    /usr/sbin/groupdel                                       [ OK ]
    /usr/sbin/groupmod                                       [ OK ]
    /usr/sbin/grpck                                          [ OK ]
    /usr/sbin/ifconfig                                       [ OK ]
    /usr/sbin/ifdown                                         [ Warning ]
    /usr/sbin/ifup                                           [ Warning ]
    /usr/sbin/init                                           [ OK ]
    /usr/sbin/insmod                                         [ OK ]
    /usr/sbin/ip                                             [ OK ]
    /usr/sbin/lsmod                                          [ OK ]
    /usr/sbin/lsof                                           [ OK ]
    /usr/sbin/modinfo                                        [ OK ]
    /usr/sbin/modprobe                                       [ OK ]
    /usr/sbin/nologin                                        [ OK ]
    /usr/sbin/pwck                                           [ OK ]
    /usr/sbin/rmmod                                          [ OK ]
    /usr/sbin/route                                          [ OK ]
    /usr/sbin/rsyslogd                                       [ OK ]
    /usr/sbin/runlevel                                       [ OK ]
    /usr/sbin/sestatus                                       [ OK ]
    /usr/sbin/sshd                                           [ OK ]
    /usr/sbin/sulogin                                        [ OK ]
    /usr/sbin/sysctl                                         [ OK ]
    /usr/sbin/tcpd                                           [ OK ]
    /usr/sbin/useradd                                        [ OK ]
    /usr/sbin/userdel                                        [ OK ]
    /usr/sbin/usermod                                        [ OK ]
....
[Press <ENTER> to continue]


Checking for rootkits...

  Performing check of known rootkit files and directories
    55808 Trojan - Variant A                                 [ Not found ]
    ADM Worm                                                 [ Not found ]
    AjaKit Rootkit                                           [ Not found ]
    Adore Rootkit                                            [ Not found ]
    aPa Kit                                                  [ Not found ]
.....

[Press <ENTER> to continue]


  Performing additional rootkit checks
    Suckit Rootkit additional checks                         [ OK ]
    Checking for possible rootkit files and directories      [ None found ]
    Checking for possible rootkit strings                    [ None found ]

....
[Press <ENTER> to continue]


Checking the network...

  Performing checks on the network ports
    Checking for backdoor ports                              [ None found ]
....
  Performing system configuration file checks
    Checking for an SSH configuration file                   [ Found ]
    Checking if SSH root access is allowed                   [ Warning ]
    Checking if SSH protocol v1 is allowed                   [ Warning ]
    Checking for a running system logging daemon             [ Found ]
    Checking for a system logging configuration file         [ Found ]
    Checking if syslog remote logging is allowed             [ Not allowed ]
...
System checks summary
=====================

File properties checks...
    Files checked: 137
    Suspect files: 6

Rootkit checks...
    Rootkits checked : 383
    Possible rootkits: 0

Applications checks...
    Applications checked: 5
    Suspect applications: 2

The system checks took: 5 minutes and 38 seconds

All results have been written to the log file: /var/log/rkhunter.log

One or more warnings have been found while checking the system.
Please check the log file (/var/log/rkhunter.log)

The above command generates a log file at /var/log/rkhunter.log with the check results made by Rkhunter.

# cat /var/log/rkhunter.log
Sample Output
[03:33:40] Running Rootkit Hunter version 1.4.2 on server
[03:33:40]
[03:33:40] Info: Start date is Tue May 31 03:33:40 EDT 2016
[03:33:40]
[03:33:40] Checking configuration file and command-line options...
[03:33:40] Info: Detected operating system is 'Linux'
[03:33:40] Info: Found O/S name: CentOS Linux release 7.2.1511 (Core) 
[03:33:40] Info: Command line is /usr/local/bin/rkhunter --check
[03:33:40] Info: Environment shell is /bin/bash; rkhunter is using bash
[03:33:40] Info: Using configuration file '/etc/rkhunter.conf'
[03:33:40] Info: Installation directory is '/usr/local'
[03:33:40] Info: Using language 'en'
[03:33:40] Info: Using '/var/lib/rkhunter/db' as the database directory
[03:33:40] Info: Using '/usr/local/lib64/rkhunter/scripts' as the support script directory
[03:33:40] Info: Using '/usr/lib64/qt-3.3/bin /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /bin /sbin /usr/libexec /usr/local/libexec' as the command directories
[03:33:40] Info: Using '/var/lib/rkhunter/tmp' as the temporary directory
[03:33:40] Info: No mail-on-warning address configured
[03:33:40] Info: X will be automatically detected
[03:33:40] Info: Found the 'basename' command: /usr/bin/basename
[03:33:40] Info: Found the 'diff' command: /usr/bin/diff
[03:33:40] Info: Found the 'dirname' command: /usr/bin/dirname
[03:33:40] Info: Found the 'file' command: /usr/bin/file
[03:33:40] Info: Found the 'find' command: /usr/bin/find
[03:33:40] Info: Found the 'ifconfig' command: /usr/sbin/ifconfig
[03:33:40] Info: Found the 'ip' command: /usr/sbin/ip
...
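
To list only the warnings from the log instead of paging through the whole file, a simple grep is usually enough:

# grep -i warning /var/log/rkhunter.log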

For more information and options please run the following command.

# rkhunter --help

If you liked this article, then sharing is the right way to say thanks.

Source
