Linux Interview Questions & Answers

11 Basic Linux Interview Questions and Answers

Theory apart, we are proud to announce a new section on Tecmint dedicated to Linux interviews. Here we will bring you Linux interview questions and all other aspects of Linux, which are a must for a professional in this cut-throat, competitive world.

Basic Linux Interview Questions

A new article in this section (Linux Interview) will be posted every weekend. This initiative by Tecmint, along with its quality and unique articles, is the first of its kind among Linux-dedicated websites.

We will start with basic Linux interview questions and advance article by article. Your response is highly appreciated and puts us on a higher note.

Q.1: What is the core of the Linux operating system?
  1. Shell
  2. Kernel
  3. Command
  4. Script
  5. Terminal
Answer : The kernel is the core of the Linux operating system. A shell is a command-line interpreter, a command is a user instruction to the computer, a script is a collection of commands stored in a file, and a terminal is a command-line interface.
Q.2: What did Linus Torvalds create?
  1. Fedora
  2. Slackware
  3. Debian
  4. Gentoo
  5. Linux
Answer : Linus Torvalds created Linux, which is the kernel (heart) of all of the above operating systems and of every other Linux operating system.
Q.3: Torvalds wrote most of the Linux kernel in the C++ programming language. Do you agree?
Answer : No! The Linux kernel contains 12,020,528 lines of code, out of which 2,151,595 lines are comments. That leaves 9,868,933 lines of code, and of those, 7,896,318 are written in the C programming language.

The remaining 1,972,615 lines are written in C++, Assembly, Perl, shell script, Python, Bash, HTML, awk, yacc, lex, sed, etc.

Note : The number of lines of code varies on a daily basis; on average, more than 3,509 lines are added to the kernel every day.

Q.4: Linux was initially developed for the Intel x86 architecture but has since been ported to more hardware platforms than any other operating system. Do you agree?
Answer : Yes, I do agree. Linux was written for the x86 machine and has been ported to all kinds of platforms. More than 90% of today's supercomputers use Linux. Linux has a very promising future in mobile phones and tablets. In fact, we are surrounded by Linux: in remote controls, space science, research, the web, and desktop computing. The list is endless.
Q.5: Is it legal to edit the Linux kernel?
Answer : Yes. The kernel is released under the GNU General Public License (GPL), and anyone can edit the Linux kernel to the extent permitted under the GPL. The Linux kernel comes under the category of Free and Open Source Software (FOSS).
Q.6: What is the basic difference between the UNIX and Linux operating systems?
Answer : Linux is Free and Open Source Software, whose kernel was created by Linus Torvalds and the community. That said, you cannot say that no UNIX operating system comes under the category of Free and Open Source Software: BSD is a variant of UNIX that does come under FOSS. Moreover, big companies like Apple, IBM, Oracle, HP, etc. contribute to UNIX kernels.
Q. 7: Choose the odd one out.
  1. HP-UX
  2. AIX
  3. OSX
  4. Slackware
  5. Solaris
Answer : Slackware is the odd one out in the above list. HP-UX, AIX, OSX, and Solaris are developed by HP, IBM, Apple, and Oracle respectively, and all are UNIX variants. Slackware is a Linux operating system.
Q.8: Is the Linux operating system virus free?
Answer : No! There is no operating system on this earth that is virus free. However, Linux is known to have the fewest viruses to date, yes, even fewer than UNIX. Linux has had about 60-100 viruses listed to date, none of which are actively spreading nowadays. A rough estimate puts reported UNIX viruses between 85 and 120 to date.
Q.9: Linux is which kind of Operating System?
  1. Multi User
  2. Multi Tasking
  3. Multi Process
  4. All of the above
  5. None of the above
Answer : All of the above. Linux is an operating system that supports multiple users running a number of processes performing different tasks simultaneously.
Q.10: The syntax of any Linux command is:
  1. command [options] [arguments]
  2. command options [arguments]
  3. command [options] arguments
  4. command options arguments
Answer : The correct syntax of a Linux command is: command [options] [arguments].
Q.11: Choose the odd one out.
  1. Vi
  2. vim
  3. cd
  4. nano
Answer : The odd one out in the above list is cd. Vi, vim and nano are editors used for editing files, while the cd command is used to change directories.

That's all for now. How much did you learn from the above questions? How did they help you in your interview? We would like to hear all of this from you in our comments section. Wait till next weekend for a new set of questions.

Basic Linux Interview Questions and Answers – Part II

Continuing the interview series, we present 10 questions in this article. These questions, and those in future articles, don't necessarily mean they were asked in any interview. We are presenting an interactive learning platform through these kinds of posts, which we hope will be helpful.

Basic Linux Interview Questions – 2

Looking at the comments on different forums about the last article of this series, 11 Basic Linux Interview Questions, it is important to mention that we give our time and money to bring quality articles to our readers, and in return what do we expect from you? Nothing. If you can't praise our work, please don't demoralize us with negative comments.

If you find nothing new in a post, don't forget that for someone it was helpful, and for that he or she was thankful. We can't make everyone happy with each of our articles. We hope you readers will take the pains to understand this.

Q.1: Which command is used to record a user login session in a file?
  1. macro
  2. read
  3. script
  4. record
  5. sessionrecord
Answer : The 'script' command is used to record a user's login session in a file. The script command can be used in a shell script or directly in the terminal. Here is an example which records everything between script and exit.

Let’s record the user’s login session with script command as shown.

[root@tecmint ~]# script my-session-record.txt

Script started, file is my-session-record.txt

The content of the log file 'my-session-record.txt' can be viewed as:

[root@tecmint ~]# nano my-session-record.txt

script started on Friday 22 November 2013 08:19:01 PM IST
[root@tecmint ~]# ls
^[[0m^[[01;34mBinary^[[0m ^[[01;34mDocuments^[[0m ^[[01;34mMusic^[[0m $
^[[01;34mDesktop^[[0m ^[[01;34mDownloads^[[0m my-session-record.txt ^[[01;34$
Q.2: The kernel log messages can be viewed using which of the following commands?
  1. dmesg
  2. kernel
  3. ls -i
  4. uname
  5. None of the above
Answer : Kernel log messages can be viewed by executing the 'dmesg' command. In the list, kernel is not a valid Linux command, 'ls -i' lists files with their inode numbers within the working directory, and 'uname' shows operating system information.
[root@tecmint ~]# dmesg

Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-279.el6.i686 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Jun 22 10:59:55 UTC 2012
KERNEL supported cpus:
  Intel GenuineIntel
  AMD AuthenticAMD
  NSC Geode by NSC
  Cyrix CyrixInstead
  Centaur CentaurHauls
  Transmeta GenuineTMx86
  Transmeta TransmetaCPU
  UMC UMC UMC UMC
Disabled fast string operations
BIOS-provided physical RAM map:
...
Q.3: Which command is used to display the release of Linux Kernel?
  1. uname -v
  2. uname -r
  3. uname -m
  4. uname -n
  5. uname -o
Answer : The command 'uname -r' displays the kernel release information. The switches '-v', '-m', '-n', and '-o' display the kernel version, machine hardware name, network node hostname, and operating system, respectively.
[root@tecmint ~]# uname -r

2.6.32-279.el6.i686
Q.4: Which command is used to identify the type of a file?
  1. type
  2. info
  3. file
  4. which
  5. ls
Answer : The 'file' command is used to identify the type of a file. The syntax is: file [option] file_name.
[root@tecmint ~]# file wtop

wtop: POSIX shell script text executable
Q.5: Which command locates the binary, source and man page of a command?
Answer : The 'whereis' command comes to the rescue here. The 'whereis' command locates the binary, source, and manual page files for a command.
[root@tecmint ~]# whereis /usr/bin/ftp

ftp: /usr/bin/ftp /usr/share/man/man1/ftp.1.gz
Q.6: When a user logs in, which files are read for the user profile by default?
Answer : The '.bash_profile' and '.bashrc' files present under the user's home directory are read by default (on some systems '.profile' is used instead of '.bash_profile').
[root@tecmint ~]# ls -al
-rw-r--r--.  1 tecmint     tecmint            176 May 11  2012 .bash_profile
-rw-r--r--.  1 tecmint     tecmint            124 May 11  2012 .bashrc
Q.7: The 'resolv.conf' file is a configuration file for what?
Answer : The '/etc/resolv.conf' file is the client-side configuration file for DNS.
[root@tecmint ~]# cat /etc/resolv.conf

nameserver 172.16.16.94
Q.8: Which command is used to create soft link of a file?
  1. ln
  2. ln -s
  3. link
  4. link -soft
  5. None of the above
Answer : The 'ln -s' command is used to create a soft link (symbolic link) of a file in a Linux environment.
[root@tecmint ~]# ln -s /etc/httpd/conf/httpd.conf httpd.original.conf
Q.9: Is the command 'pwd' an alias of the command 'passwd' in Linux?
Answer : No! The command 'pwd' is not an alias of the command 'passwd'. 'pwd' stands for 'print working directory', which shows the current directory, and 'passwd' is used to change the password of a user account in Linux.
[root@tecmint ~]# pwd

/home/tecmint
[root@tecmint ~]# passwd
Changing password for user root.
New password:
Retype new password:
Q.10: How will you check PCI devices' vendor and version on a Linux system?
Answer : The Linux command 'lspci' comes to the rescue here.
[root@tecmint ~]# lspci

00:00.0 Host bridge: Intel Corporation 5000P Chipset Memory Controller Hub (rev b1)
00:02.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 2-3 (rev b1)
00:04.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 4-5 (rev b1)
00:06.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 6-7 (rev b1)
00:08.0 System peripheral: Intel Corporation 5000 Series Chipset DMA Engine (rev b1)
...

That's all for now. I hope the above questions are helpful to you. Next weekend we will come up with a new set of questions.

10 Linux Interview Questions and Answers for Linux Beginners – Part 3

Continuing the interview questions series, with big thanks for the nice feedback on the last two articles of this series, we are presenting 10 more questions for interactive learning.

  1. 11 Basic Linux Interview Questions and Answers – Part 1
  2. 10 Basic Linux Interview Questions and Answers – Part II

Linux Interview Questions Part – 3

1. How will you add a new user (say, tux) to your system?
  1. useradd command
  2. adduser command
  3. linuxconf command
  4. All of the above
  5. None of the above
Answer : All of the above commands, i.e. useradd, adduser and linuxconf, will add a user to the Linux system.
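For example, a minimal sketch of adding the user 'tux' with useradd and setting its password (on Debian-based systems, adduser does the same job interactively):

[root@tecmint ~]# useradd tux
[root@tecmint ~]# passwd tux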
2. How many primary partitions are possible on one drive?
  1. 1
  2. 2
  3. 4
  4. 16
Answer : A maximum of 4 primary partitions are possible on one drive (with the traditional MBR partitioning scheme).
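You can count the existing primary partitions by listing the partition table; a minimal sketch, assuming the first disk is /dev/sda:

[root@tecmint ~]# fdisk -l /dev/sda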
3. The default port for Apache/HTTP is?
  1. 8080
  2. 80
  3. 8443
  4. 91
  5. None of the above.
Answer : By default, Apache/HTTP is configured on port 80.
4. What does GNU stand for?
  1. GNU’s not Unix
  2. General Unix
  3. General Noble Unix
  4. Greek Needed Unix
  5. None of the above
Answer : GNU stands for 'GNU's Not Unix'.
5. You typed “mysql” at the shell prompt and what you got in return was “can't connect to local MySQL server through socket '/var/mysql/mysql.sock'”. What would you check first?
Answer : Seeing the error message, I would first check whether mysql is running or not, using the command service mysql status or service mysqld status. If the mysql service is not running, it needs to be started.

Note : The above error message can also be the result of an ill-configured my.cnf or incorrect mysql user permissions. If starting the mysql service doesn't help, you need to look into the above issues.
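A minimal sketch of the check-and-start sequence described above (the service name is mysql on Debian and mysqld on RedHat):

[root@tecmint ~]# service mysqld status
[root@tecmint ~]# service mysqld start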

6. How do you mount a Windows NTFS partition on Linux?
Answer : First install the ntfs-3g package on the system using apt or yum, and then use the command “sudo mount -t ntfs-3g /dev/<Windows-partition> <mount-point>” to mount the Windows partition on Linux.
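For example, assuming the Windows partition is /dev/sda5 and the mount point is /mnt/windows (both names hypothetical):

[root@tecmint ~]# yum install ntfs-3g
[root@tecmint ~]# mkdir -p /mnt/windows
[root@tecmint ~]# mount -t ntfs-3g /dev/sda5 /mnt/windows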
7. Which of the following is not an RPM-based OS?
  1. RedHat Linux
  2. Centos
  3. Scientific Linux
  4. Debian
  5. Fedora
Answer : The 'Debian' operating system is not RPM-based; all the others listed above are RPM-based.
8. Which command can be used to rename a file in Linux?
  1. mv
  2. ren
  3. rename
  4. change
  5. None of the Above
Answer : The mv command is used to rename a file in Linux. For example: mv /path_to_file/original_file_name.extension /path_to_file/new_name.extension.
9. Which command is used to create and display a file in Linux?
  1. ed
  2. vi
  3. cat
  4. nano
  5. None of the above
Answer : The 'cat' command can be used to create and display a file in Linux.
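A minimal sketch with a hypothetical file notes.txt: create it from standard input (finish with Ctrl+D), then display it:

[root@tecmint ~]# cat > notes.txt
[root@tecmint ~]# cat notes.txt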
10. Which layer's protocols are responsible for user and application program support, such as passwords, resource sharing, file transfer and network management?
  1. Layer 4 protocols
  2. Layer 5 protocols
  3. Layer 6 protocols
  4. Layer 7 protocols
  5. None of the above
Answer : The 'Layer 7' (application layer) protocols are responsible for user and application program support, such as passwords, resource sharing, file transfer and network management.

That’s all for now. I will be writing on another useful topic soon.

15 Basic MySQL Interview Questions for Database Administrators

Prior to this article, three articles had already been published in the 'Linux Interview' section, and all of them were highly appreciated by our notable readers. However, we received feedback to make this interactive learning process section-wise. From idea to action, we are providing you 15 MySQL interview questions.

MySQL Interview Questions

1. How would you check whether the MySQL service is running or not?
Answer : Issue the command “service mysql status” on Debian or “service mysqld status” on RedHat. Check the output, and you're done.
root@localhost:/home/avi# service mysql status

/usr/bin/mysqladmin  Ver 8.42 Distrib 5.1.72, for debian-linux-gnu on i486
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version 5.1.72-2
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 1 hour 22 min 49 sec

Threads: 1  Questions: 112138  Slow queries: 1  Opens: 1485  Flush tables: 1  Open tables: 64  Queries per second avg: 22.567.
2. If the service is running/stopped, how would you stop/start the service?
Answer : To start the MySQL service, use the command service mysqld start, and to stop it, use service mysqld stop (mysql instead of mysqld on Debian).
root@localhost:/home/avi# service mysql stop

Stopping MySQL database server: mysqld.

root@localhost:/home/avi# service mysql start

Starting MySQL database server: mysqld.

Checking for corrupt, not cleanly closed and upgrade needing tables..
3. How will you log in to MySQL from the Linux shell?
Answer : To connect or log in to MySQL, use the command: mysql -u root -p.
root@localhost:/home/avi# mysql -u root -p 
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g. 
Your MySQL connection id is 207 
Server version: 5.1.72-2 (Debian) 

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved. 

Oracle is a registered trademark of Oracle Corporation and/or its 
affiliates. Other names may be trademarks of their respective 
owners. 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 

mysql>
4. How will you obtain a list of all the databases?
Answer : To list all databases, run the command in the mysql shell: show databases;
mysql> show databases; 
+--------------------+ 
| Database           | 
+--------------------+ 
| information_schema | 
| a1                 | 
| cloud              | 
| mysql              | 
| phpmyadmin         | 
| playsms            | 
| sisso              | 
| test               | 
| ukolovnik          | 
| wordpress          | 
+--------------------+ 
10 rows in set (0.14 sec)
5. How will you switch to a database and start working on it?
Answer : To use or switch to a specific database, run the command in the mysql shell: use database_name;
mysql> use cloud; 
Reading table information for completion of table and column names 
You can turn off this feature to get a quicker startup with -A 

Database changed 
mysql>
6. How will you get the list of all the tables in a database?
Answer : To list all the tables of a database, use the command in the mysql shell: show tables;
mysql> show tables; 
+----------------------------+ 
| Tables_in_cloud            | 
+----------------------------+ 
| oc_appconfig               | 
| oc_calendar_calendars      | 
| oc_calendar_objects        | 
| oc_calendar_repeat         | 
| oc_calendar_share_calendar | 
| oc_calendar_share_event    | 
| oc_contacts_addressbooks   | 
| oc_contacts_cards          | 
| oc_fscache                 | 
| oc_gallery_sharing         | 
+----------------------------+ 
10 rows in set (0.00 sec)
7. How will you get the field names and types of a MySQL table?
Answer : To get the field names and types of a table, use the command in the mysql shell: describe table_name;
mysql> describe oc_users; 
+----------+--------------+------+-----+---------+-------+ 
| Field    | Type         | Null | Key | Default | Extra | 
+----------+--------------+------+-----+---------+-------+ 
| uid      | varchar(64)  | NO   | PRI |         |       | 
| password | varchar(255) | NO   |     |         |       | 
+----------+--------------+------+-----+---------+-------+ 
2 rows in set (0.00 sec)
8. How will you delete a table?
Answer : To delete a specific table, use the command in the mysql shell: drop table table_name;
mysql> drop table lookup; 

Query OK, 0 rows affected (0.00 sec)
9. What about a database? How will you delete a database?
Answer : To delete a specific database, use the command in the mysql shell: drop database database-name;
mysql> drop database a1; 

Query OK, 11 rows affected (0.07 sec)
10. How will you see all the contents of a table?
Answer : To view all the contents of a particular table, use the command in the mysql shell: select * from table_name;
mysql> select * from engines; 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
| ENGINE     | SUPPORT | COMMENT                                                        | TRANSACTIONS | XA   | SAVEPOINTS | 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys     | YES          | YES  | YES        | 
| MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         | 
| BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         | 
| CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         | 
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         | 
| FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       | 
| ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         | 
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         | 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
8 rows in set (0.00 sec)
11. How will you see all the data in a field (say, uid), from table (say, oc_users)?
Answer : To view all the data in a field, use the command in the mysql shell: select uid from oc_users;
mysql> select uid from oc_users; 
+-----+ 
| uid | 
+-----+ 
| avi | 
+-----+ 
1 row in set (0.03 sec)
12. Say you have a table ‘xyz’, which contains several fields including ‘create_time’ and ‘engine’. The field ‘engine’ is populated with two types of data ‘Memory’ and ‘MyIsam’. How will you get only ‘create_time’ and ‘engine’ from the table where engine is ‘MyIsam’?
Answer : Use the command in the mysql shell: select create_time, engine from xyz where engine="MyIsam";
mysql> select create_time, engine from xyz where engine="MyIsam";

+---------------------+--------+ 
| create_time         | engine | 
+---------------------+--------+ 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
+---------------------+--------+ 
132 rows in set (0.29 sec)
13. How will you show all the records from table ‘xrt’ where name is ‘tecmint’ and web_address is ‘tecmint.com’?
Answer : Use the command in the mysql shell: select * from xrt where name = "tecmint" and web_address = "tecmint.com";
mysql> select * from xrt where name = "tecmint" and web_address = "tecmint.com";

+------+---------+-------------+
| Id   | name    | web_address |
+------+---------+-------------+
|   13 | tecmint | tecmint.com |
|   41 | tecmint | tecmint.com |
+------+---------+-------------+
14. How will you show all the records from table ‘xrt’ where name is not ‘tecmint’ and web_address is ‘tecmint.com’?
Answer : Use the command in the mysql shell: select * from xrt where name != "tecmint" and web_address = "tecmint.com";
mysql> select * from xrt where name != "tecmint" and web_address = "tecmint.com";

+------+---------+-------------+
| Id   | name    | web_address |
+------+---------+-------------+
| 1173 | tecmint | tecmint.com |
+------+---------+-------------+
15. You need to know the total number of rows in a table. How will you achieve it?
Answer : Use the command in the mysql shell: select count(*) from table_name;
mysql> select count(*) from Tables; 

+----------+ 
| count(*) | 
+----------+ 
|      282 | 
+----------+ 
1 row in set (0.01 sec)

Read Also : 10 MySQL Database Interview Questions for Intermediates

That's all for now. How do you feel about this 'Linux Interview Questions' section? Don't forget to provide us with your valuable feedback in the comments section.

25 Apache Interview Questions for Beginners and Intermediates

We are very thankful to all our readers for the response we are getting for our new Linux Interview section. We have now started section-wise learning for interview questions, and continuing with the same, today's article focuses on basic to intermediate Apache interview questions that will help you prepare.

Apache Job Interview Questions

In this section, we have covered 25 interesting Apache job interview questions along with their answers, so that you can easily learn some new things about Apache that you might never have known before.

Before you read this article, we strongly recommend that you don't try to memorize the answers; always try to understand the scenarios first, on a practical basis.

1. What is Apache web server?
Answer : The Apache HTTP web server is a very popular, powerful, and open source web server, used to host websites by serving web files over the network. It works on HTTP (Hypertext Transfer Protocol), which provides a standard for servers and client-side web browsers to communicate. It supports SSL, CGI files, virtual hosting and many other features.
2. How do you check Apache and its version?
Answer : First, use the rpm command to check whether Apache is installed or not. If it is installed, then use the httpd -v command to check its version.
[root@tecmint ~]# rpm -qa | grep httpd

httpd-devel-2.2.15-29.el6.centos.i686
httpd-2.2.15-29.el6.centos.i686
httpd-tools-2.2.15-29.el6.centos.i686
[root@tecmint ~]# httpd -v

Server version: Apache/2.2.15 (Unix)
Server built:   Aug 13 2013 17:27:11
3. Which user does Apache run as, and where is the main config file located?
Answer : Apache runs as the user “nobody” via the httpd daemon (many distributions use a dedicated user such as “apache” or “www-data”). Apache's main configuration file is /etc/httpd/conf/httpd.conf (CentOS/RHEL/Fedora) or /etc/apache2/apache2.conf (Ubuntu/Debian).
4. On which ports does Apache listen for http and https?
Answer : By default, Apache runs on http port 80 and https port 443 (for SSL certificates). You can also use the netstat command to check the ports.
[root@tecmint ~]# netstat -antp | grep http

tcp        0      0 :::80                       :::*                        LISTEN      1076/httpd          
tcp        0      0 :::443                      :::*                        LISTEN      1076/httpd
5. How do you install the Apache server on your Linux machine?
Answer : Simply use a package installer such as yum (RHEL/CentOS/Fedora) or apt-get (Debian/Ubuntu) to install the Apache server on your Linux machine.
[root@tecmint ~]# yum install httpd
[root@tecmint ~]# apt-get install apache2
6. Where can you find all the configuration directories of the Apache web server?
Answer : By default, the Apache configuration directories are installed under /etc/httpd/ (RHEL/CentOS/Fedora) and /etc/apache2 (Debian/Ubuntu).
[root@tecmint ~]# cd /etc/httpd/
[root@tecmint httpd]# ls -l
total 8
drwxr-xr-x. 2 root root 4096 Dec 24 21:44 conf
drwxr-xr-x. 2 root root 4096 Dec 25 02:09 conf.d
lrwxrwxrwx  1 root root   19 Oct 13 19:06 logs -> ../../var/log/httpd
lrwxrwxrwx  1 root root   27 Oct 13 19:06 modules -> ../../usr/lib/httpd/modules
lrwxrwxrwx  1 root root   19 Oct 13 19:06 run -> ../../var/run/httpd
[root@tecmint ~]# cd /etc/apache2
[root@tecmint apache2]# ls -l
total 84
-rw-r--r-- 1 root root  7113 Jul 24 16:15 apache2.conf
drwxr-xr-x 2 root root  4096 Dec 16 11:48 conf-available
drwxr-xr-x 2 root root  4096 Dec 16 11:45 conf.d
drwxr-xr-x 2 root root  4096 Dec 16 11:48 conf-enabled
-rw-r--r-- 1 root root  1782 Jul 21 02:14 envvars
-rw-r--r-- 1 root root 31063 Jul 21 02:14 magic
drwxr-xr-x 2 root root 12288 Dec 16 11:48 mods-available
drwxr-xr-x 2 root root  4096 Dec 16 11:48 mods-enabled
-rw-r--r-- 1 root root   315 Jul 21 02:14 ports.conf
drwxr-xr-x 2 root root  4096 Dec 16 11:48 sites-available
drwxr-xr-x 2 root root  4096 Dec  6 00:04 sites-enabled

7. Can Apache be secured with TCP wrappers?

Answer : No, it can't be secured with TCP wrappers, since it doesn't support Linux's libwrap.a library.
8. How do you change the default Apache port, and how does the Listen directive work in Apache?
Answer : There is a directive “Listen” in the httpd.conf file which allows us to change the default Apache port. With the help of the Listen directive, we can make Apache listen on a different port as well as on different interfaces.

Suppose you have multiple IPs assigned to your Linux machine and want Apache to receive HTTP requests on a particular port or interface; even that can be done with the Listen directive.

To change the Apache default port, open your Apache main configuration file, httpd.conf or apache2.conf, with the vi editor.

[root@tecmint ~]# vi /etc/httpd/conf/httpd.conf

[root@tecmint ~]# vi /etc/apache2/apache2.conf

Search for the word “Listen”, comment out the original line, and write your own directive below that line.

# Listen 80
Listen 8080

OR

Listen 172.16.16.1:8080

Save the file and restart the web server.

[root@tecmint ~]# service httpd restart

[root@tecmint ~]# service apache2 restart
9. Can we have two Apache Web servers on a single machine?
Answer : Yes, we can run two different Apache servers at the same time on one Linux machine, but the condition is that they must listen on different ports, which we can change with Apache's Listen directive.
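As a rough sketch, a second instance can be started against its own configuration file (the file name here is hypothetical), whose Listen directive points to another port, say 8080:

[root@tecmint ~]# httpd -f /etc/httpd/conf/httpd-second.conf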
10. What do you mean by DocumentRoot of Apache?
Answer : The DocumentRoot in Apache is the location where the web files are stored on the server; the default DocumentRoot of Apache is /var/www/html or /var/www. This can be changed to anything by setting “DocumentRoot” in the virtual host of the domain's configuration file.
11. How do you host files in a different folder, and what is the Alias directive?
Answer : This can be achieved with the Alias directive in the main Apache configuration file. The Alias directive maps resources into the filesystem: it takes a URL path and substitutes it with a file or directory path on the system.

The Alias directive is part of Apache's mod_alias module. The default syntax of the Alias directive is:

Alias /images /var/data/images/

In the above example, the /images URL prefix maps to the /var/data/images/ prefix, which means clients will query “http://www.example.com/images/sample-image.png” and Apache will pick up the “sample-image.png” file from /var/data/images/sample-image.png on the server. It's also known as URL mapping.

12. What do you understand by “DirectoryIndex”?
Answer : DirectoryIndex is the name of the first file which Apache looks for when a request comes in for a domain. For example, when www.example.com is requested by a client, Apache goes to the document root of that website and looks for the index file (the first file to display).

The default setting of DirectoryIndex is index.html; if your first file has a different name (say index.php), you need to change the DirectoryIndex value in httpd.conf or apache2.conf so that it is displayed in your client's browser.

#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
# The index.html.var file (a type-map) is used to deliver content-
# negotiated documents.  The MultiViews Option can be used for the
# same purpose, but it is much slower.
#
DirectoryIndex index.html index.html.var index.cgi .exe
13. How do you disable directory listing when an index file is missing?
Answer : If the main index file is missing in the website root directory, Apache will list all the contents (files and folders) of the website in the browser instead of the main website pages.

To stop the Apache directory listing, you can set the following rule in the main configuration file globally, or in a .htaccess file for a particular website.

<Directory /var/www/html>
   Options -Indexes
</Directory>
14. What are the different log files of the Apache web server?
Answer : The default log files of the Apache web server are the access log “/var/log/httpd/access_log” and the error log “/var/log/httpd/error_log”.
15. What do you understand by “connection reset by peer” in error logs?
Answer : When the server is serving an ongoing Apache request and the end user terminates the connection midway, we see “connection reset by peer” in the Apache error logs.
16. What is Virtual Host in Apache?
Answer : The virtual host section contains information like the website name, document root, directory index, server admin email, ErrorLog file location, etc.

You are free to add as many directives as you require for your domain, but the two minimal entries for a working website are ServerName and DocumentRoot. We usually define our virtual host sections at the bottom of the httpd.conf file on Linux machines.

Sample VirtualHost
<VirtualHost *:80>
   ServerAdmin webmaster@dummy-host.example.com
   DocumentRoot /www/docs/dummy-host.example.com
   ServerName dummy-host.example.com
   ErrorLog logs/dummy-host.example.com-error_log
   CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>
  1. ServerAdmin : Usually the email address of the website owner, where errors or notifications can be sent.
  2. DocumentRoot : The location where the web files are stored on the server (necessary).
  3. ServerName : The domain name which you want to access from your web browser (necessary).
  4. ErrorLog : The location of the log file where all the domain-related logs are recorded.
17. What’s the difference between <Location> and <Directory>?

Answer :

  1. <Location> is used to set elements related to the URL / address bar of the web server.
  2. <Directory> refers to the location of a filesystem object on the server.
18. What is Apache Virtual Hosting?
Answer : Apache virtual hosting is the concept of hosting multiple websites on a single web server. Two types of virtual hosts can be set up with Apache: name-based virtual hosting and IP-based virtual hosting.

For more information, read on How to Create Name/IP based Virtual Hosts in Apache.

19. What do you understand by MPM in Apache?
Answer : MPM stands for Multi Processing Modules; Apache follows one of these mechanisms to accept and complete web server requests.
20. What is the difference between Worker and Prefork MPM?
Answer : Both MPMs, Worker and Prefork, have their own mechanisms of working with Apache. It depends entirely on you in which mode you want to start your Apache.
  1. The basic difference between Worker and Prefork lies in how they spawn child processes. In the Prefork MPM, a master httpd process is started, and this master process manages all the other child processes which serve client requests. In the Worker MPM, the child processes are multithreaded, and the threads serve the client requests.
  2. The Prefork MPM uses multiple child processes with one thread each, whereas the Worker MPM uses multiple child processes with many threads each.
  3. Connection handling: in the Prefork MPM, each process handles one connection at a time, whereas in the Worker MPM each thread handles one connection at a time.
  4. Memory footprint: the Prefork MPM has a large memory footprint, whereas Worker has a smaller one.
21. What's the use of “LimitRequestBody”, and how do you put a limit on uploads?
Answer : The LimitRequestBody directive is used to put a limit on the upload size.

For example, to put a limit of 100000 bytes on the folder /var/www/html/tecmint/uploads, add the following directive to the Apache configuration file.

<Directory "/var/www/html/tecmint/uploads">
LimitRequestBody 100000
</Directory>
22. What are mod_perl and mod_php?

Answer :

  1. mod_perl is an Apache module which is compiled with Apache for easy integration and to increase the performance of Perl scripts.
  2. mod_php is used for easy integration of PHP scripts with the web server; it embeds the PHP interpreter inside the Apache process. It forces Apache child processes to use more memory and works with Apache only, but it is still very popular.
23. What is Mod_evasive?
Answer : It's a third-party module which helps protect your web server from web attacks like DDoS; it performs only one task at a time and performs it very well.

For more information, read the article that guides you on how to install and configure mod_evasive in Apache.

24. What is LogLevel debug in the httpd.conf file?
Answer : With the LogLevel debug option, we can get/log more information in the error logs, which helps us debug a problem.
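For example, in httpd.conf (debug is very verbose, so remember to lower the level again after troubleshooting):

LogLevel debug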
25. What's the use of mod_ssl, and how does SSL work with Apache?
Answer : The mod_ssl package is an Apache module which allows Apache to establish its connections and transfer all data in a secure, encrypted environment. With the help of SSL certificates, all login details and other important secrets are transferred in encrypted form over the Internet, which protects our data from eavesdropping and IP spoofing.
How SSL works with Apache

Whenever an https request comes in, Apache follows these three steps (a rough OpenSSL sketch follows the list):

  1. Apache generates a private key and creates from it a .csr file (certificate signing request).
  2. The .csr file is then sent to the CA (Certificate Authority).
  3. The CA takes the .csr file, issues a .crt (certificate), and sends that .crt file back to Apache to secure and complete the https connection request.
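As an illustration of steps 1 and 2, the private key and the CSR can be generated with OpenSSL (the file names here are hypothetical); the resulting .csr file is what gets submitted to the CA:

[root@tecmint ~]# openssl genrsa -out server.key 2048
[root@tecmint ~]# openssl req -new -key server.key -out server.csr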

These are just the 25 most popular questions being asked by interviewers these days. Please share some more interview questions which you have faced in your recent interviews and help others via the comments section below.

We also recommend you read our previous articles on Apache.

  1. 13 Apache Web Server Security and Hardening Tips
  2. How to Sync Two Apache Web Servers/Websites Using Rsync

10 MySQL Database Interview Questions for Beginners and Intermediates

In our last article, we covered 15 basic MySQL questions; we are here again with another set of interview questions for intermediate users. As we said earlier, these questions can be asked in job interviews. However, some of our critics on the last article said that I don't respond to my critics, and that the questions were very basic and would never be asked in any database administrator interview.

10 MySQL Job Interview Questions

To them we must admit that all the articles and questions cannot be composed keeping the whole flock in mind. We are going from basic to expert level, step by step. Please cooperate with us.

1. Define SQL?
Answer : SQL stands for Structured Query Language. SQL is a programming language designed specially for managing data in Relational Database Management Systems (RDBMS).
2. What is RDBMS? Explain its features?

Answer : A Relational Database Management System (RDBMS) is the most widely used type of database management system, based on the relational database model.

Features of RDBMS
  1. Stores data in tables.
  2. Tables have rows and columns.
  3. Creation and retrieval of tables is allowed through SQL.
3. What is Data Mining?
Answer : Data mining is a subfield of computer science which aims at extracting information from a set of data and transforming it into a human-readable structure, to be used later.
4. What is an ERD?
Answer : ERD stands for Entity Relationship Diagram. An Entity Relationship Diagram is the graphical representation of tables, with the relationships between them.
5. What is the difference between Primary Key and Unique Key?
Answer : Both primary and unique keys enforce uniqueness of a column. A primary key creates a clustered index on the column, whereas a unique key creates a non-clustered index. Moreover, a primary key doesn't allow NULL values, whereas a unique key does allow NULL values.
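A minimal illustrative sketch (table and column names hypothetical):

mysql> CREATE TABLE users (
    ->   id INT NOT NULL,              -- primary key column: NULL not allowed
    ->   email VARCHAR(100),           -- unique key column: NULLs permitted
    ->   PRIMARY KEY (id),
    ->   UNIQUE KEY uk_email (email)
    -> );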
6. How do you store a picture file in the database? What object type is used?
Answer : Storing pictures in a database is generally a bad idea. To store a picture in a database, the object type 'BLOB' is recommended.
7. What is Data Warehousing?
Answer : A data warehouse, generally referred to as an enterprise data warehouse, is a central data repository created using different data sources.
8. What are indexes in a database? What are the types of indexes?

Answer : Indexes are quick references for fast retrieval of data from a database. There are two different kinds of indexes (see the example after the list below).

Clustered Index
  1. Only one per table.
  2. Faster to read than a non-clustered index, as data is physically stored in index order.
Non-clustered Index
  1. Can be used many times per table.
  2. Quicker for insert and update operations than a clustered index.
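For example, a non-clustered (secondary) index on a hypothetical 'employees' table:

mysql> CREATE INDEX idx_emp_name ON employees (name);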
9. How many TRIGGERS are possible in MySQL?

Answer : Only six types of triggers are allowed in a MySQL database, and they are (a minimal sketch follows the list):

  1. Before Insert
  2. After Insert
  3. Before Update
  4. After Update
  5. Before Delete
  6. After Delete
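A minimal sketch of one of them, a BEFORE INSERT trigger (table and column names hypothetical):

mysql> CREATE TRIGGER before_orders_insert
    -> BEFORE INSERT ON orders
    -> FOR EACH ROW
    -> SET NEW.created_at = NOW();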
10. What is a HEAP table?
Answer : Tables that reside in memory are called HEAP tables; they are commonly known as memory tables. These tables cannot have columns with data types like “BLOB” or “TEXT”. They use hash indexes, which makes them faster.
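A minimal sketch of creating one (table name hypothetical):

mysql> CREATE TABLE session_cache (
    ->   id   INT,
    ->   data VARCHAR(255)
    -> ) ENGINE = MEMORY;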

That's all for now on MySQL questions; I will come up with another set soon. Don't forget to provide your valuable feedback in the comments section.

10 Core Linux Interview Questions and Answers

Again it's time to read some serious content in a light mood. Yup! It's another article of interview questions, and here we present 10 core Linux questions which will surely add to your knowledge.

10 Core Linux Interview Questions

1. You need to define a macro, a key binding for an existing command. How would you do it?
Answer : There is a command called bind in the bash shell which is capable of defining a macro, i.e. binding a key. In order to bind a key to an existing command, we need to generate the character sequence emitted by the key. Press Ctrl+v and then the F12 key; here it produced ^[[24~.
[root@localhost ~]# bind '"\e[24~":"date"'

Note : Different types of terminals or terminal emulators can emit different codes for the same key.

2. A user is new to Linux and wants to know the full list of available commands. What would you suggest?
Answer : The command 'compgen -c' will show a full list of available commands.
[root@localhost ~]$ compgen -c

l.
ll
ls
which
if
then
else
elif
fi
case
esac
for
select
while
until
do
done
...
3. Your assistant needs to print the directory stack. What would you suggest?
Answer : The Linux command 'dirs' will print the directory stack.
[root@localhost ~]# dirs

/usr/share/X11
4. You have lots of running jobs. How would you remove all the running processes without restarting the machine?
Answer : The Linux command 'disown -r' will remove all the running jobs (processes) from the shell's job table.
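A minimal sketch: start a background job, then drop all running jobs from the shell's job table:

[root@localhost ~]# sleep 1000 &
[root@localhost ~]# disown -r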
5. What is the command 'hash' used for in the bash shell?
Answer : The Linux command 'hash' manages an internal hash table: it finds and remembers the full path of specified commands, and displays the used command names and the number of times each has been used.
[root@localhost ~]# hash

hits    command
   2    /bin/ls
   2    /bin/su
6. Which built-in Linux command performs arithmetic operations on integers in bash?
Answer : The 'let' command performs arithmetic operations on integers in the bash shell.
#!/bin/bash

a=5    # example values
b=7

let c=a+b

echo $c    # prints 12
7. You have a large text file, and you need to see one page at a time. What will you do?
Answer : You can achieve this by piping the output of 'cat file_name.txt' into the 'more' command.
[root@localhost ~]# cat file_name.txt | more
8. Who owns the data dictionary?
Answer : The user 'SYS' owns the data dictionary. The users 'SYS' and 'SYSTEM' are created by default, automatically.
9. How do you find a command's summary and usage in Linux?

Assume you came across a command in the /bin directory which you are completely unaware of and have no idea what it does. What will you do to find out its usage?

Answer : The command 'whatis' displays a one-line summary from the command's man page. For example, suppose you would like to see a summary of the 'zcat' command, which you don't already know.
[root@localhost ~]# whatis zcat

zcat [gzip]          (1)  - compress or expand files
10. What command should you use to check the number of files and disk space used by each user’s defined quotas?
Answer : The command 'repquota' comes to the rescue here. The repquota command summarizes quotas for a filesystem.
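For example, to summarize quotas for all mounted filesystems with quotas enabled:

[root@localhost ~]# repquota -a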

That's all for now. Provide your valuable feedback in the comments section. Stay tuned for more Linux and FOSS posts.

10 VsFTP (Very Secure File Transfer Protocol) Interview Questions and Answers

FTP, which stands for 'File Transfer Protocol', is one of the most widely used standard protocols available over the Internet. FTP works in a server-client architecture and is used to transfer files. Initially FTP clients were command-line based; now most platforms come bundled with FTP client and server programs, and a lot of FTP client/server programs are available. Here we present 10 interview questions based on VsFTP (Very Secure FTP) on a Linux server.

10 VsFTP Interview Questions

1. What is the difference between a TFTP and an FTP server?
Answer : TFTP is the Trivial File Transfer Protocol, which uses the User Datagram Protocol (UDP), whereas FTP uses the Transmission Control Protocol (TCP). By default, FTP uses TCP port 20 for data and port 21 for control, whereas TFTP uses port 69.

Note : Briefly, you can say FTP uses port 21 by default when the distinction between data and control is not required.

2. Is it possible to restrict users and disallow browsing beyond their home directories? How?
Answer : Yes! It is possible to restrict users to their home directories and prevent browsing beyond them. This can be done by enabling the chroot option in the FTP configuration file (i.e., vsftpd.conf).
chroot_local_user=YES
3. How would you manage the number of FTP clients that connect to your FTP server?

Answer : We need to set the 'max_clients' parameter. This parameter controls the number of connecting clients; if max_clients is set to 0, an unlimited number of clients may connect to the FTP server. This parameter is changed in vsftpd.conf, and the default value is 0.

4. How do you limit FTP login attempts to fight botnets/illegal login attempts?
Answer : We need to edit the 'max_login_fails' parameter. This parameter sets the maximum number of login attempts before the session is killed. The default value is '3', which means a maximum of '3' login attempts are possible, failing which the session is killed.
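In vsftpd.conf:

# Kill the session after 3 failed login attempts (the default).
max_login_fails=3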
5. How do you enable file uploads from anonymous users to the FTP server?
Answer : Anonymous users can be allowed to upload files to the FTP server by modifying the parameter 'anon_upload_enable'. If the value of anon_upload_enable is set to YES, anonymous users are permitted to upload files. For anonymous upload to work, the parameter 'write_enable' must also be activated. The default value is NO, which means anonymous upload is disabled.
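The two directives together in vsftpd.conf:

# Allow FTP write commands, then permit anonymous uploads.
write_enable=YES
anon_upload_enable=YES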
6. How would you disable downloads from the FTP server?
Answer : Disabling downloads from the FTP server can be done by modifying the parameter 'download_enable'. If set to NO, all download requests will be denied. The default value is YES, which means downloading is enabled.
7. How do you enable and permit FTP login for local users?
Answer : The parameter 'local_enable' is responsible for managing local user logins. In order to activate local user logins, we must set 'local_enable=YES' in the vsftpd.conf file. The default value is NO, which means local user login is not permitted.
8. Is it possible to maintain a log of FTP requests and responses?
Answer : Yes! We can log FTP requests and responses. We need to modify the boolean value of the parameter 'log_ftp_protocol'. If set to YES, it will log all requests and responses, which can be very useful for debugging. The default value is NO, which means no such log is maintained by default.

Note : In order for this protocol logging to work, the parameter 'xferlog_std_format' must not be enabled.
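In vsftpd.conf:

# Log all FTP requests and responses (useful for debugging).
log_ftp_protocol=YES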

9. How do you pause the login for a few seconds after a failed login attempt? How will you achieve this?
Answer : The number of seconds to pause after a failed login attempt can be set by modifying the value of the parameter 'delay_failed_login'. The default value is 1.
10. How do you display a certain text message before a client connects to the FTP server? How would you get this done?
Answer : We can achieve this by setting a login banner: either set the 'ftpd_banner' string option, or set banner_file=/path/to/banner-file in the vsftpd.conf file.

FTP is a very useful tool; it is vast yet very interesting, and it is useful from an interview point of view. We have taken pains to bring these questions to you and will cover more of them in a future article. Till then, stay tuned and connected to Tecmint.

Read Also : 10 Advanced VsFTP Interview Questions and Answers – Part II

10 Advanced VsFTP Interview Questions and Answers – Part II

We were overwhelmed by the response we received to our last article, where we presented 10 questions on the Very Secure File Transfer Protocol. Continuing the VsFTP interview series, here we present another 10 advanced interview questions which will surely help you.

  1. 10 Basic VsFTP Interview Questions/Answers – Part I

VsFTP Interview Questions – Part II

Please note that the vsftpd.conf file is used to control various aspects of the configuration, as specified in this article. By default, vsftpd looks for the configuration file at /etc/vsftpd/vsftpd.conf. The format of the file is very simple: it contains comments and directives. Comment lines beginning with a '#' are ignored, and a directive line has the following format:

option=value

Before we start with the questions and their well-explained answers, we would like to answer the question “Who is going to attend an FTP interview?”. Well, perhaps no one would attend an interview purely about FTP. But we are presenting subject-wise questions to maintain a systematic approach, so that in any interview you won't face a new question on any of the topics/subjects we cover here.

11. How would you block an IP which is acting maliciously on your internal private VsFTP network?
Answer : We can block the IP either by adding the suspicious IP to the '/etc/hosts.deny' file, or alternatively by adding a DROP rule for the suspicious IP to the iptables INPUT chain.
Block IP using the hosts.deny file

Open ‘/etc/hosts.deny’ file.

# vi /etc/hosts.deny

Append the following line at the bottom of the file, with the IP address that you want to block from accessing FTP.

#
# hosts.deny    This file contains access rules which are used to
#               deny connections to network services that either use
#               the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
#               The rules in this file can also be set up in
#               /etc/hosts.allow with a 'deny' option instead.
#
#               See 'man 5 hosts_options' and 'man 5 hosts_access'
#               for information on rule syntax.
#               See 'man tcpd' for information on tcp_wrappers
#
vsftpd:172.16.16.1
Block IP using iptables rule

To block FTP access for a particular IP address, add the following DROP rule to the iptables INPUT chain.

iptables -A RH-Firewall-1-INPUT -p tcp -s 172.16.16.1 -m state --state NEW -m tcp --dport 21 -j DROP
12. How do you allow secured SSL connections for anonymous users? How would you do it?
Answer : Yes! It is possible to allow anonymous users to use secured SSL connections. The value of the parameter 'allow_anon_ssl' should be 'YES' in the vsftpd.conf file. If it is set to NO, anonymous users won't be allowed to use SSL connections. The default value is NO.
# Add this line to enable secured SSL connection to anonymous users.
allow_anon_ssl=YES
13. How do you allow anonymous users to create new directories and write to them?
Answer : We need to edit the parameter 'anon_mkdir_write_enable' and set its value to 'YES'. But in order for this parameter to work, 'write_enable' must also be activated. The default is NO.
# Uncomment this to enable any form of FTP write command.
write_enable=YES
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
anon_mkdir_write_enable=YES
14. How do you enable anonymous downloads, but disable permission to write?
Answer : In the above scenario, we need to edit the parameter 'anon_world_readable_only'. The parameter should be enabled and set to 'YES'. The default value is YES.
# Add this line to restrict anonymous users to read-only (world-readable) downloads.
anon_world_readable_only=YES
15. How do you control use of the SITE CHMOD command? How would you do it?
Answer : To control whether clients may change file permissions with SITE CHMOD, we need to edit the parameter 'chmod_enable' and set it to 'YES'. Note that this applies to local users only; anonymous users never get to use SITE CHMOD. The default value is YES.
# Add this line to allow use of the SITE CHMOD command (local users only).
chmod_enable=YES
16. How do you disable directory listing on an FTP server?
Answer : The parameter 'dirlist_enable' comes to the rescue at this point. The value of 'dirlist_enable' should be set to NO. The default value is YES.
# Add this line to disable directory listing.
dirlist_enable=NO
17. How do you maintain sessions for VsFTP logins? How will you do it?
Answer : The parameter 'session_support' needs to be modified. This parameter controls whether vsftpd attempts to maintain sessions for logins. The default value is NO.
# Add this line to maintain session logins.
session_support=YES
18. How do you display times in the local time zone when listing the contents of a directory?
Answer : The parameter 'use_localtime' needs to be modified. If enabled, vsftpd will list directory files with times in the local time zone; the default is to display GMT. The default value is NO.
# Add this line to display directory listings in the local time zone.
use_localtime=YES
19. How will you limit the maximum transfer rate of the VsFTP server?
Answer : To limit the maximum transfer rate of the VsFTP server, set the parameter 'anon_max_rate' in bytes per second for anonymous clients. The default value is 0, which means unlimited.
# Add this line to limit the ftp transfer rate (0 means unlimited).
anon_max_rate=0
20. How will you time out idle VsFTP sessions?
Answer : The parameter 'idle_session_timeout' needs to be modified here. This is the timeout, in seconds, which is the maximum time a client may remain idle between commands in a session between the client machine and the VsFTP server. As soon as the timeout triggers, the client is logged out. The default is 300.
# Add this line to set the ftp timeout session.
idle_session_timeout=300

That's all for now. We will come up with the next article very soon; till then stay tuned and connected, and don't forget to provide us with your valuable feedback in the comments section.

10 Useful Random Linux Interview Questions and Answers

To your slight surprise, this time we are not presenting interview questions on a specific subject, but on random topics. These questions will surely help you crack interviews, besides adding to your knowledge.

10 Random Linux Questions and Answers

1. Let's say you maintain a backup on a regular basis for the company you work for. The backups are kept in compressed format. You need to examine a log from two months ago. What would you suggest, without decompressing the compressed file?
Answer : To check the contents of a compressed file without decompressing it, we use 'zcat'. The zcat utility makes it possible to view the contents of a compressed file.
# zcat -f phpshell-2.4.tar.gz
2. You need to track events on your system. What will you do?
Answer : For tracking events on the system, we need the daemon syslogd. The syslogd daemon tracks system information and saves it to specified log files.

Running the 'syslogd' daemon generates log entries at '/var/log/syslog'. The syslogd application is very useful in troubleshooting Linux systems.

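To watch the log grow in real time, a minimal sketch (the log path varies by distribution, e.g. /var/log/messages on RedHat-based systems):

# tail -f /var/log/syslog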

3. How will you restrict IPs so that the restricted IPs may not use the FTP server?
Answer : We can block suspicious IPs by integrating tcp_wrappers. We need to enable the parameter “tcp_wrappers=YES” in the configuration file '/etc/vsftpd.conf', and then add the suspicious IP to the 'hosts.deny' file at '/etc/hosts.deny'.
Block IP Address

Open ‘/etc/hosts.deny’ file.

# vi /etc/hosts.deny

Add the IP address that you want to block at the bottom of the file.

#
# hosts.deny    This file contains access rules which are used to
#               deny connections to network services that either use
#               the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
#               The rules in this file can also be set up in
#               /etc/hosts.allow with a 'deny' option instead.
#
#               See 'man 5 hosts_options' and 'man 5 hosts_access'
#               for information on rule syntax.
#               See 'man tcpd' for information on tcp_wrappers
#
vsftpd:172.16.16.1
4. Tell us the difference between Telnet and SSH?
Answer : Telnet and SSH are both communication protocols used to manage remote systems. SSH is secure and requires an exchange of keys, as opposed to Telnet, which transmits data in plain text; this means Telnet is less secure than SSH.
5. You need to stop your X server. When you try to kill your X server, you get an error message saying that you cannot quit the X server. What will you do?
Answer : Killing the X server won’t work the normal way, like running ‘/etc/init.d/gdm stop’. We need to execute a special key combination, ‘Ctrl + Alt + Backspace’, which will force the X server to restart.
6. What is the difference between command ‘ping’ and ‘ping6’?
Answer : Both commands serve the same purpose, except for the fact that ping6 is used with IPv6 addresses.
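For example, the following pair of commands performs the same loopback check over IPv4 and IPv6, respectively:
# ping -c 4 127.0.0.1
# ping6 -c 4 ::1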
7. You want to search for all the *.tar files in your Home directory and wants to delete all at once. How will you do it?
Answer : We need to use the find command together with the rm command to delete all “.tar” files.
# find /home/ -name '*.tar' | xargs rm -rf
8. What is the difference between locate and slocate command?
Answer : slocate (secure locate) only returns files that the current user has permission to access, whereas locate simply returns matches from its file-name database.
9. You need to search for the string “Tecmint” in all the “.txt” files in the current directory. How will you do it?
Answer : We need to run the find command to search for the text “Tecmint” in the current directory, recursively.
# find . -name "*.txt" | xargs grep "Tecmint"
10. You want to send a message to all connected users as “Server is going down for maintenance”, what will you do?
Answer : This can be achieved using the wall command. The wall command sends a message to all connected users on the server.
# echo "Please save your work immediately. The server is going down for maintenance at 12:30 PM sharp." | wall

wall command

That’s all for now. Don’t forget to give your valuable feedback in the comment section below.

10 Useful Interview Questions on Linux Services and Daemons

A daemon is a computer program that runs as a background process and generally does not remain under the direct control of a user. The parent process of a daemon is in most cases init, but not always.

In Linux, a service is an application that runs in the background, carrying out an essential task or waiting for its execution.

Questions on Linux Services and Daemons

Generally, there is little difference between a daemon and a service: a daemon is a service, but a service may be bigger than a daemon. A daemon provides some service, and a service may contain more than one daemon.

Here in this series of Interview Article, we would be covering Services and Daemons in Linux.

1. What is Exim Service? What is the purpose of this Service?
Answer : Exim is an Open Source Mail Transfer Agent (MTA), which deals with routing, receiving and delivering electronic mail. The Exim service serves as a great replacement for the sendmail service, which comes bundled with most distros.
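A quick way to verify that Exim is installed, and to see its version and the configuration file in use, is the following (on Debian based systems the binary may be named exim4 instead):
# exim -bV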

2. What is NIS server? What is the purpose of NIS Server?

Answer : The NIS (Network Information Service) server makes it possible to log in to other systems with the same login credentials. NIS is a directory service protocol which functions in a client-server model.
3. What will you prefer for a reverse proxy in Linux?
Answer : A reverse proxy is a type of proxy that retrieves resources from server(s) on behalf of a client. Reverse proxy solutions on Linux include squid as well as Apache reverse proxy. However, ‘squid’ is often preferred over ‘Apache reverse Proxy’ because of its simplicity and straightforward nature.
4. You are getting the following codes (2xx, 3xx, 4xx, 5xx) in Apache at some point of time. What does this mean?

Answer : In Apache, each status code range points towards a specific area:

  1. 2xx : Success – request received and processed successfully
  2. 3xx: Redirection
  3. 4xx: Client Error
  4. 5xx: Server Error
5. You are asked to stop Apache Service through its control Script. What will you do?
Answer : The Apache service is controlled using a script called apachectl. In order to stop Apache using its control script, we need to run:
# apachectl stop		[On Debian based Systems]
# /etc/init.d/httpd stop	[On Red Hat based Systems]
6. How is ‘apachectl restart’ different from ‘apachectl graceful’
Answer : ‘apachectl restart’, when executed, will force Apache to restart immediately, before current requests complete, whereas ‘apachectl graceful’ will wait for current requests to finish before restarting the service. Not to mention, ‘apachectl graceful’ is safer to execute, but the execution time for ‘apachectl restart’ is less as compared to ‘apachectl graceful’.
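Both variants in action, assuming apachectl is in your PATH:
# apachectl restart
# apachectl graceful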
7. How will you configure the nfs mounts to export it, from your local machine?
Answer : The file /etc/exports allows the creation of NFS exports on the local machine, making them available to other hosts on the network.
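A minimal sketch of an ‘/etc/exports’ entry, assuming a hypothetical directory ‘/srv/nfs’ shared read-only with the 192.168.0.0/24 network, followed by re-exporting the file systems:
/srv/nfs 192.168.0.0/24(ro,sync)
# exportfs -ra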
8. You are supposed to create a new Apache VirtualHost configuration for the host www.Tecmint.com, whose content is available at /home/Tecmint/public_html/ and which keeps its logs at /var/log/httpd/ by default.
Answer : You need to create an Apache virtual host container in the main Apache configuration file, located at ‘/etc/httpd/conf/httpd.conf’. The following is the virtual host container for www.Tecmint.com.
<VirtualHost *:80>
DocumentRoot /home/Tecmint/public_html
ServerName www.Tecmint.com
ServerAlias Tecmint.com
CustomLog /var/log/httpd/Tecmint.com.log combined
ErrorLog /var/log/httpd/Tecmint.com.error.log
</VirtualHost>
9. You are supposed to dump all the packets of http traffic in file http.out. What will you suggest?
Answer : In order to dump all the HTTP traffic, we need to use the command ‘tcpdump’ with the following switches (note that the options must come before the filter expression).
# tcpdump -s0 -w http.out tcp port 80
10. How will you add a service (say httpd) to start at INIT Level 3?
Answer : We need to use ‘chkconfig’ tool to hook up a service at INIT Level 3 by changing its runlevel parameter.
# chkconfig --level 3 httpd on
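To verify that the service is now hooked at the desired runlevels, list its settings:
# chkconfig --list httpd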

That’s all for now. I’ll be here again with another interesting article very soon.

10 Useful SSH (Secure Shell) Interview Questions and Answers

SSH, which stands for Secure Shell, is a network protocol used to access remote machines in order to execute commands and use network services over a network. SSH is known for its high security and cryptographic behavior, and it is most widely used by network admins to control remote web servers.

10 SSH Interview Questions

Here in this Interview Questions series article, we are presenting 10 useful SSH (Secure Shell) questions and their answers.

1. SSH is configured on what Port Number, by default? How to change the port of SSH?
Answer : SSH is configured on port 22, by default. We can set a custom port number for SSH in its configuration file.

We can check the port number of SSH by running the below one-liner directly on the terminal.

# grep Port /etc/ssh/sshd_config

Note that ‘/etc/ssh/sshd_config‘ is the SSH server (daemon) configuration file on both Red Hat and Debian based systems, while ‘/etc/ssh/ssh_config‘ is the client configuration file.

To change the port of SSH, we need to modify the server configuration file, located at ‘/etc/ssh/sshd_config‘.

# nano /etc/ssh/sshd_config

Search for the line.

Port 22

And replace ‘22‘ with any unused port number, say ‘1080‘. Save the file and restart the SSH service for the changes to take effect.

# service sshd restart					[On Red Hat based systems]

# service ssh restart					[On Debian based systems]
2. As a security implementation, you need to disable root Login on SSH Server, in Linux. What would you suggest?
Answer : The above action can be implemented in the configuration file. We need to change the parameter ‘PermitRootLogin’ to ‘no’ in the configuration file to disable direct root login.

To disable SSH root login, open the server configuration file located at ‘/etc/ssh/sshd_config‘.

# nano /etc/ssh/sshd_config

Change the parameter ‘PermitRootLogin‘ to ‘no‘ and restart the SSH service as shown above.
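After the edit, the relevant line in ‘sshd_config‘ should simply read:
PermitRootLogin no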

3. SSH or Telnet? Why?
Answer : Both SSH and Telnet are network protocols. Both services are used to connect and communicate with another machine over a network. SSH uses port 22 and Telnet uses port 23, by default. Telnet sends data in plain, non-encrypted text that anyone can read, whereas SSH sends data in encrypted format. Not to mention, SSH is more secure than Telnet, and hence SSH is preferred over Telnet.
4. Is it possible to login to an SSH server without a password? How?
Answer : Yes! It is possible to login to a remote SSH server without entering a password. We need to use ssh-keygen to create a public/private key pair.

Generate the key pair using the command below.

$ ssh-keygen

Copy public keys to remote host using the command below.

$ ssh-copy-id -i /home/USER/.ssh/id_rsa.pub REMOTE-SERVER

Note: Replace USER with user name and REMOTE-SERVER by remote server address.

The next time we try to login to the SSH server, it will allow login without asking for a password, using the key pair. For more detailed instructions, read how to login to a remote SSH server without password.

5. How will you allow specific users and groups to have access to the SSH server?
Answer : Yes! It is possible to allow only specific users and groups to have access to the SSH server.

Here again we need to edit the configuration file of the SSH service. Open the configuration file, add users and groups at the bottom as shown below, and then restart the service.

AllowUsers Tecmint Tecmint1 Tecmint2
AllowGroups group_1 group_2 group_3
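Then restart the SSH service for the changes to take effect (the user and group names above are placeholders):
# service sshd restart		[On Red Hat based systems]
# service ssh restart		[On Debian based systems]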
6. How to add welcome/warning message as soon as a user login to SSH Server?
Answer : In order to show a welcome/warning message as soon as a user logs into the SSH server, we need to edit the file ‘/etc/issue’ and add the message there (note that for sshd to display it, the ‘Banner’ directive in ‘sshd_config’ must point to this file).
# nano /etc/issue

And add your custom message in this file. See below a screen grab that shows a custom message displayed as soon as a user logs into the server.

SSH Login Message

7. SSH supports two protocols. Justify this statement.
Answer : SSH uses two protocols – Protocol 1 and Protocol 2. Protocol 1 is older than Protocol 2, and it is also less secure; it should be disabled in the config file.

Again, we need to open the SSH configuration file and add/edit the lines as shown below.

# protocol 2,1

to

Protocol 2

Save the configuration file and restart the service.

8. Is it possible to trace unauthorized login attempts to the SSH server, with the date of intrusion along with the corresponding IP?
Answer : Yes! We can find failed login attempts in the log file located at ‘/var/log/secure’. We can make a filter using the grep command, as shown below.
# grep "Failed password for" /var/log/secure

Note: The grep command can be tweaked in any other way to produce the same result.

9. Is it possible to copy files over SSH? How?
Answer : Yes! We can copy files over SSH using the command SCP, which stands for ‘Secure Copy’. SCP copies files using SSH and is very secure in functioning.

A dummy SCP command in action is depicted below:

$ scp text_file_to_be_copied Your_username@Remote_Host_server:/Path/To/Remote/Directory

For more practical examples on how to copy files/folders using scp command, read the 10 SCP Commands to Copy Files/Folders in Linux.

10. Is it possible to pass input to SSH from a local file? If Yes! How?
Answer : Yes! We can pass input to SSH from a local file, just as we do in scripting languages. Here is a simple one-liner which will pass input from a local file to SSH.
# ssh username@servername < local_file.txt

SSH is a very hot interview topic of all time. The above questions will surely have added to your knowledge.

That’s all for now. I’ll soon be here with another interesting article.

10 Interview Questions and Answers on Various Commands in Linux

Our last article, “10 Useful SSH Interview Questions”, was highly appreciated on various social networking sites as well as on Tecmint. This time we are presenting you with “10 Questions on various Linux commands“. These questions will prove to be brainstorming for you and will add to your knowledge, which will surely help you in day-to-day interaction with Linux and in interviews.

Questions on Various Commands

Q1. You have a file (say virgin.txt). You want this file to be alter-proof so that no one can edit or delete this file, not even root. What will you do?
Answer : In order to make this file immune to editing and deleting, we need to use the command “chattr”. chattr changes the attributes of a file on a Linux system.

The Syntax of command chattr, for the above purpose is:

# chattr +i virgin.txt

Now try to remove the file using normal user.

$ rm -r virgin.txt 

rm: remove write-protected regular empty file `virgin.txt'? Y 
rm: cannot remove `virgin.txt': Operation not permitted

Now try to remove the file using root user.

# rm -r virgin.txt 

cannot remove `virgin.txt': Operation not permitted
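Should the file ever need to be edited or deleted again, the attribute can be listed with lsattr and removed by root using the ‘-i’ flag:
# lsattr virgin.txt
# chattr -i virgin.txt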
Q2. If several users are using your Linux Server, how will you find the usage time of all the users, individually on your server?
Answer : To fulfill the above task, we need to execute the command ‘ac’. The ‘ac’ command may not be installed in your Linux box by default. On a Debian based system, you need the package ‘acct’ installed to run ac.
# apt-get install acct
# ac -p 

(unknown)                     14.18 
server                             235.23 
total      249.42
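The ‘ac’ command can also break the connect time down by day, or report it for a single user (replace USERNAME with an actual login name):
# ac -d
# ac USERNAME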
Q3. Which is preferred tool to create Network Statistics for your server?
Answer : mrtg, which stands for Multi Router Traffic Grapher, is one of the most commonly used tools to monitor network statistics. mrtg is a widely recommended and very powerful FOSS tool. mrtg may not be installed on your Linux box by default, in which case you need to install it manually from the repository.
# apt-get install mrtg
Q4. Is it possible to send a query to the BIOS from the Linux command line?
Answer : Yes! It is possible to send queries and signals to the BIOS directly from the command line. For this you need a tool called “biosdecode”. On my Debian Wheezy (7.4) box, it is already installed.
# biosdecode 

# biosdecode 2.11 

ACPI 2.0 present. 
	OEM Identifier: LENOVO 
	RSD Table 32-bit Address: 0xDDFCA028 
	XSD Table 64-bit Address: 0x00000000DDFCA078 
SMBIOS 2.7 present. 
	Structure Table Length: 3446 bytes 
	Structure Table Address: 0x000ED9D0 
	Number Of Structures: 89 
	Maximum Structure Size: 184 bytes 
PNP BIOS 1.0 present. 
	Event Notification: Not Supported 
	Real Mode 16-bit Code Address: F000:BD76 
	Real Mode 16-bit Data Address: F000:0000 
	16-bit Protected Mode Code Address: 0x000FBD9E 
	16-bit Protected Mode Data Address: 0x000F0000 
PCI Interrupt Routing 1.0 present. 
	Router ID: 00:1f.0 
	Exclusive IRQs: None 
	Compatible Router: 8086:27b8 
	Slot Entry 1: ID 00:1f, on-board 
	...
	Slot Entry 15: ID 02:0c, slot number 2
Q5. Most Linux servers are headless, i.e., they run in command mode only, with no GUI installed. How will you find the hardware description and configuration of your box?
Answer : It is easy to find the hardware description and configuration of a headless Linux server using the command “dmidecode”, which is the DMI table decoder.
# dmidecode

The output of dmidecode is extensive. It will be a nice idea to redirect its output to a file.

# dmidecode > /path/to/text/file/text_file.txt
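To narrow the output down to a single hardware category, use the ‘-t’ (type) switch, for example:
# dmidecode -t memory
# dmidecode -t bios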
Q6. You need to know all the libraries being used and needed by a binary, say ‘/bin/echo’. How will you achieve the desired output?
Answer : With the command ‘ldd’, which prints the shared library dependencies of a binary in Linux.
$ ldd /bin/echo 

linux-gate.so.1 =>  (0xb76f1000) 
libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb7575000) 
/lib/ld-linux.so.2 (0xb76f2000)
Q7. You are working for the country’s army. You have a file (say “topsecret.txt”) which contains confidential national security information, nuclear missile data, etc. What will be your preferred method to delete this file?
Answer : The file, being so confidential, needs a special deletion technique so that it cannot be recovered by any means. For this, we need to use the application “shred”. The shred tool overwrites a file repeatedly, several times, thus making recovery of the file practically impossible.
# shred -n 15 -z topsecret.txt

shred – overwrite a file to hide its contents, and optionally delete it.

  1. -n – Overwrites the files n times
  2. -z – Add a final overwrite with zeros to hide shredding.

Note: The above command overwrites the file 15 times before overwriting with zero, to hide shredding.

Q8. Is it possible to mount an NTFS partition on Linux?
Answer : Yes! We can mount an NTFS partition/disk on a Linux system using the application ‘mount.ntfs’, which is provided by the ‘ntfs-3g’ package.

For more information, read the article on how to mount an NTFS partition on Linux.
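A minimal sketch of such a mount, assuming the partition is /dev/sdb1 and the mount point /mnt/ntfs already exists (both names are placeholders):
# mount -t ntfs-3g /dev/sdb1 /mnt/ntfs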

Q9. What and where do you need to edit so that the default desktop at login will be KDE, which at present is GNOME?
Answer : We need to edit the file ‘/etc/sysconfig/desktop’ and add/edit the below lines to load KDE by default instead of GNOME.
DESKTOP="KDE"
DISPLAYMANAGER="KDE"

Save the file with the above content. The next time the machine boots, it will automatically load KDE as the default desktop.

Q10. What does an initrd image file refer to?
Answer : An initrd is the Initial RAM Disk image, a temporary root file system that is loaded into memory as part of the boot process (after the Power On Self Test, POST) so that the kernel can mount the real root file system.

That’s all for now. I’ll be here again with another interesting topic, worth knowing.

10 Useful ‘Interview Questions and Answers’ on Linux Shell Scripting

Greetings of the day. The vastness of Linux makes it possible to come up with a unique post every time.

Questions on Shell Scripting

We have lots of tutorials on the shell scripting language and interview questions for readers of all kinds; here are the links to those articles.

  1. Shell Scripting Series
  2. Interview Question and Answer Series

Adding to the shell scripting posts here, in this article we will be going through questions related to the Linux shell from an interview point of view.

1. How will you abort a shell script before it is successfully executed?
Answer : We need to use the ‘exit’ command to fulfil the above described situation. When the ‘exit’ command is given any value other than 0 (zero), the script aborts at that point with an error status. The value 0 (zero) under Unix shell scripting represents successful execution. Hence, putting ‘exit -1’, without quotes, before the script’s normal termination will abort the script.

For example, create a following shell script as ‘anything.sh‘.

#!/bin/bash
echo "Hello"
exit -1
echo "bye"

Save the file and execute it.

# sh anything.sh

Hello
exit.sh: 3: exit: Illegal number: -1

From the above output, it is clear that the script executed normally until it reached the ‘exit -1’ command, and aborted there.

2. How do you remove the header from a file using a command in Linux?
Answer : The ‘sed’ command comes to the rescue here, when we need to delete certain lines of a file.

Here is the exact command to remove the header from a file (i.e., the first line of a file).

# sed '1 d' file.txt

The only problem with the above command is that it writes the file, without the first line, to standard output. In order to save the output to a file, we need to use the redirect operator, which will redirect the output to a file.

# sed '1 d' file.txt > new_file.txt

Alternatively, the built-in switch ‘-i‘ for the sed command can perform this operation in place, without a redirect operator.

# sed -i '1 d' file.txt
3. How will you check the length of a line from a text file?
Answer : Again, the ‘sed’ command is used to find or check the length of a line in a text file.

‘sed -n 'n p' file.txt‘, where ‘n‘ represents the line number and ‘p‘ prints out the pattern space (to the standard output), is usually only used in conjunction with the ‘-n’ command-line option. So, how do we get the length count? Obviously! We need to pipe the output into the ‘wc‘ command.

# sed -n 'n p' file.txt | wc -c

To get the length of line number ‘5’ in the text file ‘tecmint.txt‘, we need to run.

# sed -n '5 p' tecmint.txt | wc -c
4. Is it possible to view all the non-printable characters from a text file on Linux System? How will you achieve this?
Answer : Yes! It is very much possible to view all the non-printable characters in Linux. In order to achieve the above said scenario, we need to take the help of the ‘vi’ editor.

How to show non-printable characters in ‘vi‘ editor?

  1. Open the vi editor.
  2. Go to command mode of the vi editor by pressing [esc] followed by ‘:’.
  3. Finally, execute the ‘set list’ command from the command interface of the ‘vi’ editor.

Note: This way we can see all the non-printable characters from a text file including ctrl+m (^M).

5. You are the team leader of a group of staff working for company xyz. The company asks you to create a directory ‘dir_xyz’, such that any member of the group can create or access a file under it, but no one can delete a file except the one who created it. What will you do?
Answer : An interesting scenario to work upon. Well, in the above said scenario we need to implement the below steps, which are as easy as a cakewalk.
# mkdir dir_xyz
# chmod g+wx dir_xyz
# chmod +t dir_xyz

The first command creates the directory (dir_xyz). The second command allows the group (g) to have ‘write‘ and ‘execute‘ permissions, and the last command sets the ‘+t‘ flag at the end of the permission bits, known as the ‘sticky bit‘. It replaces the ‘x‘ and indicates that in this directory, files can only be deleted by their owners, the owner of the directory, or the root superuser.
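
We can verify the result with ‘ls’; the trailing ‘t’ in the permission string confirms that the sticky bit is set:

# ls -ld dir_xyz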

6. Can you tell me the various stages of a Linux process, it passes through?
Answer : A Linux process normally goes through four major stages in its processing life.

Here are the 4 stages of Linux process.

  1. Waiting: Linux Process waiting for a resource.
  2. Running : A Linux process is currently being executed.
  3. Stopped : A Linux Process is stopped after successful execution or after receiving kill signal.
  4. Zombie : A Process is said to be ‘Zombie’ if it has stopped but still active in process table.
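These states can be observed in the STAT column of ps output, where (for example) ‘R’ denotes running, ‘S’ sleeping, ‘T’ stopped and ‘Z’ zombie:
# ps -eo pid,stat,comm | head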
7. What is the use of cut command in Linux?
Answer : ‘cut’ is a very useful Linux command which proves helpful when we need to extract a certain specific part of a file and print it on standard output, for better manipulation when the fields of the file, or the file itself, are too heavy.

For example, extract first 10 columns of a text file ‘txt_tecmint‘.

# cut -c1-10 txt_tecmint

To extract 2nd, 5th and 7th column of the same text file.

# cut -d';' -f2,5,7 txt_tecmint
8. What is the difference between commands ‘cmp’ and ‘diff’?
Answer : The commands ‘cmp’ and ‘diff’ achieve similar things, but with a different mindset.

The ‘diff‘ command reports the changes one should make so that both the files look the same. Whereas ‘cmp‘ command compares the two files byte-by-byte and reports the first mismatch.
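
For instance, given two hypothetical files file1.txt and file2.txt:

# diff file1.txt file2.txt		[Report the changes between the files]
# cmp file1.txt file2.txt		[Report the first mismatching byte]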

9. Is it possible to substitute ‘ls’ command with ‘echo’ command?
Answer : Yes! The ‘ls’ command can be substituted by the ‘echo’ command. The command ‘ls’ lists the contents of a directory. To replace the above command, we can use ‘echo *’, obviously without quotes. The output of the two commands is essentially the same.
10. You might have heard about inodes. Can you describe an inode briefly?
Answer : An ‘inode’ is a data structure used for file identification on Linux. Each file on a Unix system has a separate inode and a unique inode number.

That’s all for now. We will be coming up with another interesting and knowledgeable Interview questions, in the next article.

RHCE (Red Hat Certified Engineer)

RHCE Series: How to Setup and Test Static Network Routing – Part 1

RHCE (Red Hat Certified Engineer) is a certification from Red Hat, the company that provides an open source operating system and software to the enterprise community. It also provides training, support and consulting services for companies.

RHCE Exam Preparation Guide

RHCE (Red Hat Certified Engineer) is a performance-based exam (codename EX300) that tests whether the candidate possesses the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems.

Important: Red Hat Certified System Administrator (RHCSA) certification is required to earn RHCE certification.

Following are the exam objectives based on the Red Hat Enterprise Linux 7 version of the exam, which we are going to cover in this RHCE series:

Part 1: How to Setup and Test Static Routing in RHEL 7

To view fees and register for an exam in your country, check the RHCE Certification page.

In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play.

RHCE: Setup and Test Network Static Routing – Part 1

Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there.

Static Routing in Red Hat Enterprise Linux 7

One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents.

However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow.

Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination.

Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24.

A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server.

This scenario is illustrated in the diagram below:

Static Routing Network Diagram

In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2.

In RHEL 7, you will use the ip command to configure and show devices and routing using the command line. These changes can take effect immediately on a running system but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently.

To begin, let’s print our current routing table:

# ip route show

Check Current Routing Table

From the output above, we can see the following facts:

  1. The default gateway’s IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC.
  2. When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In a few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. Bottom line: this route will allow us to communicate, also via enp0s3, with other machines that have failed to obtain an IP address from a DHCP server.
  3. Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18.

These are the typical tasks that you would have to perform in such a setting. Unless specified otherwise, the following tasks should be performed in router #2:

Make sure all NICs have been properly installed:

# ip link show

If one of them is down, bring it up:

# ip link set dev enp0s8 up

and assign an IP address in the 10.0.0.0/24 network to it:

# ip addr add 10.0.0.17 dev enp0s8

Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18):

# ip addr del 10.0.0.17 dev enp0s8
# ip addr add 10.0.0.18 dev enp0s8

Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:

# ip addr add 192.168.0.19 dev enp0s3

Finally, we will need to enable packet forwarding:

# echo "1" > /proc/sys/net/ipv4/ip_forward

and stop / disable (just for the time being – until we cover packet filtering in the next article) the firewall:

# systemctl stop firewalld
# systemctl disable firewalld

Back in our RHEL 7 box (192.168.0.18), let’s configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2):

# ip route add 10.0.0.0/24 via 192.168.0.19

After that, the routing table looks as follows:

# ip route show

Confirm Network Routing Table

Likewise, add the corresponding route in the machine(s) you’re trying to reach in 10.0.0.0/24:

# ip route add 192.168.0.0/24 via 10.0.0.18

You can test for basic connectivity using ping:

In the RHEL 7 box, run

# ping -c 4 10.0.0.20

where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network.

In the web server (10.0.0.20), run

# ping -c 4 192.168.0.18

where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.

Alternatively, we can use tcpdump (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20.

To do so, let’s start the logging in the first machine with:

# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20

and from another terminal in the same system let’s telnet to port 80 in the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command):

# telnet 10.0.0.20 80

The tcpdump log should look as follows:

Check Network Communication between Servers

There we can see that the connection was properly initialized, as the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20) confirms.

Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don’t already exist) the following files, in the same systems where we performed the above commands.

Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows:

# Enable networking on this system?
NETWORKING=yes
# Hostname. Should match the value in /etc/hostname
HOSTNAME=yourhostnamehere
# Default gateway
GATEWAY=XXX.XXX.XXX.XXX
# Device used to connect to default gateway. Replace X with the appropriate number.
GATEWAYDEV=enp0sX

When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8.

Following our case,

TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.19
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NAME=enp0s3
ONBOOT=yes

and

TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
NAME=enp0s8
ONBOOT=yes

for enp0s3 and enp0s8, respectively.

As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3:

10.0.0.0/24 via 192.168.0.19 dev enp0s3

Now reboot your system and you should see that route in your table.

Summary

In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedures to perform this task. Before wrapping up, I would like to suggest you take a look at Chapter 4 of the Securing and Optimizing Linux section in The Linux Documentation Project site for further details on the topics covered here.

Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) – This 800+ page eBook contains a comprehensive collection of Linux security tips and how to use them safely and easily to configure Linux-based applications and services.

Linux Security and Optimization Book

In the next article we will talk about packet filtering and network address translation to sum up the networking basic skills needed for the RHCE certification.

As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below.

How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters – Part 2

As promised in Part 1 (“Setup Static Network Routing”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise.

RHCE: Network Packet Filtering – Part 2

Network Packet Filtering in RHEL 7

When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator.

As you probably know, beginning with RHEL 7, the default service that manages firewall rules is firewalld. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections – you don’t even have to restart the service.

Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute).

In Part 1, we used the following scenario:

Static Routing Network Diagram

However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. Let’s see now how we can enable incoming packets destined for a specific service or port in the destination.

First, let’s add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18):

# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT

The above command will save the rule to /etc/firewalld/direct.xml:

# cat /etc/firewalld/direct.xml

Check Firewalld Saved Rules

Then enable the rule for it to take effect immediately:

# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT

Now you can telnet to the web server from the RHEL 7 box and run tcpdump again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled.

# telnet 10.0.0.20 80
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20

What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network?

In the web server’s firewall, add the following rules:

# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept' --permanent
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent

Now you can make HTTP requests to the web server from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually time out.

To do so, any of the following commands will do the trick:

# telnet 10.0.0.20 80
# wget 10.0.0.20

I strongly advise you to check out the Firewalld Rich Language documentation in the Fedora Project Wiki for further details on rich rules.

Network Address Translation in RHEL 7

Network Address Translation (NAT) is the process whereby a group of computers (it can also be just one of them) in a private network is assigned a unique public IP address. As a result, they are still uniquely identified by their own private IP addresses inside the network, but to the outside they all “seem” the same.

In addition, NAT makes it possible for computers inside a network to send requests to outside resources (like the Internet) and have the corresponding responses sent back to the source system only.

Let’s now consider the following scenario:

Network Address Translation

In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the internal zone, where masquerading, or NAT, is enabled by default:

# firewall-cmd --list-all --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external --permanent
# firewall-cmd --change-interface=enp0s8 --zone=internal
# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent

For our current setup, the internal zone – along with everything that is enabled in it – will be the default zone:

# firewall-cmd --set-default-zone=internal

Next, let’s reload firewall rules and keep state information:

# firewall-cmd --reload

Finally, let’s add router #2 as default gateway in the web server:

# ip route add default via 10.0.0.18

You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server:

# ping -c 2 192.168.0.1
# ping -c 2 tecmint.com

Verify Network Routing

Setting Kernel Runtime Parameters in RHEL 7

In Linux, you are allowed to change, enable, and disable the kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on-the-fly to modify the system’s behavior without much hassle when operating conditions change.

To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where <category> is most likely one of the following directories:

  1. dev: parameters for specific devices connected to the machine.
  2. fs: filesystem configuration (quotas and inodes, for example).
  3. kernel: kernel-specific configuration.
  4. net: network configuration.
  5. vm: use of the kernel’s virtual memory.

To display the list of all the currently available values, run

# sysctl -a | less

In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing

# echo 1 > /proc/sys/net/ipv4/ip_forward

in order to allow a Linux machine to act as a router.

Another runtime parameter that you may want to set is kernel.sysrq, which enables the Sysrq key in your keyboard to instruct the system to perform gracefully some low-level functions, such as rebooting the system if it has frozen for some reason:

# echo 1 > /proc/sys/kernel/sysrq

To display the value of a specific parameter, use sysctl as follows:

# sysctl <parameter.name>

For example,

# sysctl net.ipv4.ip_forward
# sysctl kernel.sysrq

Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values:

Check Kernel Parameters

In either case, you need to read the kernel’s documentation before making any changes.

Please note that these settings will go away when the system is rebooted. To make these changes permanent, we need to add .conf files inside the /etc/sysctl.d directory, as follows:

# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf

(where the number 10 indicates the order of processing relative to other files in the same directory).

and enable the changes with

# sysctl -p /etc/sysctl.d/10-forward.conf

Summary

In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you!
Don’t hesitate to share with us your questions, comments, or suggestions using the form below.

How to Produce and Deliver System Activity Reports Using Linux Toolsets – Part 3

As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to: 1) make sure they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability, among other reasons.

RHCE: Monitor Linux Performance Activity Reports – Part 3

Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage – to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat.

In this article we will describe both, but let’s first start by reviewing the usage of the classic tools.

Native Linux Tools

With df, you will be able to report disk space and inode usage by filesystem. You need to monitor both, because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can’t link further files with their corresponding data structures, thus producing the same effect: you won’t be able to save those files to disk.

# df -h 		[Display output in human-readable form]
# df -h --total         [Produce a grand total]

Check Linux Total Disk Usage

# df -i 		[Show inode count by filesystem]
# df -i --total 	[Produce a grand total]

Check Linux Total inode Numbers

With du, you can estimate file space usage by either file, directory, or filesystem.

For example, let’s see how much space is used by the /home directory, which includes all of the user’s personal files. The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory as well:

# du -sch /home
# du -sch /home/*

Check Linux Directory Disk Size

Don’t Miss:

  1. 12 ‘df’ Command Examples to Check Linux Disk Space Usage
  2. 10 ‘du’ Command Examples to Find Disk Usage of Files/Directories

Another utility that can’t be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more.

If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain amount of system utilization samples, one after another, with a defined time separation between samples.

For example,

# vmstat 5 10

will return 10 samples taken every 5 seconds:

Check Linux System Performance

As you can see in the above picture, the output of vmstat is divided into columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections of the man page of vmstat.

Where can vmstat come in handy? Let’s examine the behavior of the system before and during a yum update:

# vmstat -a 1 5

Vmstat Linux Performance Monitoring

Please note that as files are being modified on disk, the amount of active memory increases and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us).

Or during the saving process of a large file directly to disk (caused by dsync):

# vmstat -a 1 5
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync

VmStat Linux Disk Performance Monitoring

In this case, we can see an even larger number of blocks being written to disk (bo), which was to be expected, but also an increase in the amount of CPU time that it has to wait for I/O operations to complete before processing tasks (wa).

Don’t Miss: Vmstat – Linux Performance Monitoring

Other Linux Tools

As mentioned in the introduction of this chapter, there are other tools that you can use to check the system status and utilization (they are not only provided by Red Hat but also by other major distributions from their officially supported repositories).

The sysstat package contains the following utilities:

  1. sar (collect, report, or save system activity information).
  2. sadf (display data collected by sar in multiple formats).
  3. mpstat (report processors related statistics).
  4. iostat (report CPU statistics and I/O statistics for devices and partitions).
  5. pidstat (report statistics for Linux tasks).
  6. nfsiostat (report input/output statistics for NFS).
  7. cifsiostat (report CIFS statistics) and
  8. sa1 (collect and store binary data in the system activity daily data file) and
  9. sa2 (write a daily report in the /var/log/sa directory) tools.

whereas dstat adds some extra features to the functionality provided by those tools, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or checking the individual man pages after installation.

To install both packages:

# yum update && yum install sysstat dstat

The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following parameters in that file:

# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=28
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=31
# Parameters for the system activity data collector (see sadc manual page)
# which are used for the generation of log files.
SADC_OPTIONS="-S DISK"
# Compression program to use.
ZIP="bzip2"

When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX where XX is the day of the month.

Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month. This assumes that we are using the default value in the HISTORY variable in the configuration file above:

*/10 * * * * root /usr/lib64/sa/sa1 1 1

The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example:

53 23 * * * root /usr/lib64/sa/sa2 -A

For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs):

# sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv

You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example.

Linux System Statistics

Finally, let’s see what dstat has to offer. Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted anytime with Ctrl + C):

# dstat

Linux Disk Statistics Monitoring

To output the stats to a .csv file, use the '--output' flag followed by a file name.
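
A quick sketch (the file name and the sampling parameters are arbitrary; this takes ten one-second samples while also writing the CSV file):

# dstat --output dstat_report.csv 1 10

Let’s see how this looks on LibreOffice Calc: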

Monitor Linux Statistics Output

I strongly advise you to check out the man page of dstat along with the man page of sysstat in PDF format for your reading convenience. You will find several other options that will help you create custom and detailed system activity reports.

Don’t Miss: Sysstat – Linux Usage Activity Monitoring Tool

Summary

In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends.

You have probably used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community, along with any other suggestions / questions / comments that you may have, using the form below.

We look forward to hearing from you.

Using Shell Scripting to Automate Linux System Maintenance Tasks – Part 4

Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why:

RHCE Series: Automate Linux System Maintenance Tasks – Part 4

if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using,

for example, the tools reviewed in Part 3 – Monitor System Activity Reports Using Linux Toolsets of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial.

What is a shell script?

In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.

By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to this Wikipedia article.

To find out more about the enormous set of features provided by this shell, you may want to check out its man page, which can be downloaded in PDF format (Bash Commands). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through the A Guide from Newbies to SysAdmin article on Tecmint.com before proceeding). Now let’s get started.

Writing a script to display system information

For our convenience, let’s create a directory to store our shell scripts:

# mkdir scripts
# cd scripts

And open a new text file named system_info.sh with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:

#!/bin/bash

# Sample script written for Part 4 of the RHCE series
# This script will return the following set of system information:
# -Hostname information:
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
hostnamectl
echo ""
# -File system disk space usage:
echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
df -h
echo ""
# -Free and used memory in the system:
echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m"
free
echo ""
# -System uptime and load:
echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
uptime
echo ""
# -Logged-in users:
echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
who
echo ""
# -Top 5 processes as far as memory usage is concerned
echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
echo ""
echo -e "\e[1;32mDone.\e[0m"

Next, give the script execute permissions:

# chmod +x system_info.sh

and run it:

# ./system_info.sh

Note that the headers of each section are shown in color for better visualization:

Server Monitoring Shell Script

That functionality is provided by this command:

echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"

Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the Arch Linux Wiki) and <YOUR TEXT HERE> is the string that you want to show in color.
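
For example, using the same red-on-yellow combination (31;43) that the script above uses for its headers:

# echo -e "\e[31;43mWARNING: filesystem almost full\e[0m"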

Automating Tasks

The tasks that you may need to automate may vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting:

1) update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit.

Let’s create a file named auto_tasks.sh in our scripts directory with the following content:

#!/bin/bash

# Sample script to automate tasks:
# -Update local file database:
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
updatedb
if [ $? == 0 ]; then
        echo "The local file database was updated correctly."
else
        echo "The local file database was not updated correctly."
fi
echo ""

# -Find and / or delete files with 777 permissions.
echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
# Enable either option (comment out the other line), but not both.
# Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
#find -type f -perm 0777 -delete
# Option 2: Ask for confirmation before deleting files. More portable across systems.
find -type f -perm 0777 -exec rm -i {} +
echo ""
# -Alert when file system usage surpasses a defined limit 
echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
THRESHOLD=30
while read line; do
        # This variable stores the file system path as a string
        FILESYSTEM=$(echo $line | awk '{print $1}')
        # This variable stores the use percentage (XX%)
        PERCENTAGE=$(echo $line | awk '{print $5}')
        # Use percentage without the % sign.
        USAGE=${PERCENTAGE%?}
        if [ $USAGE -gt $THRESHOLD ]; then
                echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
        fi
done < <(df -h --total | grep -vi filesystem)

Please note that there is a space between the two < signs in the last line of the script.

Shell Script to Find 777 Permissions

Using Cron

To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and send the results to a predefined list of recipients via email, or save them to a file that can be viewed using a web browser.

The following script (filesystem_usage.sh) will run the well-known df -h command, format the output into an HTML table and save it in the report.html file:

#!/bin/bash
# Sample script to demonstrate the creation of an HTML report using shell scripting
# Web directory
WEB_DIR=/var/www/html
# A little CSS and table layout to make the report look a little nicer
echo "<HTML>
<HEAD>
<style>
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
table
{
border-collapse:collapse;
}
table, td, th
{
border:1px solid black;
}
</style>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
</HEAD>
<BODY>" > $WEB_DIR/report.html
# View hostname and insert it at the top of the html body
HOST=$(hostname)
echo "Filesystem usage for host <strong>$HOST</strong><br>
Last updated: <strong>$(date)</strong><br><br>
<table border='1'>
<tr><th class='titulo'>Filesystem</td>
<th class='titulo'>Size</td>
<th class='titulo'>Use %</td>
</tr>" >> $WEB_DIR/report.html
# Read the output of df -h line by line
while read line; do
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
echo "</td></tr>" >> $WEB_DIR/report.html
done < <(df -h | grep -vi filesystem)
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html

In our RHEL 7 server (192.168.0.18), this looks as follows:

Server Monitoring Report

You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:

30 13 * * * /root/scripts/filesystem_usage.sh
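
If you would rather not open an editor with crontab -e, the entry can also be appended non-interactively; a sketch that preserves any lines already present in root’s crontab:

# (crontab -l 2>/dev/null; echo "30 13 * * * /root/scripts/filesystem_usage.sh") | crontab -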

Summary

You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don’t hesitate to add your own ideas or comments via the form below.

How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7 – Part 5

In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take another appropriate action.

RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate – Part 5

In RHEL 7, the rsyslogd daemon is responsible for system logging and reads its configuration from /etc/rsyslog.conf (this file specifies the default location for all system logs) and from files inside /etc/rsyslog.d, if any.

Rsyslogd Configuration

A quick inspection of rsyslog.conf will be helpful to start. This file is divided into 3 main sections: Modules (since rsyslog follows a modular design), Global directives (used to set the global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last section indicates what gets logged or shown (also known as the selector) and where, and will be our focus throughout this article.

A typical line in rsyslog.conf is as follows:

Rsyslogd Configuration

In the image above, we can see that a selector consists of one or more Facility.Priority pairs separated by semicolons, where Facility describes the type of message (refer to section 4.1.1 of RFC 3164 to see the complete list of facilities available for rsyslog) and Priority indicates its severity, which can be one of the following self-explanatory words:

  1. debug
  2. info
  3. notice
  4. warning
  5. err
  6. crit
  7. alert
  8. emerg

Though not a priority itself, the keyword none means to discard all messages of the given facility.

Note that a given priority indicates that all messages of that priority and above should be logged. Thus, the line in the example above instructs the rsyslogd daemon to log all messages of priority info or higher (regardless of the facility) except those belonging to the mail, authpriv, and cron services (no messages coming from those facilities will be taken into account) to /var/log/messages.

You can also group multiple facilities using the comma sign to apply the same priority to all of them. Thus, the line:

*.info;mail.none;authpriv.none;cron.none                /var/log/messages

Could be rewritten as

*.info;mail,authpriv,cron.none                /var/log/messages

In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is applied to the three of them.

Creating a custom log file

To log all daemon messages to /var/log/tecmint.log, we need to add the following line either in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d:

daemon.*    /var/log/tecmint.log

Let’s restart the daemon (note that the service name does not end with a d):

# systemctl restart rsyslog

And check the contents of our custom log before and after restarting two random daemons:

Create Custom Log File
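
You can also generate a test entry yourself with the logger utility, which submits a message with the facility and priority given by -p (the -t tag used below is arbitrary):

# logger -p daemon.info -t logtest "Test message for the custom log"
# tail -n 1 /var/log/tecmint.log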

As a self-study exercise, I would recommend you play around with the facilities and priorities and either log additional messages to existing log files or create new ones as in the previous example.

Rotating Logs using Logrotate

To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress, remove, and alternatively mail logs, thus easing the administration of systems that generate large numbers of log files.

Suggested Read: How to Setup and Manage Log Rotation Using Logrotate in Linux

Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any.

As with the case of rsyslog, even when you can include settings for specific services in the main file, creating separate configuration files for each one will help organize your settings better.

Let’s take a look at a typical logrotate.conf:

Logrotate Configuration

In the example above, logrotate will perform the following actions for /var/log/wtmp: attempt to rotate only once a month, but only if the file is at least 1 MB in size, then create a brand new log file with permissions set to 0664 and ownership given to user root and group utmp. Next, only keep one archived log, as specified by the rotate directive:

Logrotate Logs Monthly
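
For reference, a stanza along the lines just described, in the form typically shipped in /etc/logrotate.conf on RHEL 7 (adjust the values to your needs):

/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}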

Let’s now consider another example as found in /etc/logrotate.d/httpd:

Rotate Apache Log Files

You can read more about the settings for logrotate in its man pages (man logrotate and man logrotate.conf). Both files are provided along with this article in PDF format for your reading convenience.

As a system engineer, it will be pretty much up to you to decide for how long logs will be stored and in what format, depending on whether you have /var in a separate partition or logical volume. Otherwise, you will really want to consider removing old logs to save storage space. On the other hand, you may be forced to keep several logs for future security auditing according to your company’s or client’s internal policies.

Saving Logs to a Database

Of course, examining logs (even with the help of tools such as grep and regular expressions) can become a rather tedious task. For that reason, rsyslog allows us to export them into a database (out-of-the-box supported RDBMSs include MySQL, MariaDB, PostgreSQL, and Oracle).

This section of the tutorial assumes that you have already installed the MariaDB server and client in the same RHEL 7 box where the logs are being managed:

# yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql
# systemctl enable mariadb && systemctl start mariadb

Then use the mysql_secure_installation utility to set the password for the root user and other security considerations:

Secure MySQL Database

Note: If you don’t want to use the MariaDB root user to insert log messages into the database, you can configure another user account to do so. Explaining how to do that is out of the scope of this tutorial, but it is explained in detail in the MariaDB knowledge base. In this tutorial we will use the root account for simplicity.

Next, download the createDB.sql script from GitHub and import it into your database server:

# mysql -u root -p < createDB.sql

Save Server Logs to Database

Finally, add the following lines to /etc/rsyslog.conf:

$ModLoad ommysql
$ActionOmmysqlServerPort 3306
*.* :ommysql:localhost,Syslog,root,YourPasswordHere

Restart rsyslog and the database server:

# systemctl restart rsyslog 
# systemctl restart mariadb

Querying the Logs using SQL syntax

Now perform some tasks that will modify the logs (like stopping and starting services, for example), then log in to your DB server and use standard SQL commands to display and search the logs:

USE Syslog;
SELECT ReceivedAt, Message FROM SystemEvents;

Query Logs in Database
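
Since ReceivedAt and Message are ordinary columns of the SystemEvents table created by createDB.sql, you can filter and sort them as usual. For example, a sketch that returns the ten most recent messages containing the word “start”:

USE Syslog;
SELECT ReceivedAt, Message FROM SystemEvents
WHERE Message LIKE '%start%'
ORDER BY ReceivedAt DESC LIMIT 10;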

Summary

In this article we have explained how to set up system logging, how to rotate logs, and how to redirect the messages to a database for easier search. We hope that these skills will be helpful as you prepare for the RHCE exam and in your daily responsibilities as well.

As always, your feedback is more than welcome. Feel free to use the form below to reach us.

Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows Clients – Part 6

Since computers seldom work as isolated systems, it is to be expected that as a system administrator or engineer, you know how to set up and maintain a network with multiple types of servers.

In this article and in the next of this series we will go through the essentials of setting up Samba and NFS servers with Windows/Linux and Linux clients, respectively.

RHCE: Setup Samba File Sharing – Part 6

This article will definitely come in handy if you’re called upon to set up file servers in corporate or enterprise environments where you are likely to find different operating systems and types of devices.

Since you can read about the background and the technical aspects of both Samba and NFS all over the Internet, in this article and the next we will cut right to the chase with the topic at hand.

Step 1: Installing Samba Server

Our current testing environment consists of two RHEL 7 boxes and one Windows 8 machine, in that order:

1. Samba / NFS server [box1 (RHEL 7): 192.168.0.18], 
2. Samba client #1 [box2 (RHEL 7): 192.168.0.20]
3. Samba client #2 [Windows 8 machine: 192.168.0.106]

Testing Setup for Samba

On box1, install the following packages:

# yum update && yum install samba samba-client samba-common

On box2:

# yum update && yum install samba samba-client samba-common cifs-utils

Once the installation is complete, we’re ready to configure our share.

Step 2: Setting Up File Sharing Through Samba

One of the reasons why Samba is so relevant is that it provides file and print services to SMB/CIFS clients, which causes those clients to see the server as if it were a Windows system (I must admit I tend to get a little emotional while writing about this topic, as it was my first setup as a new Linux system administrator some years ago).

Adding system users and setting up permissions and ownership

To allow for group collaboration, we will create a group named finance with two users (user1 and user2) using the useradd command, and a directory /finance in box1.

We will also change the group owner of this directory to finance and set its permissions to 0770 (read, write, and execute permissions for the owner and the group owner):

# groupadd finance
# useradd user1
# useradd user2
# usermod -a -G finance user1
# usermod -a -G finance user2
# mkdir /finance
# chmod 0770 /finance
# chgrp finance /finance

Step 3: Configuring SELinux and Firewalld

In preparation for configuring /finance as a Samba share, we will need to either disable SELinux or set the proper boolean and security context values as follows (otherwise, SELinux will prevent clients from accessing the share):

# setsebool -P samba_export_all_ro=1 samba_export_all_rw=1
# getsebool -a | grep samba_export
# semanage fcontext -a -t samba_share_t "/finance(/.*)?"
# restorecon /finance

In addition, we must ensure that Samba traffic is allowed through firewalld:

# firewall-cmd --permanent --add-service=samba
# firewall-cmd --reload

Step 4: Configure Samba Share

Now it’s time to dive into the configuration file /etc/samba/smb.conf and add the section for our share: we want the members of the finance group to be able to browse the contents of /finance, and save / create files or subdirectories in it (which by default will have their permission bits set to 0770 and finance will be their group owner):

smb.conf
[finance]
comment=Directory for collaboration of the company's finance team
browsable=yes
path=/finance
public=no
valid users=@finance
write list=@finance
writeable=yes
create mask=0770
force create mode=0770
force group=finance

Save the file and then test it with the testparm utility. If there are any errors, the output of the following command will indicate what you need to fix. Otherwise, it will display a review of your Samba server configuration:

Test Samba Configuration

Should you want to add another share that is open to the public (meaning without any authentication whatsoever), create another section in /etc/samba/smb.conf and under the new share’s name copy the section above, only changing public=no to public=yes and not including the valid users and write list directives.
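
For example, a minimal sketch of such a section, assuming /public is the directory to be shared (both the share name and the path are placeholders you should adapt to your setup):

[public]
comment=Public directory open to all clients
browsable=yes
path=/public
public=yes
writeable=yes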

Step 5: Adding Samba Users

Next, you will need to add user1 and user2 as Samba users. To do so, you will use the smbpasswd command, which interacts with Samba’s internal database. You will be prompted to enter a password that you will later use to connect to the share:

# smbpasswd -a user1
# smbpasswd -a user2

Finally, restart Samba, enable the service to start on boot, and make sure the share is actually available to network clients:

# systemctl start smb
# systemctl enable smb
# smbclient -L localhost -U user1
# smbclient -L localhost -U user2

Verify Samba Share

At this point, the Samba file server has been properly installed and configured. Now it’s time to test this setup on our RHEL 7 and Windows 8 clients.

Step 6: Mounting the Samba Share in Linux

First, make sure the Samba share is accessible from this client:

# smbclient -L 192.168.0.18 -U user2

Mount Samba Share on Linux

(repeat the above command for user1)

As with any other storage media, you can mount (and later unmount) this network share when needed:

# mount //192.168.0.18/finance /media/samba -o username=user1

Mount Samba Network Share

(where /media/samba is an existing directory)

or permanently, by adding the following entry to the /etc/fstab file:

fstab
//192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0

Where the hidden file /media/samba/.smbcredentials (whose permissions and ownership have been set to 600 and root:root, respectively) contains two lines that indicate the username and password of an account that is allowed to use the share:

.smbcredentials
username=user1
password=PasswordForUser1
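
As a quick sketch, these are the commands that enforce the permissions and ownership mentioned above:

# chown root:root /media/samba/.smbcredentials
# chmod 600 /media/samba/.smbcredentials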

Finally, let’s create a file inside /finance and check the permissions and ownership:

# touch /media/samba/FileCreatedInRHELClient.txt

Create File in Samba Share

As you can see, the file was created with 0770 permissions and ownership set to user1:finance.

Step 7: Mounting the Samba Share in Windows

To mount the Samba share in Windows, go to My PC and choose Computer, then Map network drive. Next, assign a letter for the drive to be mapped and check Connect using different credentials (the screenshots below are in Spanish, my native language):

Mount Samba Share in Windows

Finally, let’s create a file and check the permissions and ownership:

Create Files on Windows Samba Share

# ls -l /finance

This time the file belongs to user2 since that’s the account we used to connect from the Windows client.

Summary

In this article we have explained not only how to set up a Samba server and two clients using different operating systems, but also how to configure the firewalld and SELinux on the server to allow the desired group collaboration capabilities.

Last, but not least, let me recommend reading the online man page of smb.conf to explore other configuration directives that may be more suitable for your case than the scenario described in this article.

As always, feel free to drop a comment using the form below if you have any comments or suggestions.

Setting Up NFS Server with Kerberos-based Authentication for Linux Clients – Part 7

In the last article of this series, we reviewed how to set up a Samba share over a network that may consist of multiple types of operating systems. Now, if you need to set up file sharing for a group of Unix-like clients you will automatically think of the Network File System, or NFS for short.

RHCE Series: Setting Up NFS Server with Kerberos Authentication – Part 7

In this article we will walk you through the process of using Kerberos-based authentication for NFS shares. It is assumed that you have already set up an NFS server and a client. If not, please refer to Install and Configure NFS Server, which lists the necessary packages and explains how to perform the initial configurations on the server before proceeding further.

In addition, you will want to configure both SELinux and firewalld to allow for file sharing through NFS.

The following example assumes that your NFS share is located in /nfs in box2:

# semanage fcontext -a -t public_content_rw_t "/nfs(/.*)?"
# restorecon -R /nfs
# setsebool -P nfs_export_all_rw on
# setsebool -P nfs_export_all_ro on

(where the -P flag indicates persistence across reboots).

Finally, don’t forget to:

Create NFS Group and Configure NFS Share Directory

1. Create a group called nfs and add the nfsnobody user to it, then change the permissions of the /nfs directory to 0770 and its group owner to nfs. Thus, nfsnobody (which is mapped to the client requests) will have write permissions on the share, and you won’t need to use no_root_squash in the /etc/exports file.

# groupadd nfs
# usermod -a -G nfs nfsnobody
# chmod 0770 /nfs
# chgrp nfs /nfs

2. Modify the exports file (/etc/exports) as follows to only allow access from box1 using Kerberos security (sec=krb5).

Note that the value of anongid has been set to the GID of the nfs group that we created previously:

exports – Add NFS Share
/nfs box1(rw,sec=krb5,anongid=1004)

3. Re-export (-r) all (-a) the NFS shares. Adding verbosity to the output (-v) is a good idea since it will provide helpful information to troubleshoot the server if something goes wrong:

# exportfs -arv

4. Restart and enable the NFS server and related services. Note that you don’t have to enable nfs-lock and nfs-idmapd because they will be automatically started by the other services on boot:

# systemctl restart rpcbind nfs-server nfs-lock nfs-idmap
# systemctl enable rpcbind nfs-server

Testing Environment and Other Prerequisites

In this guide we will use the following test environment:

  1. Client machine [box1: 192.168.0.18]
  2. NFS / Kerberos server [box2: 192.168.0.20] (also known as Key Distribution Center, or KDC for short).

Note that the Kerberos service is crucial to the authentication scheme.

As you can see, the NFS server and the KDC are hosted in the same machine for simplicity, although you can set them up in separate machines if you have more available. Both machines are members of the mydomain.com domain.

Last but not least, Kerberos requires at least a basic name resolution scheme and the Network Time Protocol service to be present in both client and server, since the security of Kerberos authentication is in part based upon the timestamps of tickets.

To set up name resolution, we will use the /etc/hosts file in both client and server:

host file – Add DNS for Domain
192.168.0.18    box1.mydomain.com    box1
192.168.0.20    box2.mydomain.com    box2

In RHEL 7, chrony is the default software used for NTP synchronization:

# yum install chrony
# systemctl start chronyd
# systemctl enable chronyd

To make sure chrony is actually synchronizing your system’s time with time servers you may want to issue the following command two or three times and make sure the offset is getting nearer to zero:

# chronyc tracking

Synchronize Server Time with Chrony

Installing and Configuring Kerberos

To set up the KDC, install the following packages on both server and client (omit the server package in the client):

# yum update && yum install krb5-server krb5-workstation pam_krb5

Once they are installed, edit the configuration files (/etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl) and replace all instances of example.com (lowercase and uppercase) with mydomain.com.
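
One way to perform that replacement in bulk is with sed; a sketch (the -i.orig suffix keeps a backup copy of each file so you can review the changes afterwards):

# sed -i.orig 's/example\.com/mydomain.com/g; s/EXAMPLE\.COM/MYDOMAIN.COM/g' /etc/krb5.conf /var/kerberos/krb5kdc/kadm5.acl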

Now create the Kerberos database (please note that this may take a while as it requires some level of entropy in your system. To speed things up, I opened another terminal and ran ping -f localhost for 30-45 seconds):

# kdb5_util create -s

Create Kerberos Database

Next, enable Kerberos through the firewall and start / enable the related services.

Important: nfs-secure must be started and enabled on the client as well:

# firewall-cmd --permanent --add-service=kerberos
# systemctl start krb5kdc kadmin nfs-secure   
# systemctl enable krb5kdc kadmin nfs-secure       

Next, using the kadmin.local tool, create an admin principal for root:

# kadmin.local
# addprinc root/admin

And add the Kerberos server to the database:

# addprinc -randkey host/box2.mydomain.com

Do the same with the NFS service principal for both client (box1) and server (box2). Please note that in the screenshot below I forgot to do it for box1 before quitting:

# addprinc -randkey nfs/box2.mydomain.com
# addprinc -randkey nfs/box1.mydomain.com

And exit by typing quit and pressing Enter:

Add Kerberos to NFS Server

Then obtain and cache a Kerberos ticket-granting ticket for root/admin:

# kinit root/admin
# klist

Cache Kerberos

The last step before actually using Kerberos is storing the principals that are authorized to use Kerberos authentication into a keytab file (on the server):

# kadmin.local
# ktadd host/box2.mydomain.com
# ktadd nfs/box2.mydomain.com
# ktadd nfs/box1.mydomain.com

Finally, mount the share and perform a write test:

# mount -t nfs4 -o sec=krb5 box2:/nfs /mnt
# echo "Hello from Tecmint.com" > /mnt/greeting.txt

Mount NFS Share

Let’s now unmount the share, rename the keytab file in the client (to simulate that it is not present) and try to mount the share again (the mount should now fail):

# umount /mnt
# mv /etc/krb5.keytab /etc/krb5.keytab.orig

Mount Unmount Kerberos NFS Share

Once the keytab file has been restored, you can continue using the NFS share with Kerberos-based authentication.

Summary

In this article we have explained how to set up NFS with Kerberos authentication. Since there is much more to the topic than we can cover in a single guide, feel free to check the online Kerberos documentation; and since Kerberos is a bit tricky, to say the least, don’t hesitate to drop us a note using the form below if you run into any issue or need help with your testing or implementation.

RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache – Part 8

If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times.

Setup Apache HTTPS Using SSL/TLS

In order to provide more secure communications between web clients and servers, the HTTPS protocol was born as a combination of HTTP and SSL (Secure Sockets Layer) or more recently, TLS (Transport Layer Security).

Due to some serious security breaches, SSL has been deprecated in favor of the more robust TLS. For that reason, in this article we will explain how to secure connections between your web server and clients using TLS.

This tutorial assumes that you have already installed and configured your Apache web server. If not, please refer to the following article on this site before proceeding further:

  1. Install LAMP (Linux, MySQL/MariaDB, Apache and PHP) on RHEL/CentOS 7

Installation of OpenSSL and Utilities

First off, make sure that Apache is running and that both http and https are allowed through the firewall:

# systemctl start httpd
# systemctl enable httpd
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https

Then install the necessary packages:

# yum update && yum install openssl mod_nss crypto-utils

Important: Please note that you can replace mod_nss with mod_ssl in the command above if you want to use OpenSSL libraries instead of NSS (Network Security Service) to implement TLS (which one to use is left entirely up to you, but we will use NSS in this article as it is more robust; for example, it supports recent cryptography standards such as PKCS #11).

Finally, uninstall mod_ssl if you chose to use mod_nss, or vice versa:

# yum remove mod_ssl

Configuring NSS (Network Security Service)

After mod_nss is installed, its default configuration file is created as /etc/httpd/conf.d/nss.conf. You should then make sure that all of the Listen and VirtualHost directives point to port 443 (default port for HTTPS):

nss.conf – Configuration File
Listen 443
<VirtualHost _default_:443>

Then restart Apache and check whether the mod_nss module has been loaded:

# apachectl restart
# httpd -M | grep nss

Check Mod_NSS Module Loaded in Apache

Next, the following edits should be made in the /etc/httpd/conf.d/nss.conf configuration file:

1. Indicate NSS database directory. You can use the default directory or create a new one. In this tutorial we will use the default:

NSSCertificateDatabase /etc/httpd/alias

2. To avoid entering the passphrase manually on each system start, save the NSS database password in /etc/httpd/nss-db-password.conf and point the NSSPassPhraseDialog directive to that file:

NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf

Where /etc/httpd/nss-db-password.conf contains ONLY the following line and mypassword is the password that you will set later for the NSS database:

internal:mypassword

In addition, its permissions and ownership should be set to 0640 and root:apache, respectively:

# chmod 640 /etc/httpd/nss-db-password.conf
# chgrp apache /etc/httpd/nss-db-password.conf

3. Red Hat recommends disabling SSL and all versions of TLS previous to TLSv1.0 due to the POODLE SSLv3 vulnerability (more information here).

Make sure that every instance of the NSSProtocol directive reads as follows (you are likely to find only one if you are not hosting other virtual hosts):

NSSProtocol TLSv1.0,TLSv1.1

4. Since this is a self-signed certificate, Apache will not recognize the issuer as valid and will refuse to restart. For this reason, in this particular case you will have to add:

NSSEnforceValidCerts off

5. Though not strictly required, it is important to set a password for the NSS database:

# certutil -W -d /etc/httpd/alias

Set Password for NSS Database

Creating an Apache SSL Self-Signed Certificate

Next, we will create a self-signed certificate that will identify the server to our clients (please note that this method is not the best option for production environments; for such use you may want to consider buying a certificate verified by a trusted 3rd party certificate authority, such as DigiCert).

To create a new NSS-compliant certificate for box1 which will be valid for 365 days, we will use the genkey command:

# genkey --nss --days 365 box1

Choose Next:

Create Apache SSL Key

You can leave the default choice for the key size (2048), then choose Next again:

Select Apache SSL Key Size

Wait while the system generates random bits:

Generating Random Key Bits

To speed up the process, you will be prompted to enter random text in your console, as shown in the following screencast. Please note how the progress bar stops when no input from the keyboard is received. Then, you will be asked:

1. Whether to send the Certificate Sign Request (CSR) to a Certificate Authority (CA): choose No, as this is a self-signed certificate.

2. To enter the information for the certificate.

Finally, you will be prompted to enter the password to the NSS certificate that you set earlier:

Apache NSS Certificate Password

At any time, you can list the existing certificates with:

# certutil -L -d /etc/httpd/alias

List Apache NSS Certificates

And delete them by name (only if strictly required, replacing box1 with your own certificate name) with:

# certutil -d /etc/httpd/alias -D -n "box1"

Testing Apache SSL HTTPS Connections

Finally, it’s time to test the secure connection to our web server. When you point your browser to https://<web server IP or hostname>, you will get the well-known message “This connection is untrusted“:

Check Apache SSL Connection

In the above situation, you can click on Add Exception and then Confirm Security Exception – but don’t do it yet. Let’s first examine the certificate to see if its details match the information that we entered earlier (as shown in the screencast).

To do so, click on View… –> Details tab above and you should see this when you select Issuer from the list:

Confirm Apache SSL Certificate Details

Now you can go ahead, confirm the exception (either for this time or permanently) and you will be taken to your web server’s DocumentRoot directory via https, where you can inspect the connection details using your browser’s built-in developer tools:

In Firefox you can launch them by right-clicking on the page and choosing Inspect Element from the context menu; the relevant information is found in the Network tab:

Inspect Apache HTTPS Connection

Please note that this is the same information that was entered when the certificate was created. There’s also a way to test the connection using command line tools:

On the left (testing SSLv3):

# openssl s_client -connect localhost:443 -ssl3

On the right (testing TLS):

# openssl s_client -connect localhost:443 -tls1

Testing Apache SSL and TLS Connections

Refer to the screenshot above for more details.

Summary

As I’m sure you already know, the presence of HTTPS inspires trust in visitors who may have to enter personal information in your site (from user names and passwords all the way to financial / bank account information).

In that case, you will want to get a certificate signed by a trusted Certificate Authority as we explained earlier (the steps to set it up are identical with the exception that you will need to send the CSR to a CA, and you will get the signed certificate back); otherwise, a self-signed certificate as the one used in this tutorial will do.

For more details on the use of NSS, please refer to the online help about mod-nss. And don’t hesitate to let us know if you have any questions or comments.

How to Setup Postfix Mail Server (SMTP) using null-client Configuration – Part 9

Regardless of the many online communication methods that are available today, email remains a practical way to deliver messages from one end of the world to another, or to a person sitting in the office next to ours.

The following image illustrates the process of email transport starting with the sender until the message reaches the recipient’s inbox:

How Mail Setup Works

To make this possible, several things happen behind the scenes. In order for an email message to be delivered from a client application (such as Thunderbird or Outlook, or webmail services such as Gmail or Yahoo! Mail) to a mail server, and from there to the destination server and finally to its intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place on each server.

That is the reason why in this article we will explain how to set up an SMTP server in RHEL 7, where emails sent by local users (even to other local users) are forwarded to a central mail server for easier access.

In the exam’s requirements this is called a null-client setup.

Our test environment will consist of an originating mail server and a central mail server or relayhost.

Originating Mail Server: (hostname: box1.mydomain.com / IP: 192.168.0.18)
Central Mail Server: (hostname: mail.mydomain.com / IP: 192.168.0.20)

For name resolution we will use the well-known /etc/hosts file on both boxes:

192.168.0.18    box1.mydomain.com       box1
192.168.0.20    mail.mydomain.com       mail

Installing Postfix and Firewall / SELinux Considerations

To begin, we will need to (in both servers):

1. Install Postfix:

# yum update && yum install postfix

2. Start the service and enable it to run on future reboots:

# systemctl start postfix
# systemctl enable postfix

3. Allow mail traffic through the firewall:

# firewall-cmd --permanent --add-service=smtp
# firewall-cmd --add-service=smtp

Open Mail Server SMTP Port in Firewall

4. Configure Postfix on box1.mydomain.com.

Postfix’s main configuration file is located in /etc/postfix/main.cf. This file itself is a great documentation source as the included comments explain the purpose of the program’s settings.

For brevity, let’s display only the lines that need to be edited (yes, you need to leave mydestination blank in the originating server; otherwise the emails will be stored locally instead of being relayed to the central mail server, which is what we actually want):

Configure Postfix on box1.mydomain.com
myhostname = box1.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
inet_interfaces = loopback-only
mydestination =
relayhost = 192.168.0.20
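
If you prefer not to edit main.cf by hand, the same settings can be applied with postconf -e, which rewrites the file for you; a sketch of the equivalent commands:

# postconf -e 'myhostname = box1.mydomain.com'
# postconf -e 'mydomain = mydomain.com'
# postconf -e 'myorigin = $mydomain'
# postconf -e 'inet_interfaces = loopback-only'
# postconf -e 'mydestination ='
# postconf -e 'relayhost = 192.168.0.20'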

5. Configure Postfix on mail.mydomain.com.

Configure Postfix on mail.mydomain.com
myhostname = mail.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 192.168.0.0/24, 127.0.0.0/8

And set the related SELinux boolean to true permanently if not already done:

# setsebool -P allow_postfix_local_write_mail_spool on

Set Postfix SELinux Permission

The above SELinux boolean will allow Postfix to write to the mail spool in the central server.

6. Restart the service on both servers for the changes to take effect:

# systemctl restart postfix

If Postfix does not start correctly, you can use the following commands to troubleshoot:

# systemctl -l status postfix
# journalctl -xn
# postconf -n

Testing the Postfix Mail Servers

To test the mail servers, you can use any Mail User Agent (most commonly known as MUA for short) such as mail or mutt.

Since mutt is a personal favorite, I will use it in box1 to send an email to user tecmint using an existing file (mailbody.txt) as message body:

# mutt -s "Part 9-RHCE series" tecmint@mydomain.com < mailbody.txt

Test Postfix Mail Server
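
If mutt is not available, the mail command provided by the mailx package can be used in much the same way; a sketch with an inline message body instead of a file:

# echo "This is a test message" | mail -s "Part 9-RHCE series" tecmint@mydomain.com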

Now go to the central mail server (mail.mydomain.com), log on as user tecmint, and check whether the email was received:

# su - tecmint
# mail

Check Postfix Mail Server Delivery

If the email was not received, check root’s mail spool for a warning or error notification. You may also want to make sure that the SMTP service is running on both servers and that port 25 is open in the central mail server, using the nmap command:

# nmap -PN 192.168.0.20

Troubleshoot Postfix Mail Server

Summary

Setting up a mail server and a relay host as shown in this article is an essential skill that every system administrator must have, and represents the foundation to understand and install a more complex scenario, such as a mail server hosting a live domain for several (even hundreds or thousands of) email accounts.

Please note that this kind of setup requires a DNS server, which is out of the scope of this guide, but you can use the following article to set up a DNS server:

  1. Setup Cache only DNS Server in CentOS/RHEL 7

Finally, I highly recommend you become familiar with Postfix’s configuration file (main.cf) and the program’s man page. If in doubt, don’t hesitate to drop us a line using the form below or using our forum, Linuxsay.com, where you will get almost immediate help from Linux experts from all around the world.

Install and Configure Caching-Only DNS Server in RHEL/CentOS 7 – Part 10

DNS servers come in several types, such as master, slave, forwarding, and cache, to name a few examples, with cache-only DNS being the easiest to set up. Since DNS uses the UDP protocol, it improves query times because UDP does not require an acknowledgement.

RHCE Series: Setup Cache-Only DNS in RHEL and CentOS 7 – Part 10

The cache-only DNS server is also known as a resolver: it queries DNS records, fetches all the DNS details from other servers, and keeps each query request in its cache for later use, so that when we perform the same request in the future, it will be served from the cache, thus reducing the response time even more.

If you’re looking to set up a DNS caching-only server in CentOS/RHEL 6, follow this guide:

Setting Up Caching-Only DNS Name Server in CentOS/RHEL 6

My Testing Environment

DNS server		:	dns.tecmintlocal.com (Red Hat Enterprise Linux 7.1)
Server IP Address	:	192.168.0.18
Client			:	node1.tecmintlocal.com (CentOS 7.1)
Client IP Address	:	192.168.0.29

Step 1: Installing Cache-Only DNS Server in RHEL/CentOS 7

1. The cache-only DNS server can be installed via the bind package. If you don’t remember the package name, you can do a quick search for it using the command below.

# yum search bind

Search DNS Bind Package

2. In the above result, you will see several packages. From those, we need to choose and install only the bind and bind-utils packages using the following yum command.

# yum install bind bind-utils -y

Install DNS Bind in RHEL/CentOS 7

Step 2: Configure Cache-Only DNS in RHEL/CentOS 7

3. Once the DNS packages are installed, we can go ahead and configure DNS. Open and edit /etc/named.conf using your preferred text editor, and make the changes suggested below (or use your own settings as per your requirements).

listen-on port 53 { 127.0.0.1; any; };
allow-query     { localhost; any; };
allow-query-cache       { localhost; any; };

Configure Cache-Only DNS in CentOS and RHEL 7

These directives instruct the DNS server to listen on UDP port 53 and to allow queries and cached responses from localhost and any other machine that reaches the server.

4. It is important to note that the ownership of this file must be set to root:named, and also, if SELinux is enabled, that after editing the configuration file its context is set to named_conf_t, as shown in Fig. 4 (the same applies to the auxiliary file /etc/named.rfc1912.zones):

# ls -lZ /etc/named.conf
# ls -lZ /etc/named.rfc1912.zones

Otherwise, configure the SELinux context before proceeding:

# semanage fcontext -a -t named_conf_t /etc/named.conf
# semanage fcontext -a -t named_conf_t /etc/named.rfc1912.zones
# restorecon -v /etc/named.conf /etc/named.rfc1912.zones

5. Additionally, we need to test the DNS configuration for syntax errors before starting the bind service:

# named-checkconf /etc/named.conf

6. After the syntax verification passes, restart the named service for the new changes to take effect, enable the service to auto-start across system boots, and then check its status:

# systemctl restart named
# systemctl enable named
# systemctl status named

Configure and Start DNS Named Service

7. Next, open port 53 on the firewall:

# firewall-cmd --add-port=53/udp
# firewall-cmd --add-port=53/udp --permanent

Open DNS Port 53 on Firewall

Step 3: Chroot Cache-Only DNS Server in RHEL and CentOS 7

8. If you wish to deploy the cache-only DNS server within a chroot environment, you need to install the bind-chroot package; no further configuration is needed, as by default the files are hard-linked into the chroot.

# yum install bind-chroot -y

Once chroot package has been installed, you can restart named to take the new changes into effect:

# systemctl restart named

9. Next, create a symbolic link (also named /etc/named.conf) inside /var/named/chroot/etc/:

# ln -s /etc/named.conf /var/named/chroot/etc/named.conf

Step 4: Configure DNS on Client Machine

10. Add the DNS cache server’s IP 192.168.0.18 as the resolver on the client machine. Edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 as shown in the following figure:

DNS=192.168.0.18

Configure DNS on Client Machine

And /etc/resolv.conf as follows:

nameserver 192.168.0.18

11. Finally, it’s time to check our cache server. To do this, you can use the dig utility or the nslookup command.

Choose any website and query it twice (we will use facebook.com as an example). Note that with dig the second time the query is completed much faster because it is being served from the cache.

# dig facebook.com

Check Cache only DNS Queries

You can also use nslookup to verify that the DNS server is working as expected.

# nslookup facebook.com

Checking DNS Query with nslookup

Summary

In this article we have explained how to set up a DNS Cache-only server in Red Hat Enterprise Linux 7 and CentOS 7, and tested it in a client machine. Feel free to let us know if you have any questions or suggestions using the form below.

How to Setup and Configure Network Bonding or Teaming in RHEL/CentOS 7 – Part 11

When a system administrator wants to increase the bandwidth available and provide redundancy and load balancing for data transfers, a kernel feature known as network bonding allows you to get the job done in a cost-effective way.

Read more about how to increase or throttle bandwidth in Linux.

In simple words, bonding means aggregating two or more physical network interfaces (called slaves) into a single, logical one (called master). If a specific NIC (Network Interface Card) experiences a problem, communications are not affected significantly as long as the other(s) remain active.

Read more about network bonding in Linux systems here:

  1. Network Teaming or NIC Bonding in RHEL/CentOS 6/5
  2. Network NIC Bonding or Teaming on Debian based Systems
  3. How to Configure Network Bonding or Teaming in Ubuntu

Enabling and Configuring Network Bonding or Teaming

By default, the bonding kernel module is not enabled. Thus, we will need to load it and ensure it is persistent across boots. When used with the --first-time option, modprobe will alert us if loading the module fails:

# modprobe --first-time bonding

The above command will load the bonding module for the current session. In order to ensure persistency, create a .conf file inside /etc/modules-load.d with a descriptive name, such as /etc/modules-load.d/bonding.conf:

# echo "# Load the bonding kernel module at boot" > /etc/modules-load.d/bonding.conf
# echo "bonding" >> /etc/modules-load.d/bonding.conf

Now reboot your server and once it restarts, make sure the bonding module is loaded automatically, as seen in Fig. 1:

Check Network Bonding Module Loaded in Kernel
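
A quick way to perform that check from the command line (the module should show up in the output):

# lsmod | grep bonding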

In this article we will use 3 interfaces (enp0s3, enp0s8, and enp0s9) to create a bond, conveniently named bond0.

To create bond0, we will use nmtui, the text interface for controlling NetworkManager. When invoked without arguments from the command line, nmtui brings up a text interface that allows you to edit an existing connection, activate a connection, or set the system hostname.

Choose Edit connection –> Add –> Bond as illustrated in Fig. 2:

Create Network Bonding Channel

In the Edit Connection screen, add the slave interfaces (enp0s3, enp0s8, and enp0s9 in our case) and give them descriptive (Profile) names (for example, NIC #1, NIC #2, and NIC #3, respectively).

In addition, you will need to set a name and device for the bond (TecmintBond and bond0 in Fig. 3, respectively) and an IP address for bond0, enter a gateway address, and the IPs of DNS servers.

Note that you do not need to enter the MAC address of each interface since nmtui will do that for you. You can leave all other settings as default. See Fig. 3 for more details.

Network Bonding Teaming Configuration

When you’re done, go to the bottom of the screen and choose OK (see Fig. 4):

Configuration of bond0

And you’re done. Now you can exit the text interface and return to the command line, where you will enable the newly created interface using the ip command:

# ip link set dev bond0 up

After that, you can see that bond0 is UP and is assigned 192.168.0.200, as seen in Fig. 5:

# ip addr show bond0

Check Network Bond Interface Status

Testing Network Bonding or Teaming in Linux

To verify that bond0 actually works, you can either ping its IP address from another machine, or what’s even better, watch the kernel interface table in real time (well, the refresh time in seconds is given by the -n option) to see how network traffic is distributed between the three network interfaces, as shown in Fig. 6.

The -d option is used to highlight changes when they occur:

# watch -d -n1 netstat -i

Check Kernel Interface Table

It is important to note that there are several bonding modes, each with its distinguishing characteristics. They are documented in section 4.5 of the Red Hat Enterprise Linux 7 Network Administration guide. Depending on your needs, you will choose one or the other.

In our current setup, we chose the Round-robin mode (see Fig. 3), which ensures packets are transmitted beginning with the first slave in sequential order, ending with the last slave, and starting with the first again.

The Round-robin alternative is also called mode 0, and provides load balancing and fault tolerance. To change the bonding mode, you can use nmtui as explained before (see also Fig. 7):

Changing Bonding Mode Using nmtui

If we change it to Active Backup, we will be prompted to choose a slave that will be the only active interface at a given time. If that card fails, one of the remaining slaves will take its place and become active.

Let’s choose enp0s3 to be the primary slave, bring bond0 down and up again, restart the network, and display the kernel interface table (see Fig. 8).

Note how data transfers (TX-OK and RX-OK) are now being made over enp0s3 only:

# ip link set dev bond0 down
# ip link set dev bond0 up
# systemctl restart network

Bond Acting in Active Backup Mode

Alternatively, you can view the bond as the kernel sees it (see Fig. 9):

# cat /proc/net/bonding/bond0

Check Network Bond as Kernel

Summary

In this chapter we have discussed how to set up and configure bonding in Red Hat Enterprise Linux 7 (also works on CentOS 7 and Fedora 22+) in order to increase bandwidth along with load balancing and redundancy for data transfers.

As you take the time to explore other bonding modes, you will come to master the concepts and practice related with this topic of the certification.

If you have questions about this article, or suggestions to share with the rest of the community, feel free to let us know using the comment form below.

Create Centralized Secure Storage using iSCSI Target / Initiator on RHEL/CentOS 7 – Part 12

iSCSI is a block-level protocol for managing storage devices over TCP/IP networks, especially over long distances. An iSCSI target is a remote hard disk presented by a remote iSCSI server, or target. On the other hand, the iSCSI client is called the initiator, and it will access the storage that is shared by the target machine.

The following machines have been used in this article:

Server (Target):

Operating System – Red Hat Enterprise Linux 7
iSCSI Target IP – 192.168.0.29
Ports Used : TCP 860, 3260

Client (Initiator):

Operating System – Red Hat Enterprise Linux 7
iSCSI Initiator IP – 192.168.0.30
Ports Used : TCP 3260

Step 1: Installing Packages on iSCSI Target

To install the packages needed for the target (we will deal with the client later), do:

# yum install targetcli -y

When the installation completes, we will start and enable the service as follows:

# systemctl start target
# systemctl enable target

Finally, we need to allow the service in firewalld:

# firewall-cmd --add-service=iscsi-target
# firewall-cmd --add-service=iscsi-target --permanent

And last but not least, we must not forget to allow the iSCSI target discovery:

# firewall-cmd --add-port=860/tcp
# firewall-cmd --add-port=860/tcp --permanent
# firewall-cmd --reload

Step 2: Defining LUNs in Target Server

Before proceeding to defining LUNs in the Target, we need to create two logical volumes as explained in Part 6 of RHCSA series (“Configuring system storage”).

This time we will name them vol_projects and vol_backups and place them inside a volume group called vg00, as shown in Fig. 1. Feel free to choose the space allocated to each LV:

Fig 1: Two Logical Volumes Named vol_projects and vol_backups

After creating the LVs, we are ready to define the LUNs in the Target in order to make them available for the client machine.

As shown in Fig. 2, we will open a targetcli shell and issue the following commands, which will create two block backstores (local storage resources that represent the LUNs the initiator will actually use) and an iSCSI Qualified Name (IQN), a method of addressing the target server.

Please refer to Page 32 of RFC 3720 for more details on the structure of the IQN. In particular, the text after the colon character (:tgt1) specifies the name of the target, while the text before (server:) indicates the hostname of the target inside the domain.

# targetcli
# cd backstores
# cd block
# create server.backups /dev/vg00/vol_backups
# create server.projects /dev/vg00/vol_projects
# cd /iscsi
# create iqn.2016-02.com.tecmint.server:tgt1

Fig 2: Define LUNs in Target Server

With the above step, a new TPG (Target Portal Group) was created along with the default portal (a pair consisting of an IP address and a port which is the way initiators can reach the target) listening on port 3260 of all IP addresses.

If you want to bind your portal to a specific IP (the Target’s main IP, for example), delete the default portal and create a new one as follows (otherwise, skip the following targetcli commands. Note that for simplicity we have skipped them as well):

# cd /iscsi/iqn.2016-02.com.tecmint.server:tgt1/tpg1/portals
# delete 0.0.0.0 3260
# create 192.168.0.29 3260

Now we are ready to proceed with the creation of LUNs. Note that we are using the backstores we previously created (server.backups and server.projects). This process is illustrated in Fig. 3:

# cd iqn.2016-02.com.tecmint.server:tgt1/tpg1/luns
# create /backstores/block/server.backups
# create /backstores/block/server.projects

Fig 3: Create LUNs in iSCSI Target Server

The last part in the Target configuration consists of creating an Access Control List to restrict access on a per-initiator basis. Since our client machine is named “client”, we will append that text to the IQN. Refer to Fig. 4 for details:

# cd ../acls
# create iqn.2016-02.com.tecmint.server:client

Fig 4: Create Access Control List for Initiator

At this point we can use the targetcli shell to show all configured resources, as we can see in Fig. 5:

# targetcli
# cd /
# ls

Fig 5: Use targetcli to Check Configured Resources

To quit the targetcli shell, simply type exit and press Enter. The configuration will be saved automatically to /etc/target/saveconfig.json.

As you can see in Fig. 5 above, we have a portal listening on port 3260 of all IP addresses as expected. We can verify that using netstat command (see Fig. 6):

# netstat -npltu | grep 3260

Fig 6: Verify iSCSI Target Server Port Listening

This concludes the Target configuration. Feel free to restart the system and verify that all settings survive a reboot. If not, make sure to open the necessary ports in the firewall configuration and to start the target service on boot. We are now ready to set up the Initiator and to connect to the client.

Step 3: Setting up the Client Initiator

On the client we will need to install the iscsi-initiator-utils package, which provides the daemon for the iSCSI protocol (iscsid) as well as iscsiadm, the administration utility:

# yum update && yum install iscsi-initiator-utils

Once the installation completes, open /etc/iscsi/initiatorname.iscsi and replace the default initiator name (commented in Fig. 7) with the name that was previously set in the ACL on the server (iqn.2016-02.com.tecmint.server:client).
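
After the edit, the file should contain a single line with the new initiator name, like this:

InitiatorName=iqn.2016-02.com.tecmint.server:client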

Then save the file and run iscsiadm in discovery mode pointing to the target. If successful, this command will return the target information as shown in Fig. 7:

# iscsiadm -m discovery -t st -p 192.168.0.29

Fig 7: Setting Up Client Initiator

The next step consists of restarting and enabling the iscsid service:

# systemctl start iscsid
# systemctl enable iscsid

and contacting the target in node mode. This should result in kernel-level messages which, when captured through dmesg, show the device identification that the remote LUNs have been given in the local system (sde and sdf in Fig. 8):

# iscsiadm -m node -T iqn.2016-02.com.tecmint.server:tgt1 -p 192.168.0.29 -l
# dmesg | tail

Fig 8: Connecting to iSCSI Target Server in Node Mode

From this point on, you can create partitions, or even LVs (and filesystems on top of them) as you would do with any other storage device. For simplicity, we will create a primary partition on each disk that will occupy its entire available space, and format it with ext4.
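
For example, a sketch using parted in script mode followed by mkfs (fdisk works just as well; the device names may differ on your system):

# parted -s /dev/sde mklabel msdos mkpart primary ext4 1MiB 100%
# parted -s /dev/sdf mklabel msdos mkpart primary ext4 1MiB 100%
# mkfs.ext4 /dev/sde1
# mkfs.ext4 /dev/sdf1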

Finally, let’s mount /dev/sde1 and /dev/sdf1 on /projects and /backups, respectively (note that these directories must be created first):

# mount /dev/sde1 /projects
# mount /dev/sdf1 /backups

Additionally, you can add two entries in /etc/fstab in order for both filesystems to be mounted automatically at boot using each filesystem’s UUID as returned by blkid.

Note that the _netdev mount option must be used in order to defer the mounting of these filesystems until the network service has been started:
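A sketch of what those two entries could look like (the UUID placeholders must be replaced with the actual values reported by blkid):

UUID=<uuid-of-sde1>    /projects    ext4    defaults,_netdev    0 0
UUID=<uuid-of-sdf1>    /backups     ext4    defaults,_netdev    0 0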

Fig 9: Find Filesystem UUID

You can now use these devices as you would with any other storage media.

Summary

In this article we have covered how to set up and configure an iSCSI Target and an Initiator in RHEL/CentOS 7 distributions. Although the first task is not part of the required competencies of the EX300 (RHCE) exam, it is needed in order to implement the second topic.

Don’t hesitate to let us know if you have any questions or comments about this article – feel free to drop us a line using the comment form below.

If you’re looking to set up an iSCSI target and client initiator on RHEL/CentOS 6, follow this guide: Setting Up Centralized iSCSI Storage with Client Initiator.

Setting Up “NTP (Network Time Protocol) Server” in RHEL/CentOS 7

Network Time Protocol (NTP) is a protocol which runs over UDP port 123 at the transport layer and allows computers to synchronize time over networks. As time passes, computers’ internal clocks tend to drift, which can lead to inconsistent time issues, especially in server and client log files, or when you want to replicate server resources or databases.

NTP Server Installation in CentOS and RHEL 7

Requirements:

  1. CentOS 7 Installation Procedure
  2. RHEL 7 Installation Procedure

Additional Requirements:

  1. Register and Enable RHEL 7 Subscription for Updates
  2. Configure Static IP Address on CentOS/RHEL 7
  3. Disable and Remove Unwanted Services in CentOS/RHEL 7

This tutorial will demonstrate how you can install and configure an NTP server on CentOS/RHEL 7 and automatically synchronize time with the geographically closest peers available for your server location, using the NTP Public Pool Time Servers list.

Step 1: Install and configure NTP daemon

1. The NTP server package is provided by default from the official CentOS/RHEL 7 repositories and can be installed by issuing the following command.

# yum install ntp

Install NTP in CentOS

Install NTP Server

2. After the server is installed, first go to the official NTP Public Pool Time Servers site, choose the continent where the server is physically located, then search for your country, and a list of NTP servers should appear.

NTP Pool Server

NTP Pool Server

3. Then open the NTP daemon's main configuration file (/etc/ntp.conf) for editing, comment out the default list of public servers from the pool.ntp.org project, and replace it with the list provided for your country, as in the screenshot below.

Configure NTP Server in CentOS

Configure NTP Server
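For example, assuming a server located in Romania (the same pool used in the ntpdate example later in this article), the edited server section might look similar to this:

server 0.ro.pool.ntp.org iburst
server 1.ro.pool.ntp.org iburst
server 2.ro.pool.ntp.org iburst
server 3.ro.pool.ntp.org iburst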

4. Further, you need to allow clients from your networks to synchronize time with this server. To accomplish this, add the following line to the NTP configuration file; the restrict statement controls which networks are allowed to query and sync time – replace the network IPs accordingly.

restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap

The nomodify and notrap statements mean that your clients are not allowed to configure the server or be used as peers for time sync.

5. If you need additional information for troubleshooting, in case there are problems with your NTP daemon, add a logfile statement, which will record all NTP server issues into a dedicated log file.

logfile /var/log/ntp.log

Enable NTP Logs in CentOS

Enable NTP Logs

6. After you have edited the file with all the configuration explained above, save and close the ntp.conf file. Your final configuration should look like the screenshot below.

NTP Server Configuration in CentOS

NTP Server Configuration

Step 2: Add Firewall Rules and Start NTP Daemon

7. The NTP service uses UDP port 123 at the OSI Transport Layer (layer 4). It is designed particularly to resist the effects of variable latency (jitter). To open this port on RHEL/CentOS 7, run the following commands against the Firewalld service.

# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload

Open NTP Port in Firewall

Open NTP Port in Firewall

8. After you have opened firewall port 123, start the NTP server and make sure it is enabled system-wide. Use the following commands to manage the service.

# systemctl start ntpd
# systemctl enable ntpd
# systemctl status ntpd

Start NTP Service

Start NTP Service

Step 3: Verify Server Time Sync

9. After the NTP daemon has been started, wait a few minutes for the server to synchronize time with its pool of servers, then run the following commands to verify the NTP peer synchronization status and your system time.

# ntpq -p
# date -R

Verify NTP Server Time

Verify NTP Time Sync

10. If you want to query and synchronize against a pool of your choice, use the ntpdate command followed by the server address(es), as suggested in the following command line example.

# ntpdate -q  0.ro.pool.ntp.org  1.ro.pool.ntp.org

Synchronize NTP Time

Synchronize NTP Time

Step 4: Setup Windows NTP Client

11. If your Windows machine is not part of a Domain Controller, you can configure Windows to synchronize time with your NTP server by going to Time on the right side of the Taskbar -> Change Date and Time Settings -> Internet Time tab -> Change Settings -> check Synchronize with an Internet time server -> put your server's IP or FQDN in the Server field -> Update now -> OK.

Synchronize Windows Time with NTP

Synchronize Windows Time with NTP

That’s all! Setting up a local NTP server on your network ensures that all your servers and clients keep the same time in case of an Internet connectivity failure, and that they all stay synchronized with each other.


RHCSA (Red Hat Certified System Administrator)

RHCSA Series: Reviewing Essential Commands & System Documentation – Part 1

RHCSA (Red Hat Certified System Administrator) is a certification exam from Red Hat, a company which provides an open source operating system and software to the enterprise community. It also provides support, training, and consulting services for organizations.

RHCSA Exam Guide

RHCSA Exam Preparation Guide

The RHCSA exam is the certification obtained from Red Hat Inc. after passing the exam (codename EX200). The RHCSA exam is an upgrade of the RHCT (Red Hat Certified Technician) exam, and this upgrade was made compulsory as Red Hat Enterprise Linux was upgraded. The main difference between RHCT and RHCSA is that the RHCT exam is based on RHEL 5, whereas the RHCSA certification is based on RHEL 6 and 7; the courseware of these two certifications also varies to a certain degree.

This Red Hat Certified System Administrator (RHCSA) is essential to perform the following core system administration tasks needed in Red Hat Enterprise Linux environments:

  1. Understand and use necessary tools for handling files, directories, command-line environments, and system-wide / package documentation.
  2. Operate running systems, even in different run levels, identify and control processes, start and stop virtual machines.
  3. Set up local storage using partitions and logical volumes.
  4. Create and configure local and network file systems and their attributes (permissions, encryption, and ACLs).
  5. Setup, configure, and control systems, including installing, updating and removing software.
  6. Manage system users and groups, along with use of a centralized LDAP directory for authentication.
  7. Ensure system security, including basic firewall and SELinux configuration.

To view fees and register for an exam in your country, check the RHCSA Certification page.

In this 15-article RHCSA series, titled Preparation for the RHCSA (Red Hat Certified System Administrator) exam, we will cover the following topics on the latest release of Red Hat Enterprise Linux 7.

Part 1: Reviewing Essential Commands & System Documentation

In this Part 1 of the RHCSA series, we will explain how to enter and execute commands with the correct syntax in a shell prompt or terminal, and how to find, inspect, and use system documentation.

RHCSA: Reviewing Essential Linux Commands – Part 1

RHCSA: Reviewing Essential Linux Commands – Part 1

Prerequisites:

At least a slight degree of familiarity with basic Linux commands such as:

  1. cd command (change directory)
  2. ls command (list directory)
  3. cp command (copy files)
  4. mv command (move or rename files)
  5. touch command (create empty files or update the timestamp of existing ones)
  6. rm command (delete files)
  7. mkdir command (make directory)

The correct usage of some of these commands is exemplified in this article anyway, and you can find further information about each of them using the methods suggested in this article.

Though not strictly required to start, as we will be discussing general commands and methods for information search in a Linux system, you should try to install RHEL 7 as explained in the following article. It will make things easier down the road.

  1. Red Hat Enterprise Linux (RHEL) 7 Installation Guide

Interacting with the Linux Shell

If we log into a Linux box using a text-mode login screen, chances are we will be dropped directly into our default shell. On the other hand, if we login using a graphical user interface (GUI), we will have to open a shell manually by starting a terminal. Either way, we will be presented with the user prompt and we can start typing and executing commands (a command is executed by pressing the Enter key after we have typed it).

Commands are composed of two parts:

  1. the name of the command itself, and
  2. arguments

Certain arguments, called options (usually preceded by a hyphen), alter the behavior of the command in a particular way while other arguments specify the objects upon which the command operates.

The type command can help us identify whether a given command is built into the shell or provided by a separate package. The need to make this distinction lies in where we will find more information about the command: for shell built-ins we need to look in the shell’s man page, whereas for other binaries we can refer to their own man pages.

Check Shell built in Commands

Check Shell built in Commands
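You can reproduce the checks shown in the figure with commands such as the following – feel free to try them with any command you are curious about:

# type cd
# type top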

In the examples above, cd and type are shell built-ins, while top and less are binaries external to the shell itself (in this case, the location of the command executable is returned by type).

Other well-known shell built-ins include:

  1. echo command: Displays strings of text.
  2. pwd command: Prints the current working directory.

More Built in Shell Commands

More Built in Shell Commands

exec command

Runs an external program that we specify. Note that in most cases this is better accomplished by just typing the name of the program we want to run, but the exec command has one special feature: rather than creating a new process that runs alongside the shell, the new process replaces the shell, as can be verified with a subsequent:

# ps -ef | grep [original PID of the shell process]

When the new process terminates, the shell terminates with it. Run exec top and then hit the q key to quit top. You will notice that the shell session ends when you do, as shown in the following screencast:

export command

Exports variables to the environment of subsequently executed commands.

history Command

Displays the command history list with line numbers. A command in the history list can be repeated by typing the command number preceded by an exclamation sign. If we need to edit a command in history list before executing it, we can press Ctrl + r and start typing the first letters associated with the command. When we see the command completed automatically, we can edit it as per our current need:

This list of commands is kept in our home directory in a file called .bash_history. The history facility is a useful resource for reducing the amount of typing, especially when combined with command line editing. By default, bash stores the last 500 commands you have entered, but this limit can be extended by using the HISTSIZE environment variable:

Linux history Command

Linux history Command
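The change shown in the figure amounts to something like the following, which only affects the current session (the value 1000 is just an example):

# export HISTSIZE=1000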

However, this change as performed above will not persist beyond our current shell session. In order to preserve the change in the HISTSIZE variable, we need to edit the .bashrc file by hand:

# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000

Important: Keep in mind that these changes will not take effect until we restart our shell session.

alias command

With no arguments, or with the -p option, alias prints the list of aliases in the form alias name=value on standard output. When arguments are provided, an alias is defined for each name whose value is given.

With alias, we can make up our own commands or modify existing ones by including desired options. For example, suppose we want to alias ls to ls --color=auto so that the output will display regular files, directories, symlinks, and so on, in different colors:

# alias ls='ls --color=auto'

Linux alias Command

Linux alias Command

Note that you can assign any name to your “new command” and enclose as many commands as desired between single quotes, but in that case you need to separate them by semicolons, as follows:

# alias myNewCommand='cd /usr/bin; ls; cd; clear'

exit command

The exit and logout commands both terminate the shell. The exit command terminates any shell, but the logout command terminates only login shells (that is, those that are launched automatically when you initiate a text-mode login).

If we are ever in doubt as to what a program does, we can refer to its man page, which can be invoked using the man command. In addition, there are also man pages for important files (inittab, fstab, hosts, to name a few), library functions, shells, devices, and other features.

Examples:
  1. man uname (print system information, such as kernel name, processor, operating system type, architecture, and so on).
  2. man inittab (init daemon configuration).

Another important source of information is provided by the info command, which is used to read info documents. These documents often provide more information than the man page. It is invoked by using the info keyword followed by a command name, such as:

# info ls
# info cut

In addition, the /usr/share/doc directory contains several subdirectories where further documentation can be found. They either contain plain-text files or other friendly formats.

Make sure you make it a habit to use these three methods to look up information for commands. Pay special and careful attention to the syntax of each of them, which is explained in detail in the documentation.

Converting Tabs into Spaces with expand Command

Sometimes text files contain tabs but programs that need to process the files don’t cope well with tabs. Or maybe we just want to convert tabs into spaces. That’s where the expand tool (provided by the GNU coreutils package) comes in handy.

For example, given the file NumbersList.txt, let’s run expand against it, changing tabs to one space, and display on standard output.

# expand --tabs=1 NumbersList.txt

Linux expand Command

Linux expand Command

The unexpand command performs the reverse operation (converts spaces into tabs).

Display the first lines of a file with head and the last lines with tail

By default, the head command followed by a filename will display the first 10 lines of that file, while tail displays the last 10. This behavior can be changed using the -n option and specifying a certain number of lines.

# head -n3 /etc/passwd
# tail -n3 /etc/passwd

Linux head and tail Command

Linux head and tail Command

One of the most interesting features of tail is the possibility of displaying data (last lines) as the input file grows (tail -f my.log, where my.log is the file under observation). This is particularly useful when monitoring a log to which data is being continually added.

Read More: Manage Files Effectively using head and tail Commands

Merging Lines with paste

The paste command merges files line by line, separating the lines from each file with tabs (by default), or another delimiter that can be specified (in the following example the fields in the output are separated by an equal sign).

# paste -d= file1 file2

Merge Files in Linux

Merge Files in Linux

Breaking a file into pieces using split command

The split command is used to split a file into two (or more) separate files, which are named according to a prefix of our choosing. The splitting can be defined by size, chunks, or number of lines, and the resulting files can have numeric or alphabetic suffixes. In the following example, we will split bash.pdf into files of size 50 KB (-b 50KB), using numeric suffixes (-d):

# split -b 50KB -d bash.pdf bash_

Split Files in Linux

Split Files in Linux

You can merge the files to recreate the original file with the following command:

# cat bash_00 bash_01 bash_02 bash_03 bash_04 bash_05 > bash.pdf

Translating characters with tr command

The tr command can be used to translate (change) characters on a one-by-one basis or using character ranges. In the following example we will use the same file2 as previously, and we will change:

  1. lowercase o’s to uppercase,
  2. and all lowercase to uppercase

# cat file2 | tr o O
# cat file2 | tr [a-z] [A-Z]

Linux tr Command Examples

Translate Characters in Linux

Reporting or deleting duplicate lines with uniq and sort command

The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files).

By default, sort takes the first field (separated by spaces) as the key field. To specify a different key field, we need to use the -k option. Please note how the output returned by sort and uniq changes as we change the key field in the following example:

# cat file3
# sort file3 | uniq
# sort -k2 file3 | uniq
# sort -k3 file3 | uniq

Remove Duplicate Lines in Linux

Remove Duplicate Lines in Linux

Extracting text with cut command

The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b), characters (-c), or fields (-f).

When using cut based on fields, the default field separator is a tab, but a different separator can be specified by using the -d option.

# cut -d: -f1,3 /etc/passwd # Extract specific fields: 1 and 3 in this case
# cut -d: -f2-4 /etc/passwd # Extract range of fields: 2 through 4 in this example

Extract Text From a File in Linux

Extract Text From a File in Linux

Note that the output of the two examples above was truncated for brevity.

Reformatting files with fmt command

fmt is used to “clean up” files with a great amount of content or lines, or with varying degrees of indentation. The new paragraph formatting defaults to no more than 75 characters wide. You can change this with the -w (width) option, which sets the line length to the specified number of characters.

For example, let’s see what happens when we use fmt to display the /etc/passwd file setting the width of each line to 100 characters. Once again, output has been truncated for brevity.

# fmt -w100 /etc/passwd

Linux fmt Command Examples

File Reformatting in Linux

Formatting content for printing with pr command

pr paginates and displays in columns one or more files for printing. In other words, pr formats a file to make it look better when printed. For example, the following command:

# ls -a /etc | pr -n --columns=3 -h "Files in /etc"

Shows a listing of all the files found in /etc in a printer-friendly format (3 columns) with a custom header (indicated by the -h option), and numbered lines (-n).

Linux pr Command Examples

File Formatting in Linux

Summary

In this article we have discussed how to enter and execute commands with the correct syntax in a shell prompt or terminal, and explained how to find, inspect, and use system documentation. As simple as it seems, it’s a big first step on your way to becoming an RHCSA.

If you would like to add other commands that you use on a periodic basis and that have proven useful to fulfill your daily responsibilities, feel free to share them with the world by using the comment form below. Questions are also welcome. We look forward to hearing from you!

RHCSA Series: How to Perform File and Directory Management – Part 2

In this article, RHCSA Part 2: File and directory management, we will review some essential skills that are required in the day-to-day tasks of a system administrator.

RHCSA: Perform File and Directory Management – Part 2

RHCSA: Perform File and Directory Management – Part 2

Create, Delete, Copy, and Move Files and Directories

File and directory management is a critical competence that every system administrator should possess. This includes the ability to create / delete text files from scratch (the core of each program’s configuration) and directories (where you will organize files and other directories), and to find out the type of existing files.

The touch command can be used not only to create empty files, but also to update the access and modification times of existing files.

touch command example

touch command example

You can use file [filename] to determine a file’s type (this will come in handy before launching your preferred text editor to edit it).

file command example

file command example

and rm [filename] to delete it.

Linux rm command examples

rm command example

As for directories, you can create directories inside existing paths with mkdir [directory] or create a full path with mkdir -p [/full/path/to/directory].

mkdir command example

mkdir command example

When it comes to removing directories, you need to make sure that they’re empty before issuing the rmdir [directory] command, or use the more powerful (handle with care!) rm -rf [directory]. This last option will recursively force-remove [directory] and all its contents – so use it at your own risk.

Input and Output Redirection and Pipelining

The command line environment provides two very useful features that allow you to redirect the input and output of commands from and to files, and to send the output of one command to another, called redirection and pipelining, respectively.

To understand those two important concepts, we must first understand the three most important types of I/O (Input and Output) streams (or sequences) of characters, which are in fact special files, in the *nix sense of the word.

  1. Standard input (aka stdin) is by default attached to the keyboard. In other words, the keyboard is the standard input device to enter commands to the command line.
  2. Standard output (aka stdout) is by default attached to the screen, the device that “receives” the output of commands and displays it.
  3. Standard error (aka stderr) is where the status messages of a command are sent by default, which is also the screen.

In the following example, the output of ls /var is sent to stdout (the screen), as well as the result of ls /tecmint. But in the latter case, it is stderr that is shown.

Linux input output redirect

Input and Output Example

To more easily identify these special files, they are each assigned a file descriptor, an abstract representation that is used to access them. The essential thing to understand is that these files, just like others, can be redirected. What this means is that you can capture the output from a file or script and send it as input to another file, command, or script. This will allow you to store on disk, for example, the output of commands for later processing or analysis.

To redirect stdin (fd 0), stdout (fd 1), or stderr (fd 2), the following operators are available.

Redirection Operator Effect
> Redirects standard output to a file. If the destination file exists, it will be overwritten.
>> Appends standard output to a file.
2> Redirects standard error to a file. If the destination file exists, it will be overwritten.
2>> Appends standard error to the existing file.
&> Redirects both standard output and standard error to a file; if the specified file exists, it will be overwritten.
< Uses the specified file as standard input.
<> The specified file is used for both standard input and standard output.

As opposed to redirection, pipelining is performed by adding a vertical bar (|) after a command and before another one.

Remember:

  1. Redirection is used to send the output of a command to a file, or to send a file as input to a command.
  2. Pipelining is used to send the output of a command to another command as input.

Examples Of Redirection and Pipelining

Example 1: Redirecting the output of a command to a file

There will be times when you will need to iterate over a list of files. To do that, you can first save that list to a file and then read that file line by line. While it is true that you can iterate over the output of ls directly, this example serves to illustrate redirection.

# ls -1 /var/mail > mail.txt

Redirect output of command to a file

Redirect output of command to a file

Example 2: Redirecting both stdout and stderr to /dev/null

If we want to prevent both stdout and stderr from being displayed on the screen, we can redirect both file descriptors to /dev/null. Note how the output changes when the redirection is implemented for the same command.

# ls /var /tecmint
# ls /var/ /tecmint &> /dev/null

Redirecting stdout and stderr output to /dev/null

Redirecting stdout and stderr output to /dev/null

Example 3: Using a file as input to a command

The classic syntax of the cat command is as follows.

# cat [file(s)]

You can also send a file as input, using the correct redirection operator.

# cat < mail.txt

Linux cat command examples

cat command example

Example 4: Sending the output of a command as input to another

If you have a large directory or process listing and want to be able to locate a certain file or process at a glance, you will want to pipeline the listing to grep.

Note that we use two pipelines in the following example. The first one looks for the required keyword, while the second one eliminates the actual grep command from the results. This example lists all the processes associated with the apache user.

# ps -ef | grep apache | grep -v grep

Send output of command as input to another

Send output of command as input to another

Archiving, Compressing, Unpacking, and Uncompressing Files

If you need to transport, back up, or send via email a group of files, you will use an archiving (or grouping) tool such as tar, typically used with a compression utility like gzip, bzip2, or xz.

Your choice of compression tool will likely be defined by the compression speed and rate of each one. Of these three compression tools, gzip is the oldest and provides the least compression, bzip2 provides improved compression, and xz is the newest and provides the best compression. Typically, files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively.

Command Abbreviation Description
--create c Creates a tar archive
--concatenate A Appends tar files to an archive
--append r Appends non-tar files to an archive
--update u Appends files that are newer than those in an archive
--diff or --compare d Compares an archive to files on disk
--list t Lists the contents of a tarball
--extract or --get x Extracts files from an archive
Operation modifier Abbreviation Description
--directory dir C Changes to directory dir before performing operations
--same-permissions and --same-owner p Preserves permissions and ownership information, respectively.
--verbose v Lists all files as they are read or extracted; if combined with --list, it also displays file sizes, ownership, and timestamps
--exclude file Excludes file from the archive. In this case, file can be an actual file or a pattern.
--gzip or --gunzip z Compresses an archive through gzip
--bzip2 j Compresses an archive through bzip2
--xz J Compresses an archive through xz

Example 5: Creating a tarball and then compressing it using the three compression utilities

You may want to compare the effectiveness of each tool before deciding to use one or another. Note that when compressing small files, or only a few files, the results may not show much difference, but they may give you a glimpse of what these tools have to offer.

# tar cf ApacheLogs-$(date +%Y%m%d).tar /var/log/httpd/*        # Create an ordinary tarball
# tar czf ApacheLogs-$(date +%Y%m%d).tar.gz /var/log/httpd/*    # Create a tarball and compress with gzip
# tar cjf ApacheLogs-$(date +%Y%m%d).tar.bz2 /var/log/httpd/*   # Create a tarball and compress with bzip2
# tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/*    # Create a tarball and compress with xz

Linux tar command examples

tar command examples

Example 6: Preserving original permissions and ownership while archiving and extracting

If you are creating backups from users’ home directories, you will want to store the individual files with the original permissions and ownership instead of changing them to that of the user account or daemon performing the backup. The following example preserves these attributes while taking the backup of the contents in the /var/log/httpd directory:

# tar cJf ApacheLogs-$(date +%Y%m%d).tar.xz /var/log/httpd/* --same-permissions --same-owner

Create Hard and Soft Links

In Linux, there are two types of links to files: hard links and soft (aka symbolic) links. Since a hard link represents another name for an existing file, identified by the same inode, it points to the actual data, as opposed to symbolic links, which point to filenames instead.

In addition, hard links do not occupy space on disk, while symbolic links do take a small amount of space to store the text of the link itself. The downside of hard links is that they can only be used to reference files within the filesystem where they are located because inodes are unique inside a filesystem. Symbolic links save the day, in that they point to another file or directory by name rather than by inode, and therefore can cross filesystem boundaries.

The basic syntax to create links is similar in both cases:

# ln TARGET LINK_NAME               # Hard link named LINK_NAME to file named TARGET
# ln -s TARGET LINK_NAME            # Soft link named LINK_NAME to file named TARGET

Example 7: Creating hard and soft links

There is no better way to visualize the relation between a file and a hard or symbolic link that points to it than to create those links. In the following screenshot you will see that the file and the hard link that points to it share the same inode, and both are identified by the same disk usage of 466 bytes.

On the other hand, creating a soft link results in an extra disk usage of 5 bytes. Not that you’re going to run out of storage capacity, but this example is enough to illustrate the difference between a hard link and a soft link.

Difference between a hard link and a soft link

Difference between a hard link and a soft link

A typical usage of symbolic links is to reference a versioned file in a Linux system. Suppose there are several programs that need access to file fooX.Y, which is subject to frequent version updates (think of a library, for example). Instead of updating every single reference to fooX.Y every time there’s a version update, it is wiser, safer, and faster, to have programs look to a symbolic link named just foo, which in turn points to the actual fooX.Y.

Thus, when X and Y change, you only need to edit the symbolic link foo with a new destination name instead of tracking every usage of the destination file and updating it.
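As a quick sketch with made-up names (foo2.1 and foo2.2 are hypothetical versions of the library):

# ln -s foo2.1 foo        # Programs reference the generic name "foo"
# ln -sfn foo2.2 foo      # After an upgrade, repoint the link to the new version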

Summary

In this article we have reviewed some essential file and directory management skills that must be a part of every system administrator’s tool-set. Make sure to review other parts of this series as well in order to integrate these topics with the content covered in this tutorial.

Feel free to let us know if you have any questions or comments. We are always more than glad to hear from our readers.

RHCSA Series: How to Manage Users and Groups in RHEL 7 – Part 3

Managing a RHEL 7 server, as it is the case with any other Linux server, will require that you know how to add, edit, suspend, or delete user accounts, and grant users the necessary permissions to files, directories, and other system resources to perform their assigned tasks.

User and Group Management in Linux

RHCSA: User and Group Management – Part 3

Managing User Accounts

To add a new user account to a RHEL 7 server, you can run either of the following two commands as root:

# adduser [new_account]
# useradd [new_account]

When a new user account is added, by default the following operations are performed.

  1. His/her home directory is created (/home/username unless specified otherwise).
  2. The .bash_logout, .bash_profile, and .bashrc hidden files are copied inside the user’s home directory, and will be used to provide environment variables for his/her user session. You can explore each of them for further details.
  3. A mail spool directory is created for the added user account.
  4. A group is created with the same name as the new user account.

The full account summary is stored in the /etc/passwd file. This file holds a record per system user account and has the following format (fields are separated by a colon):

[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
  1. The [username] and [Comment] fields are self explanatory.
  2. The ‘x’ in the second field indicates that the account is secured by a shadowed password (stored in /etc/shadow), which is required to log on as [username].
  3. The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively.

Finally,

  1. The [Home directory] shows the absolute location of [username]’s home directory, and
  2. [Default shell] is the shell that is assigned to this user when he/she logs into the system.

Another important file that you must become familiar with is /etc/group, where group information is stored. As it is the case with /etc/passwd, there is one record per line and its fields are also delimited by a colon:

[Group name]:[Group password]:[GID]:[Group members]

where,

  1. [Group name] is the name of the group.
  2. [Group password]: an “x” here means that the group does not use a password.
  3. [GID]: same as in /etc/passwd.
  4. [Group members]: a comma-separated list of the users who are members of the group.

After adding an account, you can at any time edit the user’s account information using usermod, whose basic syntax is:

# usermod [options] [username]

Read Also:
15 ‘useradd’ Command Examples
15 ‘usermod’ Command Examples

EXAMPLE 1: Setting the expiry date for an account

If you work for a company that has some kind of policy to enable accounts for a certain interval of time, or if you want to grant access for a limited period of time, you can use the --expiredate flag followed by a date in YYYY-MM-DD format. To verify that the change has been applied, you can compare the output of

# chage -l [username]

before and after updating the account expiry date, as shown in the following image.

Change User Account Information

Change User Account Information
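A hypothetical sequence could look as follows (the date and the tecmint account are examples only):

# chage -l tecmint                          # Check the current expiry date
# usermod --expiredate 2021-12-31 tecmint   # Set a new expiry date
# chage -l tecmint                          # Verify the change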

EXAMPLE 2: Adding the user to supplementary groups

Besides the primary group that is created when a new user account is added to the system, a user can be added to supplementary groups using the combined -aG, or --append --groups options, followed by a comma separated list of groups.

EXAMPLE 3: Changing the default location of the user’s home directory and / or changing its shell

If for some reason you need to change the default location of the user’s home directory (other than /home/username), you will need to use the -d, or --home option, followed by the absolute path to the new home directory.

If a user wants to use another shell other than bash (for example, sh), which gets assigned by default, use usermod with the --shell flag, followed by the path to the new shell.

EXAMPLE 4: Displaying the groups a user is a member of

After adding the user to a supplementary group, you can verify that it now actually belongs to such group(s):

# groups [username]
# id [username]

The following image depicts Examples 2 through 4:

Adding User to Supplementary Group

Adding User to Supplementary Group

In the example above:

# usermod --append --groups gacanepa,users --home /tmp --shell /bin/sh tecmint

To remove a user from a group, omit the --append switch in the command above and list the groups you want the user to belong to following the --groups flag.

EXAMPLE 5: Disabling account by locking password

To disable an account, you will need to use either the -L (uppercase L) or the --lock option to lock a user’s password. This will prevent the user from being able to log on.

EXAMPLE 6: Unlocking password

When you need to re-enable the user so that he can log on to the server again, use the -U or the --unlock option to unlock a user’s password that was previously locked, as explained in Example 5 above.

# usermod --unlock tecmint

The following image illustrates Examples 5 and 6:

Lock Unlock User Account

Lock Unlock User Account

EXAMPLE 7: Deleting a group or a user account

To delete a group, you’ll want to use groupdel, whereas to delete a user account you will use userdel (add the -r switch if you also want to delete the contents of its home directory and mail spool):

# groupdel [group_name]        # Delete a group
# userdel -r [user_name]       # Remove user_name from the system, along with his/her home directory and mail spool

If there are files owned by group_name, they will not be deleted, but their group owner will be left set to the GID of the group that was deleted.

Listing, Setting and Changing Standard ugo/rwx Permissions

The well-known ls command is one of the best friends of any system administrator. When used with the -l flag, this tool allows you to view a directory’s contents in long (or detailed) format.

However, this command can also be applied to a single file. Either way, the first 10 characters in the output of ls -l represent each file’s attributes.

The first char of this 10-character sequence is used to indicate the file type:

  1. - (hyphen): a regular file
  2. d: a directory
  3. l: a symbolic link
  4. c: a character device (which treats data as a stream of bytes, i.e. a terminal)
  5. b: a block device (which handles data in blocks, i.e. storage devices)

The next nine characters of the file attributes, divided in groups of three from left to right, are called the file mode and indicate the read (r), write (w), and execute (x) permissions granted to the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”), respectively.

While the read permission on a file allows it to be opened and read, the same permission on a directory allows its contents to be listed if the execute permission is also set. In addition, the execute permission on a file allows it to be handled as a program and run.

File permissions are changed with the chmod command, whose basic syntax is as follows:

# chmod [new_mode] file

where new_mode is either an octal number or an expression that specifies the new permissions. Feel free to use whichever mode works best for you in each case.

The octal number can be calculated based on the binary equivalent, which can in turn be obtained from the desired file permissions for the owner of the file, the owner group, and the world. The presence of a certain permission equals a power of 2 (r=2^2=4, w=2^1=2, x=2^0=1), while its absence counts as 0. For example:

File Permissions

File Permissions
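Working through that calculation: rwxr--r-- corresponds to (4+2+1)(4+0+0)(4+0+0) = 744, which is the octal mode used in the next command.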

To set the file’s permissions as indicated above in octal form, type:

# chmod 744 myfile

Please take a minute to compare our previous calculation to the actual output of ls -l after changing the file’s permissions:

Long List Format

Long List Format

EXAMPLE 8: Searching for files with 777 permissions

As a security measure, you should make sure that files with 777 permissions (read, write, and execute for everyone) are avoided like the plague under normal circumstances. Although we will explain in a later tutorial how to more effectively locate all the files in your system with a certain permission set, you can -by now- combine ls with grep to obtain such information.

In the following example, we will look for files with 777 permissions in the /etc directory only. Note that we will use pipelining as explained in Part 2: File and Directory Management of this RHCSA series:

# ls -l /etc | grep rwxrwxrwx

Find All Files with 777 Permission

Find All Files with 777 Permission

EXAMPLE 9: Assigning a specific permission to all users

Shell scripts, along with some binaries that all users should have access to (not just their corresponding owner and group), should have the execute bit set accordingly (please note that we will discuss a special case later):

# chmod a+x script.sh

Note that we can also set a file’s mode using an expression that indicates the owner’s rights with the letter u, the group owner’s rights with the letter g, and the rest with o. All of these rights can be represented at the same time with the letter a. Permissions are granted (or revoked) with the + or - signs, respectively.

Set Execute Permission on File

Set Execute Permission on File
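For instance, the following sketch grants the owner read and write access, removes write from the group, and revokes all rights from the rest of the users on a hypothetical file named myfile:

# chmod u+rw,g-w,o-rwx myfile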

A long directory listing also shows the file’s owner and its group owner in the first and second columns, respectively. This feature serves as a first-level access control method to files in a system:

Check File Owner and Group

Check File Owner and Group

To change file ownership, you will use the chown command. Note that you can change the file and group ownership at the same time or separately:

# chown user:group file

Note that you can change the user or group, or the two attributes at the same time, as long as you don’t forget the colon, leaving user or group blank if you want to update the other attribute, for example:

# chown :group file              # Change group ownership only
# chown user: file               # Change user ownership only

EXAMPLE 10: Cloning permissions from one file to another

If you would like to “clone” ownership from one file to another, you can do so using the --reference flag, as follows:

# chown --reference=ref_file file

where the owner and group of ref_file will be assigned to file as well:

Clone File Ownership

Clone File Ownership

Setting Up SETGID Directories for Collaboration

Should you need to grant access to all the files owned by a certain group inside a specific directory, you will most likely use the approach of setting the setgid bit for such directory. When the setgid bit is set, the effective GID of the real user becomes that of the group owner.

Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory.

# chmod g+s [filename]

To set the setgid bit in octal form, prepend the number 2 to the current (or desired) basic permissions.

# chmod 2755 [directory]
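Putting it all together, a minimal sketch of a collaboration directory (the /common path and the developers group are hypothetical names) could be:

# mkdir /common
# chgrp developers /common
# chmod 2775 /common      # setgid bit plus rwxrwxr-x permissions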

Conclusion

A solid knowledge of user and group management, along with standard and special Linux permissions, when coupled with practice, will allow you to quickly identify and troubleshoot issues with file permissions in your RHEL 7 server.

I assure you that as you follow the steps outlined in this article and use the system documentation (as explained in Part 1: Reviewing Essential Commands & System Documentation of this series) you will master this essential competence of system administration.

Feel free to let us know if you have any questions or comments using the form below.

RHCSA Series: Editing Text Files with Nano and Vim / Analyzing text with grep and regexps – Part 4

Every system administrator has to deal with text files as part of his daily responsibilities. That includes editing existing files (most likely configuration files), or creating new ones. It has been said that if you want to start a holy war in the Linux world, you can ask sysadmins what their favorite text editor is and why. We are not going to do that in this article, but will present a few tips that will be helpful when using two of the most widely used text editors in RHEL 7: nano (due to its simplicity and ease of use, especially for new users), and vi/m (due to its many features that make it more than a simple editor). I am sure that you can find many more reasons to use one or the other, or perhaps some other editor such as emacs or pico. It’s entirely up to you.

Learn Nano and vi Editors

RHCSA: Editing Text Files with Nano and Vim – Part 4

Editing Files with Nano Editor

To launch nano, you can either just type nano at the command prompt, optionally followed by a filename (in this case, if the file exists, it will be opened in editing mode). If the file does not exist, or if we omit the filename, nano will also be opened in editing mode but will present a blank screen for us to start typing:

Nano Editor

Nano Editor

As you can see in the previous image, nano displays at the bottom of the screen several functions that are available via the indicated shortcuts (^, aka caret, indicates the Ctrl key). To name a few of them:

  1. Ctrl + G: brings up the help menu with a complete list of functions and descriptions:

Nano Editor Help Menu

Nano Editor Help Menu

  1. Ctrl + O: saves changes made to a file. It will let you save the file with the same name or a different one. Then press Enter to confirm.

Nano Editor Save Changes Mode

Nano Editor Save Changes Mode

  1. Ctrl + X: exits the current file. If changes have not been saved, they are discarded.
  2. Ctrl + R: lets you choose a file to insert its contents into the present file by specifying a full path.

Nano: Insert File Content to Parent File

Nano: Insert File Content to Parent File

will insert the contents of /etc/passwd into the current file.

  1. Ctrl + K: cuts the current line.
  2. Ctrl + U: pastes the contents of the cut buffer.
  3. Ctrl + C: cancels the current operation and places you at the previous screen.

To easily navigate the opened file, nano provides the following features:

  1. Ctrl + F and Ctrl + B move the cursor forward or backward, whereas Ctrl + P and Ctrl + N move it up or down one line at a time, respectively, just like the arrow keys.
  2. Ctrl + space and Alt + space move the cursor forward and backward one word at a time.

Finally,

  1. Ctrl + _ (underscore) and then entering X,Y will take you precisely to Line X, column Y, if you want to place the cursor at a specific place in the document.

Navigate to Line Numbers in Nano

Navigate to Line Numbers in Nano

The example above will take you to line 15, column 14 in the current document.

If you can recall your early Linux days, especially if you came from Windows, you will probably agree that starting off with nano is the best way to go for a new user.

Editing Files with Vim Editor

Vim is an improved version of vi, a famous text editor in Linux that is available on all POSIX-compliant *nix systems, such as RHEL 7. If you have the chance and can install vim, go ahead; if not, most (if not all) the tips given in this article should also work.

One of vim’s distinguishing features is the different modes in which it operates:

  1. Command mode will allow you to browse through the file and enter commands, which are brief and case-sensitive combinations of one or more letters. If you need to repeat one of them a certain number of times, you can prefix it with a number (there are only a few exceptions to this rule). For example, yy (or Y, short for yank) copies the entire current line, whereas 4yy (or 4Y) copies the entire current line along with the next three lines (4 lines in total).
  2. In ex mode, you can manipulate files (including saving a current file and running outside programs or commands). To enter ex mode, we must type a colon (:) starting from command mode (or in other words, Esc + :), directly followed by the name of the ex-mode command that you want to use.
  3. In insert mode, which is accessed by typing the letter i, we simply enter text. Most keystrokes result in text appearing on the screen.
  4. We can always enter command mode (regardless of the mode we’re working on) by pressing the Esc key.

Let’s see how we can perform the same operations that we outlined for nano in the previous section, but now with vim. Don’t forget to hit the Enter key to confirm the vim command!

To access vim’s full manual from the command line, type :help while in command mode and then press Enter:

vim Editor Help Menu

vim Editor Help Menu

The upper section presents an index list of contents, with defined sections dedicated to specific topics about vim. To navigate to a section, place the cursor over it and press Ctrl + ] (closing square bracket). Note that the bottom section displays the current file.

1. To save changes made to a file, run any of the following commands from command mode and it will do the trick:

:wq!
:x!
ZZ (yes, double Z without the colon at the beginning)

2. To exit discarding changes, use :q!. This command will also allow you to exit the help menu described above, and return to the current file in command mode.

3. Cut N number of lines: type Ndd while in command mode.

4. Copy M number of lines: type Myy while in command mode.

5. Paste lines that were previously cut or copied: press the P key while in command mode.

6. To insert the contents of another file into the current one:

:r filename

For example, to insert the contents of /etc/fstab, do:

Insert Content of File in vi Editor

Insert Content of File in vi Editor

7. To insert the output of a command into the current document:

:r! command

For example, to insert the date and time in the line below the current position of the cursor:

Insert Time an Date in vi Editor

Insert Time an Date in vi Editor

In another article that I wrote (Part 2 of the LFCS series), I explained in greater detail the keyboard shortcuts and functions available in vim. You may want to refer to that tutorial for further examples of how to use this powerful text editor.

Analyzing Text with Grep and Regular Expressions

By now you have learned how to create and edit files using nano or vim. Say you become a text editor ninja, so to speak – now what? Among other things, you will also need to know how to search text for regular expressions.

A regular expression (also known as “regex” or “regexp“) is a way of identifying a text string or pattern so that a program can compare the pattern against arbitrary text strings. Although the use of regular expressions along with grep would deserve an entire article on its own, let us review the basics here:

1. The simplest regular expression is an alphanumeric string (i.e., the word “svm”) or two (when two are present, you can use the | (OR) operator):

# grep -Ei 'svm|vmx' /proc/cpuinfo

The presence of either of those two strings indicates that your processor supports virtualization:

Regular Expression Example

Regular Expression Example

2. A second kind of a regular expression is a range list, enclosed between square brackets.

For example, c[aeiou]t matches the strings cat, cet, cit, cot, and cut, whereas [a-z] and [0-9] match any lowercase letter or decimal digit, respectively. If you want to repeat the regular expression X a certain number of times, type {X} immediately following the regexp.
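As a quick illustration of a range list (the word list here is made up for the example):

# printf "cat\ncet\ncut\ncrt\n" | grep 'c[aeiou]t'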

For example, let’s extract the UUIDs of storage devices from /etc/fstab:

# grep -Ei '[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}' -o /etc/fstab

Extract String from a File in Linux

Extract String from a File

The first expression in brackets, [0-9a-f], is used to denote lowercase hexadecimal characters, and {8} is a quantifier that indicates the number of times that the preceding match should be repeated (the first sequence of characters in a UUID is an 8-character long hexadecimal string).

The parentheses, the {4} quantifier, and the hyphen indicate that the next sequence is a 4-character long hexadecimal string followed by a hyphen, and the quantifier that follows ({3}) denotes that the expression should be repeated 3 times.

Finally, the last sequence of 12-character long hexadecimal string in the UUID is retrieved with [0-9a-f]{12}, and the -o option prints only the matched (non-empty) parts of the matching line in /etc/fstab.

3. POSIX character classes.

Character Class Matches…
 [[:alnum:]]  Any alphanumeric [a-zA-Z0-9] character
 [[:alpha:]]  Any alphabetic [a-zA-Z] character
 [[:blank:]]  Spaces or tabs
 [[:cntrl:]]  Any control characters (ASCII 0 to 32)
 [[:digit:]]  Any numeric digits [0-9]
 [[:graph:]]  Any visible characters
 [[:lower:]]  Any lowercase [a-z] character
 [[:print:]]  Any non-control characters
 [[:space:]]  Any whitespace
 [[:punct:]]  Any punctuation marks
 [[:upper:]]  Any uppercase [A-Z] character
 [[:xdigit:]]  Any hex digits [0-9a-fA-F]
 [:word:]  Any letters, numbers, and underscores [a-zA-Z0-9_]

For example, we may be interested in finding out what the used UIDs and GIDs (refer to Part 2 of this series to refresh your memory) are for real users that have been added to our system. Thus, we will search for sequences of 4 digits in /etc/passwd:

# grep -Ei "[[:digit:]]{4}" /etc/passwd

Search For a String in File

Search For a String in File

The above example may not be the best case of use of regular expressions in the real world, but it clearly illustrates how to use POSIX character classes to analyze text along with grep.

Conclusion

In this article we have provided some tips to make the most of nano and vim, two command-line text editors. Both tools are supported by extensive documentation, which you can consult on their respective official web sites (links given below) and using the suggestions given in Part 1 of this series.

Reference Links

http://www.nano-editor.org/
http://www.vim.org/

RHCSA Series: Process Management in RHEL 7: Boot, Shutdown, and Everything in Between – Part 5

We will start this article with an overall and brief review of what happens from the moment you press the Power button to turn on your RHEL 7 server until you are presented with the login screen in a command line interface.

RHEL 7 Boot Process

Linux Boot Process

Please note that:

1. the same basic principles apply, with perhaps minor modifications, to other Linux distributions as well, and
2. the following description is not intended to represent an exhaustive explanation of the boot process, but only the fundamentals.

Linux Boot Process

1. The POST (Power On Self Test) initializes and performs hardware checks.

2. When the POST finishes, the system control is passed to the first stage boot loader, which is stored on either the boot sector of one of the hard disks (for older systems using BIOS and MBR), or a dedicated (U)EFI partition.

3. The first stage boot loader then loads the second stage boot loader, usually GRUB (GRand Unified Boot Loader), which resides inside /boot and in turn loads the kernel and the initial RAM-based file system (also known as initramfs, which contains programs and binary files that perform the actions needed to ultimately mount the actual root filesystem).

4. We are presented with a splash screen that allows us to choose an operating system and kernel to boot:

RHEL 7 Boot Screen

Boot Menu Screen

5. The kernel sets up the hardware attached to the system and, once the root filesystem has been mounted, launches the process with PID 1, which in turn will initialize other processes and present us with a login prompt.

Note that if we wish to do so at a later time, we can examine the specifics of this process using the dmesg command and filtering its output with the tools that we have explained in previous articles of this series.

Login Screen and Process PID

Login Screen and Process PID

In the example above, we used the well-known ps command to display a list of current processes whose parent process (or in other words, the process that started them) is systemd (the system and service manager that most modern Linux distributions have switched to) during system startup:

# ps -o ppid,pid,uname,comm --ppid=1

Remember that the -o flag (short for --format) allows you to present the output of ps in a customized format to suit your needs, using the keywords specified in the STANDARD FORMAT SPECIFIERS section of man ps.

Another case in which you will want to define the output of ps instead of going with the default is when you need to find processes that are causing a significant CPU and / or memory load, and sort them accordingly:

# ps aux --sort=+pcpu              # Sort by %CPU (ascending)
# ps aux --sort=-pcpu              # Sort by %CPU (descending)
# ps aux --sort=+pmem              # Sort by %MEM (ascending)
# ps aux --sort=-pmem              # Sort by %MEM (descending)
# ps aux --sort=+pcpu,-pmem        # Combine sort by %CPU (ascending) and %MEM (descending)

Customize ps Command Output

Customize ps Command Output

An Introduction to SystemD

Few decisions in the Linux world have caused more controversy than the adoption of systemd by major Linux distributions. Systemd’s advocates name the following facts as its main advantages:

Read Also: The Story Behind ‘init’ and ‘systemd’

1. Systemd allows more processing to be done in parallel during system startup (as opposed to older SysVinit, which always tends to be slower because it starts processes one by one, checks if one depends on another, and then waits for daemons to launch so more services can start), and

2. It works as dynamic resource management in a running system. Thus, services are started when needed (to avoid consuming system resources if they are not being used) instead of being launched without a valid reason during boot.

3. Backwards compatibility with SysVinit scripts.

Systemd is controlled by the systemctl utility. If you come from a SysVinit background, chances are you will be familiar with:

  1. the service tool, which, in those older systems, was used to manage SysVinit scripts, and
  2. the chkconfig utility, which served the purpose of updating and querying runlevel information for system services.
  3. shutdown, which you must have used several times to either restart or halt a running system.

The following table shows the similarities between the use of these legacy tools and systemctl:

Legacy tool                  Systemctl equivalent           Description
service name start           systemctl start name           Start name (where name is a service)
service name stop            systemctl stop name            Stop name
service name condrestart     systemctl try-restart name     Restart name (if it’s already running)
service name restart         systemctl restart name         Restart name
service name reload          systemctl reload name          Reload the configuration for name
service name status          systemctl status name          Display the current status of name
service --status-all         systemctl                      Display the status of all current services
chkconfig name on            systemctl enable name          Enable name to run on startup as specified in the unit file (the file to which the symlink points). Enabling or disabling a service to start automatically on boot consists in adding or removing symbolic links inside the /etc/systemd/system directory.
chkconfig name off           systemctl disable name         Disable name from running on startup as specified in the unit file (the file to which the symlink points)
chkconfig --list name        systemctl is-enabled name      Verify whether name (a specific service) is currently enabled
chkconfig --list             systemctl --type=service       Display all services and tell whether they are enabled or disabled
shutdown -h now              systemctl poweroff             Power off the machine (halt)
shutdown -r now              systemctl reboot               Reboot the system

Systemd also introduced the concepts of units (which can be either a service, a mount point, a device, or a network socket) and targets (which is how systemd manages to start several related processes at the same time, and can be considered, though not equal, as the equivalent of runlevels in SysVinit-based systems).
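
As a quick illustration, you can inspect targets with systemctl itself (both commands are standard systemd usage):

# systemctl get-default                 # Display the default target (roughly analogous to the default runlevel)
# systemctl list-units --type=target    # List the currently active targets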

Summing Up

Other tasks related to process management include, but may not be limited to, the ability to:

1. Adjust the execution priority of a process as far as the use of system resources is concerned:

This is accomplished through the renice utility, which alters the scheduling priority of one or more running processes. In simple terms, the scheduling priority is a feature that allows the kernel (in versions >= 2.6) to allocate system resources according to the assigned execution priority (aka niceness, in a range from -20 through 19) of a given process.

The basic syntax of renice is as follows:

# renice [-n] priority [-g|-p|-u] identifier

In the generic command above, the first argument is the priority value to be used, whereas the other argument can be interpreted as process IDs (which is the default setting), process group IDs, user IDs, or user names. A normal user (other than root) can only modify the scheduling priority of a process he or she owns, and only increase the niceness level (which means taking up less system resources).
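
As an illustration, assuming a running process with PID 2345 (a hypothetical identifier), a regular user who owns it could lower its priority, while root could also raise it:

$ renice -n 10 -p 2345     # Increase niceness (less priority) - allowed for the process owner
# renice -n -5 -p 2345     # Decrease niceness (more priority) - requires root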

Renice Process in Linux

Process Scheduling Priority

2. Kill (or interrupt the normal execution of) a process as needed:

In more precise terms, killing a process entails sending it a signal to either finish its execution gracefully (SIGTERM=15) or immediately (SIGKILL=9) through the kill or pkill commands.

The difference between these two tools is that the former is used to terminate a specific process or a process group altogether, while the latter allows you to do the same based on name and other attributes.

In addition, pkill comes bundled with pgrep, which shows you the PIDs that will be affected should pkill be used. For example, before running:

# pkill -u gacanepa

It may be useful to view at a glance which are the PIDs owned by gacanepa:

# pgrep -l -u gacanepa

Find PIDs of User

By default, both kill and pkill send the SIGTERM signal to the process. As we mentioned above, this signal can be caught or ignored (the process may finish its execution gracefully, or not at all), so when you really need to stop a running process, you will need to specify the SIGKILL signal on the command line:

# kill -9 identifier               # Kill a process or a process group
# kill -s SIGNAL identifier        # Idem
# pkill --signal SIGNAL identifier # Kill a process by name or other attributes

Conclusion

In this article we have explained the basics of the boot process in a RHEL 7 system, and analyzed some of the tools that are available to help you with managing processes using common utilities and systemd-specific commands.

Note that this list is not intended to cover all the bells and whistles of this topic, so feel free to add your own preferred tools and commands to this article using the comment form below. Questions and other comments are also welcome.

RHCSA Series: Using ‘Parted’ and ‘SSM’ to Configure and Encrypt System Storage – Part 6

In this article we will discuss how to set up and configure local system storage in Red Hat Enterprise Linux 7 using classic tools, and introduce the System Storage Manager (also known as SSM), which greatly simplifies this task.

Configure and Encrypt System Storage

RHCSA: Configure and Encrypt System Storage – Part 6

Please note that we will present this topic in this article but will continue its description and usage in the next one (Part 7) due to the vastness of the subject.

Creating and Modifying Partitions in RHEL 7

In RHEL 7, parted is the default utility to work with partitions, and will allow you to:

  1. Display the current partition table
  2. Manipulate (increase or decrease the size of) existing partitions
  3. Create partitions using free space or additional physical storage devices

It is recommended that before attempting the creation of a new partition or the modification of an existing one, you ensure that none of the partitions on the device are in use (umount /dev/partition), and if you’re using part of the device as swap you need to disable it (swapoff -v /dev/partition) during the process.

The easiest way to do this is to boot RHEL in rescue mode using an installation media such as a RHEL 7 installation DVD or USB (Troubleshooting → Rescue a Red Hat Enterprise Linux system) and select Skip when you’re prompted to choose an option to mount the existing Linux installation. You will then be presented with a command prompt where you can start typing the same commands shown below during the creation of an ordinary partition on a physical device that is not being used.

RHEL 7 Rescue Mode

To start parted, simply type:

# parted /dev/sdb

Where /dev/sdb is the device where you will create the new partition; next, type print to display the current drive’s partition table:

Create New Partition

As you can see, in this example we are using a virtual drive of 5 GB. We will now proceed to create a 4 GB primary partition and then format it with the xfs filesystem, which is the default in RHEL 7.

You can choose from a variety of file systems. You will need to manually create the partition with mkpart and then format it with mkfs.fstype as usual because mkpart does not support many modern filesystems out-of-the-box.

In the following example we will set a label for the device and then create a primary partition (p) on /dev/sdb, which starts at 0% of the device and ends at 4000 MB (4 GB):

Set Partition Name in Linux

Label Partition Name
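
The same steps can also be performed non-interactively; a rough sketch, assuming an msdos disk label, would look like this:

# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 0% 4000MB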

Next, we will format the partition as xfs and print the partition table again to verify that changes were applied:

# mkfs.xfs /dev/sdb1
# parted /dev/sdb print

Format Partition in Linux

Format Partition as XFS Filesystem

For older filesystems, you could use the resize command in parted to resize a partition. Unfortunately, this only applies to ext2, fat16, fat32, hfs, linux-swap, and reiserfs (if libreiserfs is installed).

Thus, the only way to resize a partition is by deleting it and creating it again (so make sure you have a good backup of your data!). No wonder the default partitioning scheme in RHEL 7 is based on LVM.

To remove a partition with parted:

# parted /dev/sdb print
# parted /dev/sdb rm 1

Remove Partition in Linux

Remove or Delete Partition

The Logical Volume Manager (LVM)

Once a disk has been partitioned, it can be difficult or risky to change the partition sizes. For that reason, if we plan on resizing the partitions on our system, we should consider using LVM instead of the classic partitioning system. In LVM, several physical devices can form a volume group that will host a defined number of logical volumes, which can be expanded or reduced without any hassle.

In simple terms, you may find the following diagram useful to remember the basic architecture of LVM.

Basic Architecture of LVM

Creating Physical Volumes, Volume Group and Logical Volumes

Follow these steps in order to set up LVM using classic volume management tools. Since you can explore this topic further by reading the LVM series on this site, I will only outline the basic steps to set up LVM, and then compare them to implementing the same functionality with SSM.

Note that we will use the whole disks /dev/sdb and /dev/sdc as PVs (Physical Volumes), but it’s entirely up to you if you want to do the same.

1. Create partitions /dev/sdb1 and /dev/sdc1 using 100% of the available disk space in /dev/sdb and /dev/sdc:

# parted /dev/sdb print
# parted /dev/sdc print

Create New Partitions
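
If you prefer to script the partition creation instead of working interactively, a sketch along these lines (again assuming msdos disk labels) would produce the same result:

# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 0% 100%
# parted /dev/sdc mklabel msdos
# parted /dev/sdc mkpart primary 0% 100%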

2. Create 2 physical volumes on top of /dev/sdb1 and /dev/sdc1, respectively.

# pvcreate /dev/sdb1
# pvcreate /dev/sdc1

Create Two Physical Volumes

Remember that you can use pvdisplay /dev/sd{b,c}1 to show information about the newly created PVs.

3. Create a VG on top of the PVs that you created in the previous step:

# vgcreate tecmint_vg /dev/sd{b,c}1

Create Volume Group in Linux

Create Volume Group

Remember that you can use vgdisplay tecmint_vg to show information about the newly created VG.

4. Create three logical volumes on top of VG tecmint_vg, as follows:

# lvcreate -L 3G -n vol01_docs tecmint_vg		[vol01_docs → 3 GB]
# lvcreate -L 1G -n vol02_logs tecmint_vg		[vol02_logs → 1 GB]
# lvcreate -l 100%FREE -n vol03_homes tecmint_vg	[vol03_homes → 6 GB]	

Create Logical Volumes in LVM

Create Logical Volumes

Remember that you can use lvdisplay tecmint_vg to show information about the newly created LVs on top of VG tecmint_vg.

5. Format each of the logical volumes with xfs (do NOT use xfs if you’re planning on shrinking volumes later!):

# mkfs.xfs /dev/tecmint_vg/vol01_docs
# mkfs.xfs /dev/tecmint_vg/vol02_logs
# mkfs.xfs /dev/tecmint_vg/vol03_homes
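
Before mounting, make sure the mount points exist; a quick way to create them all at once is with brace expansion:

# mkdir -p /mnt/{docs,logs,homes}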

6. Finally, mount them:

# mount /dev/tecmint_vg/vol01_docs /mnt/docs
# mount /dev/tecmint_vg/vol02_logs /mnt/logs
# mount /dev/tecmint_vg/vol03_homes /mnt/homes

Removing Logical Volumes, Volume Group and Physical Volumes

7. Now we will reverse the LVM implementation and remove the LVs, the VG, and the PVs:

# lvremove /dev/tecmint_vg/vol01_docs
# lvremove /dev/tecmint_vg/vol02_logs
# lvremove /dev/tecmint_vg/vol03_homes
# vgremove /dev/tecmint_vg
# pvremove /dev/sd{b,c}1

8. Now let’s install SSM and we will see how to perform the above in ONLY 1 STEP!

# yum update && yum install system-storage-manager

We will use the same names and sizes as before:

# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 /mnt/homes /dev/sd{b,c}1

Yes! SSM will let you:

  1. initialize block devices as physical volumes
  2. create a volume group
  3. create logical volumes
  4. format LVs, and
  5. mount them using only one command

9. We can now display the information about PVs, VGs, or LVs, respectively, as follows:

# ssm list dev
# ssm list pool
# ssm list vol

Check Information of PVs, VGs, or LVs

10. As we already know, one of the distinguishing features of LVM is the possibility to resize (expand or decrease) logical volumes without downtime.

Say we are running out of space in vol02_logs but have plenty of space in vol03_homes. We will resize vol03_homes to 4 GB and expand vol02_logs to use the remaining space:

# ssm resize -s 4G /dev/tecmint_vg/vol03_homes

Run ssm list pool again and take note of the free space in tecmint_vg:

Check Volume Size

Then do:

# ssm resize -s+1.99G /dev/tecmint_vg/vol02_logs

Note that the plus sign after the -s flag indicates that the specified value should be added to the present value.

11. Removing logical volumes and volume groups is much easier with ssm as well. A simple,

# ssm remove tecmint_vg

will return a prompt asking you to confirm the deletion of the VG and the LVs it contains:

Remove Logical Volume and Volume Group

Managing Encrypted Volumes

SSM also provides system administrators with the capability of managing encryption for new or existing volumes. You will need the cryptsetup package installed first:

# yum update && yum install cryptsetup

Then issue the following command to create an encrypted volume. You will be prompted to enter a passphrase to maximize security:

# ssm create -s 3G -n vol01_docs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/docs /dev/sd{b,c}1
# ssm create -s 1G -n vol02_logs -p tecmint_vg --fstype ext4 --encrypt luks /mnt/logs /dev/sd{b,c}1
# ssm create -n vol03_homes -p tecmint_vg --fstype ext4 --encrypt luks /mnt/homes /dev/sd{b,c}1

Our next task consists of adding the corresponding entries in /etc/fstab so that those logical volumes are available on boot. Rather than using the device identifiers (/dev/something), we will use each LV’s UUID (so that our devices will still be uniquely identified should we add other logical volumes or devices), which we can find out with the blkid utility:

# blkid -o value UUID /dev/tecmint_vg/vol01_docs
# blkid -o value UUID /dev/tecmint_vg/vol02_logs
# blkid -o value UUID /dev/tecmint_vg/vol03_homes

In our case:

Find Logical Volume UUID

Next, create the /etc/crypttab file with the following contents (change the UUIDs for the ones that apply to your setup):

docs UUID=ba77d113-f849-4ddf-8048-13860399fca8 none
logs UUID=58f89c5a-f694-4443-83d6-2e83878e30e4 none
homes UUID=92245af6-3f38-4e07-8dd8-787f4690d7ac none

And insert the following entries in /etc/fstab. Note that device_name (/dev/mapper/device_name) is the mapper identifier that appears in the first column of /etc/crypttab.

# Logical volume vol01_docs:
/dev/mapper/docs    	/mnt/docs   	ext4	defaults    	0   	2
# Logical volume vol02_logs
/dev/mapper/logs    	/mnt/logs   	ext4	defaults    	0   	2
# Logical volume vol03_homes
/dev/mapper/homes    	/mnt/homes   	ext4	defaults    	0   	2

Now reboot (systemctl reboot) and you will be prompted to enter the passphrase for each LV. Afterwards you can confirm that the mount operation was successful by checking the corresponding mount points:

Verify Logical Volume Mount Points
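
The same verification can also be done from the command line, for example:

# mount | grep /mnt
# df -h /mnt/docs /mnt/logs /mnt/homes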

Conclusion

In this tutorial we have started to explore how to set up and configure system storage using classic volume management tools and SSM, which also integrates filesystem and encryption capabilities in one package. This makes SSM an invaluable tool for any sysadmin.

Let us know if you have any questions or comments – feel free to use the form below to get in touch with us!

RHCSA Series: Using ACLs (Access Control Lists) and Mounting Samba / NFS Shares – Part 7

In the last article (RHCSA series Part 6) we started explaining how to set up and configure local system storage using parted and ssm.

Configure ACL's and Mounting NFS / Samba Shares

RHCSA Series: Configure ACL’s and Mounting NFS / Samba Shares – Part 7

We also discussed how to create and mount encrypted volumes with a password during system boot. In addition, we warned you to avoid performing critical storage management operations on mounted filesystems. With that in mind, we will now review the most used file system formats in Red Hat Enterprise Linux 7 and then proceed to cover mounting, using, and unmounting network filesystems (CIFS and NFS), both manually and automatically, along with the implementation of access control lists for your system.

Prerequisites

Before proceeding further, please make sure you have a Samba server and an NFS server available (note that NFSv2 is no longer supported in RHEL 7).

During this guide we will use a machine with IP 192.168.0.10 running both services as the server, and a RHEL 7 box as the client with IP address 192.168.0.18. Later in the article we will tell you which packages you need to install on the client.

File System Formats in RHEL 7

Beginning with RHEL 7, XFS has been introduced as the default file system for all architectures due to its high performance and scalability. It currently supports a maximum filesystem size of 500 TB as per the latest tests performed by Red Hat and its partners for mainstream hardware.

Also, XFS enables user_xattr (extended user attributes) and acl (POSIX access control lists) as default mount options, unlike ext3 or ext4 (ext2 is considered deprecated as of RHEL 7), which means that you don’t need to specify those options explicitly either on the command line or in /etc/fstab when mounting an XFS filesystem (if you want to disable such options in this last case, you have to explicitly use no_acl and no_user_xattr).

Keep in mind that the extended user attributes can be assigned to files and directories for storing arbitrary additional information such as the mime type, character set or encoding of a file, whereas the access permissions for user attributes are defined by the regular file permission bits.

Access Control Lists

Every system administrator, whether beginner or expert, is well acquainted with regular access permissions on files and directories, which specify certain privileges (read, write, and execute) for the owner, the group, and “the world” (all others). However, feel free to refer to Part 3 of the RHCSA series if you need to refresh your memory a little bit.

Since the standard ugo/rwx set does not allow you to configure different permissions for different users, ACLs were introduced in order to define more detailed access rights for files and directories than those specified by regular permissions.

In fact, ACL-defined permissions are a superset of the permissions specified by the file permission bits. Let’s see how all of this is applied in the real world.

1. There are two types of ACLs: access ACLs, which can be applied to either a specific file or a directory, and default ACLs, which can only be applied to a directory. If files contained therein do not have an ACL set, they inherit the default ACL of their parent directory.

2. ACLs can be configured per user, per group, or for a user not in the owning group of a file.

3. ACLs are set (and removed) using setfacl, with the -m or -x option, respectively.

For example, let us create a group named tecmint and add users johndoe and davenull to it:

# groupadd tecmint
# useradd johndoe
# useradd davenull
# usermod -a -G tecmint johndoe
# usermod -a -G tecmint davenull

And let’s verify that both users belong to supplementary group tecmint:

# id johndoe
# id davenull

Verify Users

Let’s now create a directory called playground within /mnt, and a file named testfile.txt inside. We will set the group owner to tecmint and change its default ugo/rwx permissions to 770 (read, write, and execute permissions granted to both the owner and the group owner of the file):

# mkdir /mnt/playground
# touch /mnt/playground/testfile.txt
# chmod 770 /mnt/playground/testfile.txt

Then switch user to johndoe and davenull, in that order, and write to the file:

echo "My name is John Doe" > /mnt/playground/testfile.txt
echo "My name is Dave Null" >> /mnt/playground/testfile.txt

So far so good. Now let’s have user gacanepa write to the file – and the write operation will fail, which was to be expected.

But what if we actually need user gacanepa (who is not a member of group tecmint) to have write permissions on /mnt/playground/testfile.txt? The first thing that may come to your mind is adding that user account to group tecmint. But that will give him write permissions on ALL files where the write bit is set for the group, and we don’t want that. We only want him to be able to write to /mnt/playground/testfile.txt.

# touch /mnt/playground/testfile.txt
# chown :tecmint /mnt/playground/testfile.txt
# chmod 770 /mnt/playground/testfile.txt
# su johndoe
$ echo "My name is John Doe" > /mnt/playground/testfile.txt
$ su davenull
$ echo "My name is Dave Null" >> /mnt/playground/testfile.txt
$ su gacanepa
$ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt

Manage User Permissions

Let’s give user gacanepa read and write access to /mnt/playground/testfile.txt.

Run as root,

# setfacl -R -m u:gacanepa:rwx /mnt/playground

and you’ll have successfully added an ACL that allows gacanepa to write to the test file. Then switch to user gacanepa and try to write to the file again:

$ echo "My name is Gabriel Canepa" >> /mnt/playground/testfile.txt

To view the ACLs for a specific file or directory, use getfacl:

# getfacl /mnt/playground/testfile.txt

Check ACLs of Files
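
Should you later need to revoke that entry, the -x option removes it (a sketch using the same file):

# setfacl -x u:gacanepa /mnt/playground/testfile.txt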

To set a default ACL on a directory (whose contents will inherit it unless overridden), add d: before the rule and specify a directory instead of a file name:

# setfacl -m d:o:r /mnt/playground

The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/playground directory. Note the difference in the output of getfacl /mnt/playground before and after the change:

Set Default ACL in Linux

Chapter 20 in the official RHEL 7 Storage Administration Guide provides more ACL examples, and I highly recommend you take a look at it and have it handy as reference.

Mounting NFS Network Shares

To show the list of NFS shares available in your server, you can use the showmount command with the -e option, followed by the machine name or its IP address. This tool is included in the nfs-utils package:

# yum update && yum install nfs-utils

Then do:

# showmount -e 192.168.0.10

and you will get a list of the available NFS shares on 192.168.0.10:

Check Available NFS Shares

To mount NFS network shares on the local client using the command line on demand, use the following syntax:

# mount -t nfs -o [options] remote_host:/remote/directory /local/directory

which, in our case, translates to:

# mount -t nfs 192.168.0.10:/NFS-SHARE /mnt/nfs

If you get the following error message: “Job for rpc-statd.service failed. See “systemctl status rpc-statd.service” and “journalctl -xn” for details.”, make sure the rpcbind service is enabled and started in your system first:

# systemctl enable rpcbind.socket
# systemctl restart rpcbind.service

and then reboot. That should do the trick and you will be able to mount your NFS share as explained earlier. If you need to mount the NFS share automatically on system boot, add a valid entry to the /etc/fstab file:

remote_host:/remote/directory /local/directory nfs options 0 0

The variables remote_host, /remote/directory, /local/directory, and options (which is optional) are the same ones used when manually mounting an NFS share from the command line. As per our previous example:

192.168.0.10:/NFS-SHARE /mnt/nfs nfs defaults 0 0

Mounting CIFS (Samba) Network Shares

Samba represents the tool of choice to make a network share available in a network with *nix and Windows machines. To show the Samba shares that are available, use the smbclient command with the -L flag, followed by the machine name or its IP address. This tool is included in the samba-client package:
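
# yum update && yum install samba-client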

You will be prompted for root’s password on the remote host:

# smbclient -L 192.168.0.10

Check Samba Shares

To mount Samba network shares on the local client you will first need to install the cifs-utils package:

# yum update && yum install cifs-utils

Then use the following syntax on the command line:

# mount -t cifs -o credentials=/path/to/credentials/file //remote_host/samba_share /local/directory

which, in our case, translates to:

# mount -t cifs -o credentials=~/.smbcredentials //192.168.0.10/gacanepa /mnt/samba

where .smbcredentials:

username=gacanepa
password=XXXXXX

is a hidden file inside root’s home (/root/) with permissions set to 600, so that no one else but the owner of the file can read or write to it.
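
A minimal sketch to create that file and lock down its permissions (replace the placeholder password with the real one):

# printf 'username=gacanepa\npassword=XXXXXX\n' > /root/.smbcredentials
# chmod 600 /root/.smbcredentials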

Please note that the samba_share is the name of the Samba share as returned by smbclient -L remote_host as shown above.

Now, if you need the Samba share to be available automatically on system boot, add a valid entry to the /etc/fstab file as follows:

//remote_host/samba_share /local/directory cifs options 0 0

The variables remote_host, samba_share, /local/directory, and options (which is optional) are the same ones used when manually mounting a Samba share from the command line. Following the definitions given in our previous example:

//192.168.0.10/gacanepa /mnt/samba	cifs credentials=/root/smbcredentials,defaults 0 0

Conclusion

In this article we have explained how to set up ACLs in Linux, and discussed how to mount CIFS and NFS network shares in a RHEL 7 client.

I recommend you practice these concepts and even mix them (go ahead and try to set ACLs on mounted network shares) until you feel comfortable. If you have questions or comments feel free to use the form below to contact us anytime. Also, feel free to share this article through your social networks.

RHCSA Series: Securing SSH, Setting Hostname and Enabling Network Services – Part 8

As a system administrator you will often have to log on to remote systems to perform a variety of administration tasks using a terminal emulator. You will rarely sit in front of a real (physical) terminal, so you need to set up a way to log on remotely to the machines that you will be asked to manage.

In fact, that may be the last thing that you will have to do in front of a physical terminal. For security reasons, using Telnet for this purpose is not a good idea, as all traffic goes through the wire in unencrypted, plain text.

In this article we will also review how to configure network services to start automatically at boot, and learn how to set up network and hostname resolution statically or dynamically.

RHCSA: Secure SSH and Enable Network Services

RHCSA: Secure SSH and Enable Network Services – Part 8

Installing and Securing SSH Communication

For you to be able to log on remotely to a RHEL 7 box using SSH, you will have to install the openssh, openssh-clients, and openssh-server packages. The following command not only will install the remote login program, but also the secure file transfer tool, as well as the remote file copy utility:

# yum update && yum install openssh openssh-clients openssh-servers

Note that it’s a good idea to install the server counterparts as you may want to use the same machine as both client and server at some point or another.

After installation, there are a couple of basic things that you need to take into account if you want to secure remote access to your SSH server. The following settings should be present in the /etc/ssh/sshd_config file.

1. Change the port the sshd daemon listens on from 22 (the default value) to a high port (2000 or greater), but first make sure the chosen port is not being used.

For example, let’s suppose you choose port 2500. Use netstat in order to check whether the chosen port is being used or not:

# netstat -npltu | grep 2500

If netstat does not return anything, you can safely use port 2500 for sshd, and you should change the Port setting in the configuration file as follows:

Port 2500

2. Only allow protocol 2:

Protocol 2

3. Configure the authentication timeout to 2 minutes, do not allow root logins, and restrict to a minimum the list of users who are allowed to log in via ssh:

LoginGraceTime 2m
PermitRootLogin no
AllowUsers gacanepa

4. If possible, use key-based instead of password authentication:

PasswordAuthentication no
RSAAuthentication yes
PubkeyAuthentication yes

This assumes that you have already created a key pair with your user name on your client machine and copied it to your server as explained here.

Read Also: Enable SSH Passwordless Login
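
Once you have saved the changes above, restart the daemon and test the connection from a client, using the custom port chosen earlier:

# systemctl restart sshd
$ ssh -p 2500 gacanepa@192.168.0.18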

Configuring Networking and Name Resolution

1. Every system administrator should be well acquainted with the following system-wide configuration files:

  1. /etc/hosts is used to resolve hostnames to IPs (and vice versa) in small networks.

Every line in the /etc/hosts file has the following structure:

IP address - Hostname - FQDN

For example,

192.168.0.10	laptop	laptop.gabrielcanepa.com.ar

2. /etc/resolv.conf specifies the IP addresses of DNS servers and the search domain, which is used for completing a given query name to a fully qualified domain name when no domain suffix is supplied.

Under normal circumstances, you don’t need to edit this file as it is managed by the system. However, should you want to change DNS servers, be advised that you need to stick to the following structure in each line:

nameserver - IP address

For example,

nameserver 8.8.8.8

3. /etc/host.conf specifies the methods and the order by which hostnames are resolved within a network. In other words, it tells the name resolver which services to use, and in what order.

Although this file has several options, the most common and basic setup includes a line as follows:

order bind,hosts

This indicates that the resolver should first query the nameservers specified in resolv.conf and then look in the /etc/hosts file for name resolution.

4. /etc/sysconfig/network contains routing and global host information for all network interfaces. The following values may be used:

NETWORKING=yes|no
HOSTNAME=value

Where value should be the Fully Qualified Domain Name (FQDN).

GATEWAY=XXX.XXX.XXX.XXX

Where XXX.XXX.XXX.XXX is the IP address of the network’s gateway.

GATEWAYDEV=value

In a machine with multiple NICs, value is the gateway device, such as enp0s3.

5. Files inside /etc/sysconfig/network-scripts (network adapter configuration files).

Inside the directory mentioned previously, you will find several plain text files named:

ifcfg-name

Where name is the name of the NIC as returned by ip link show:

Check Network Link Status

For example:

Network Files

Other than for the loopback interface, you can expect a similar configuration for your NICs. Note that some variables, if set, will override those present in /etc/sysconfig/network for this particular interface. Each line is commented for clarification in this article but in the actual file you should avoid comments:

HWADDR=08:00:27:4E:59:37 # The MAC address of the NIC
TYPE=Ethernet # Type of connection
BOOTPROTO=static # This indicates that this NIC has been assigned a static IP. If this variable was set to dhcp, the NIC will be assigned an IP address by a DHCP server and thus the next two lines should not be present in that case.
IPADDR=192.168.0.18
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NM_CONTROLLED=no # Should be added to the Ethernet interface to prevent NetworkManager from changing the file.
NAME=enp0s3
UUID=14033805-98ef-4049-bc7b-d4bea76ed2eb
ONBOOT=yes # The operating system should bring up this NIC during boot
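
After editing an ifcfg- file, restart networking for the changes to take effect (this assumes the interface is not managed by NetworkManager, as indicated by NM_CONTROLLED=no above):

# systemctl restart network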

Setting Hostnames

In Red Hat Enterprise Linux 7, the hostnamectl command is used to both query and set the system’s hostname.

To display the current hostname, type:

# hostnamectl status

Check System hostname in CentOS 7

Check System Hostname

To change the hostname, use:

# hostnamectl set-hostname [new hostname]

For example,

# hostnamectl set-hostname cinderella

For the changes to take effect you will need to restart the hostnamed daemon (that way you will not have to log off and on again in order to apply the change):

# systemctl restart systemd-hostnamed

Set System Hostname in CentOS 7

Set System Hostname

In addition, RHEL 7 also includes the nmcli utility that can be used for the same purpose. To display the hostname, run:

# nmcli general hostname

and to change it:

# nmcli general hostname [new hostname]

For example,

# nmcli general hostname rhel7

Set Hostname Using nmcli Command

Starting Network Services on Boot

To wrap up, let us see how we can ensure that network services are started automatically on boot. In simple terms, this is done by creating symlinks to certain files specified in the [Install] section of the service configuration files.

In the case of firewalld (/usr/lib/systemd/system/firewalld.service):

[Install]
WantedBy=basic.target
Alias=dbus-org.fedoraproject.FirewallD1.service

To enable the service:

# systemctl enable firewalld

On the other hand, disabling firewalld entails removing the symlinks:

# systemctl disable firewalld

Enable Service at System Boot

Conclusion

In this article we have summarized how to install and secure connections via SSH to a RHEL server, how to change its name, and finally how to ensure that network services are started on boot. If you notice that a certain service has failed to start properly, you can use systemctl status -l [service] and journalctl -xn to troubleshoot it.

Feel free to let us know what you think about this article using the comment form below. Questions are also welcome. We look forward to hearing from you!

RHCSA Series: Installing, Configuring and Securing a Web and FTP Server – Part 9

A web server (also known as an HTTP server) is a service that serves content (most commonly web pages, but other types of documents as well) to clients in a network.

An FTP server is one of the oldest and most commonly used resources (even to this day) to make files available to clients on a network in cases where strong security is not necessary, since FTP transmits the username and password without encryption.

The web server available in RHEL 7 is version 2.4 of the Apache HTTP Server. As for the FTP server, we will use the Very Secure Ftp Daemon (aka vsftpd) to establish connections secured by TLS.

Configuring and Securing Apache and FTP Server

RHCSA: Installing, Configuring and Securing Apache and FTP – Part 9

In this article we will explain how to install, configure, and secure a web server and an FTP server in RHEL 7.

Installing Apache and FTP Server

In this guide we will use a RHEL 7 server with a static IP address of 192.168.0.18/24. To install Apache and VSFTPD, run the following command:

# yum update && yum install httpd vsftpd

When the installation completes, both services will be disabled initially, so we need to start them manually for the time being and enable them to start automatically beginning with the next boot:

# systemctl start httpd
# systemctl enable httpd
# systemctl start vsftpd
# systemctl enable vsftpd

In addition, we have to open ports 80 and 21, where the web and ftp daemons are listening, respectively, in order to allow access to those services from the outside:

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --zone=public --add-service=ftp --permanent
# firewall-cmd --reload

To confirm that the web server is working properly, fire up your browser and enter the IP of the server. You should see the test page:

Confirm Apache Web Server

As for the ftp server, we will have to configure it further, which we will do in a minute, before confirming that it’s working as expected.

Configuring and Securing Apache Web Server

The main configuration file for Apache is located in /etc/httpd/conf/httpd.conf, but it may rely on other files present inside /etc/httpd/conf.d.

Although the default configuration should be sufficient for most cases, it’s a good idea to become familiar with all the available options as described in the official documentation.

As always, make a backup copy of the main configuration file before editing it:

# cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.$(date +%Y%m%d)

Then open it with your preferred text editor and look for the following variables:

  1. ServerRoot: the directory where the server’s configuration, error, and log files are kept.
  2. Listen: instructs Apache to listen on a specific IP address and/or port.
  3. Include: allows the inclusion of other configuration files, which must exist. Otherwise, the server will fail to start, as opposed to the IncludeOptional directive, which is silently ignored if the specified configuration files do not exist.
  4. User and Group: the name of the user/group to run the httpd service as.
  5. DocumentRoot: The directory out of which Apache will serve your documents. By default, all requests are taken from this directory, but symbolic links and aliases may be used to point to other locations.
  6. ServerName: this directive sets the hostname (or IP address) and port that the server uses to identify itself.

The first security measure will consist of creating a dedicated user and group (i.e. tecmint/tecmint) to run the web server as and changing the default port to a higher one (9000 in this case):

ServerRoot "/etc/httpd"
Listen 192.168.0.18:9000
User tecmint
Group tecmint
DocumentRoot "/var/www/html"
ServerName 192.168.0.18:9000

You can test the configuration file with:

# apachectl configtest

and if everything is OK, then restart the web server:

# systemctl restart httpd

and don’t forget to enable the new port (and disable the old one) in the firewall:

# firewall-cmd --zone=public --remove-port=80/tcp --permanent
# firewall-cmd --zone=public --add-port=9000/tcp --permanent
# firewall-cmd --reload

Note that, due to SELinux policies, you can only use the ports returned by

# semanage port -l | grep -w '^http_port_t'

for the web server.

If you want to use another port (e.g. TCP port 8100), you will have to add it to the SELinux port context for the httpd service:

# semanage port -a -t http_port_t -p tcp 8100

Add Apache Port to SELinux Policies

To further secure your Apache installation, follow these steps:

1. The user Apache is running as should not have access to a shell:

# usermod -s /sbin/nologin tecmint

2. Disable directory listing in order to prevent the browser from displaying the contents of a directory if there is no index.html present in that directory.

Edit /etc/httpd/conf/httpd.conf (and the configuration files for virtual hosts, if any) and make sure that the Options directive, both at the top and at Directory block levels, is set to None:

Options None

3. Hide information about the web server and the operating system in HTTP responses. Edit /etc/httpd/conf/httpd.conf as follows:

ServerTokens Prod 
ServerSignature Off
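
One way to confirm that version details are no longer exposed is to inspect the response headers after restarting httpd (curl is used here merely as an example client):

# curl -I http://192.168.0.18:9000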

Now you are ready to start serving content from your /var/www/html directory.

Configuring and Securing FTP Server

As in the case of Apache, the main configuration file for Vsftpd (/etc/vsftpd/vsftpd.conf) is well commented, and while the default configuration should suffice for most applications, you should become acquainted with the documentation and the man page (man vsftpd.conf) in order to operate the ftp server more efficiently (I can’t emphasize that enough!).

In our case, these are the directives used:

anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
chroot_local_user=YES
allow_writeable_chroot=YES
listen=NO
listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

By using chroot_local_user=YES, local users will be (by default) placed in a chroot’ed jail in their home directory right after login. This means that local users will not be able to access any files outside their corresponding home directories.

Finally, to allow ftp to read files in the user’s home directory, set the following SELinux boolean:

# setsebool -P ftp_home_dir on

You can now connect to the ftp server using a client such as Filezilla:

Check FTP Connection

Note that the /var/log/xferlog log records downloads and uploads, which correspond to the above directory listing:

Monitor FTP Download and Upload

Read Also: Limit FTP Network Bandwidth Used by Applications in a Linux System with Trickle

Summary

In this tutorial we have explained how to set up a web and an FTP server. Due to the vastness of the subject, it is not possible to cover all the aspects of these topics (e.g. virtual web hosts). Thus, I recommend you also check other excellent articles on this website about Apache.

RHCSA Series: Yum Package Management, Automating Tasks with Cron and Monitoring System Logs – Part 10

In this article we will review how to install, update, and remove packages in Red Hat Enterprise Linux 7. We will also cover how to automate tasks using cron, and will finish this guide explaining how to locate and interpret system log files, with the focus of teaching you why all of these are essential skills for every system administrator.

Yum Package Management Cron Jobs Log Monitoring Linux

RHCSA: Yum Package Management, Cron Job Scheduling and Log Monitoring – Part 10

Managing Packages Via Yum

To install a package along with all its dependencies that are not already installed, you will use:

# yum -y install package_name(s)

Where package_name(s) represent at least one real package name.

For example, to install httpd and mlocate (in that order), type:

# yum -y install httpd mlocate

Note that the -y flag in the example above bypasses the confirmation prompts that yum presents before performing the actual download and installation of the requested programs. You can leave it out if you want.

By default, yum will install the package with the architecture that matches the OS architecture, unless overridden by appending the package architecture to its name.

For example, on a 64-bit system, yum install package will install the x86_64 version of package, whereas yum install package.i686 (if available) will install the 32-bit one.

There will be times when you want to install a package but don’t know its exact name. The search and search all options can look through the currently enabled repositories for a certain keyword in the package name and summary, or also in its description and URL, respectively.

For example,

# yum search log

will search the installed repositories for packages with the word log in their names and summaries, whereas

# yum search all log

will look for the same keyword in the package description and url fields as well.

Once the search returns a package listing, you may want to display further information about some of them before installing. That is when the info option will come in handy:

# yum info logwatch

Search Package Information

You can regularly check for updates with the following command:

# yum check-update

The above command will return all the installed packages for which an update is available. In the example shown in the image below, only rhel-7-server-rpms has an update available:

Check For Package Updates

You can then update that package alone with,

# yum update rhel-7-server-rpms

If there are several packages that can be updated, yum update will update all of them at once.

Now what happens when you know the name of an executable, such as ps2pdf, but don’t know which package provides it? You can find out with yum whatprovides "*/[executable]":

# yum whatprovides "*/ps2pdf"

Find Package Belongs to Which Package

Now, when it comes to removing a package, you can do so with yum remove package. Easy, huh? This goes to show that yum is a complete and powerful package manager.

# yum remove httpd

Read Also: 20 Yum Commands to Manage RHEL 7 Package Management

Good Old Plain RPM

RPM (aka RPM Package Manager, or originally RedHat Package Manager) can also be used to install or update packages when they come in form of standalone .rpm packages.

It is often utilized with the -Uvh flags to indicate that it should install the package if it’s not already present or attempt to update it if it’s installed (-U), producing a verbose output (-v) and a progress bar with hash marks (-h) while the operation is being performed. For example,

# rpm -Uvh package.rpm

Another typical use of rpm is to produce a list of currently installed packages with rpm -qa (short for query all):

# rpm -qa

Query All RPM Packages

Read Also: 20 RPM Commands to Install Packages in RHEL 7

Scheduling Tasks using Cron

Linux and other Unix-like operating systems include a tool called cron that allows you to schedule tasks (i.e. commands or shell scripts) to run on a periodic basis. Every minute, cron checks the /var/spool/cron directory for files named after accounts in /etc/passwd.

When executing commands, any output is mailed to the owner of the crontab (or to the user specified in the MAILTO environment variable in the /etc/crontab, if it exists).

Crontab files (which are created by typing crontab -e and pressing Enter) have the following format:

Crontab Entries

Thus, if we want to update the local file database (which is used by locate to find files by name or pattern) on the second day of the month at 2:15 am, we need to add the following crontab entry:

15 02 2 * * /bin/updatedb

The above crontab entry reads, “Run /bin/updatedb on the second day of the month, every month of the year, regardless of the day of the week, at 2:15 am”. As I’m sure you already guessed, the star symbol is used as a wildcard character.

After adding a cron job, you can see that a file named root was added inside /var/spool/cron, as we mentioned earlier. That file lists all the tasks that the crond daemon should run:

# ls -l /var/spool/cron

Check All Cron Jobs

In the above image, the current user’s crontab can be displayed either using cat /var/spool/cron/root or,

# crontab -l

If you need to run a task on a more fine-grained basis (for example, twice a day or three times each month), cron can also help you to do that.

For example, to run /my/script on the 1st and 15th of each month and send any output to /dev/null, you can add two crontab entries as follows:

01 00 1 * * /my/script > /dev/null 2>&1
01 00 15 * * /my/script > /dev/null 2>&1

But in order for the task to be easier to maintain, you can combine both entries into one:

01 00 1,15 * *  /my/script > /dev/null 2>&1

Following the previous example, we can run /my/other/script at 1:30 am on the first day of the month every three months:

30 01 1 1,4,7,10 * /my/other/script > /dev/null 2>&1

But when you have to repeat a certain task every “x” minutes, hours, days, or months, you can divide the corresponding time field by the desired frequency using the */x syntax. The following crontab entry has the exact same meaning as the previous one:

30 01 1 */3 * /my/other/script > /dev/null 2>&1

Or perhaps you need to run a certain job on a fixed frequency or after the system boots, for example. You can use one of the following strings instead of the five fields to indicate the exact time when you want your job to run:

@reboot    	Run when the system boots.
@yearly    	Run once a year, same as 00 00 1 1 *.
@monthly   	Run once a month, same as 00 00 1 * *.
@weekly    	Run once a week, same as 00 00 * * 0.
@daily     	Run once a day, same as 00 00 * * *.
@hourly    	Run once an hour, same as 00 * * * *.
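
For example, to run the hypothetical /my/script once a day, discarding its output:

@daily /my/script > /dev/null 2>&1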

Read Also: 11 Commands to Schedule Cron Jobs in RHEL 7

Locating and Checking Logs

System logs are located (and rotated) inside the /var/log directory. According to the Linux Filesystem Hierarchy Standard, this directory contains miscellaneous log files, which are written to it or an appropriate subdirectory (such as audit, httpd, or samba in the image below) by the corresponding daemons during system operation:

# ls /var/log

Linux Log Files Location

Other interesting logs are dmesg (contains all messages from the kernel ring buffer), secure (logs connection attempts that require user authentication), messages (system-wide messages) and wtmp (records of all user logins and logouts).

Logs are very important in that they allow you to have a glimpse of what is going on at all times in your system, and what has happened in the past. They represent a priceless tool to troubleshoot and monitor a Linux server, and thus are often used with the tail -f command to display events, in real time, as they happen and are recorded in a log.

For example, if you want to display kernel-related events, type the following command:

# tail -f /var/log/dmesg

Likewise, if you want to view access to your web server:

# tail -f /var/log/httpd/access.log

Summary

If you know how to efficiently manage packages, schedule tasks, and where to look for information about the current and past operation of your system, you can rest assured that you will not run into surprises very often. I hope this article has helped you learn or refresh your knowledge about these basic skills.

Don’t hesitate to drop us a line using the contact form below if you have any questions or comments.

RHCSA Series: Firewall Essentials and Network Traffic Control Using FirewallD and Iptables – Part 11

In simple words, a firewall is a security system that controls the incoming and outgoing traffic in a network based on a set of predefined rules (such as the packet destination / source or type of traffic, for example).

Control Network Traffic with FirewallD and Iptables

RHCSA: Control Network Traffic with FirewallD and Iptables – Part 11

In this article we will review the basics of firewalld, the default dynamic firewall daemon in Red Hat Enterprise Linux 7, and iptables service, the legacy firewall service for Linux, with which most system and network administrators are well acquainted, and which is also available in RHEL 7.

A Comparison Between FirewallD and Iptables

Under the hood, both firewalld and the iptables service talk to the netfilter framework in the kernel through the same interface, not surprisingly, the iptables command. However, as opposed to the iptables service, firewalld can change the settings during normal system operation without existing connections being lost.

Firewalld should be installed by default in your RHEL system, though it may not be running. You can verify with the following commands (firewall-config is the user interface configuration tool):

# yum info firewalld firewall-config

Check FirewallD Information

and,

# systemctl status -l firewalld.service

Check FirewallD Status

On the other hand, the iptables service is not included by default, but can be installed through:

# yum update && yum install iptables-services

Both daemons can be started and enabled to start on boot with the usual systemd commands:

# systemctl start firewalld.service | iptables.service
# systemctl enable firewalld.service | iptables.service

Read Also: Useful Commands to Manage Systemd Services

As for the configuration files, the iptables service uses /etc/sysconfig/iptables (which will not exist if the package is not installed in your system). On a RHEL 7 box used as a cluster node, this file looks as follows:

Iptables Firewall Configuration

Firewalld, on the other hand, stores its configuration across two directories, /usr/lib/firewalld and /etc/firewalld:

# ls /usr/lib/firewalld /etc/firewalld

FirewallD Configuration

We will examine these configuration files further later in this article, after we add a few rules here and there. For now it will suffice to remind you that you can always find more information about both tools with:

# man firewalld.conf
# man firewall-cmd
# man iptables

Other than that, remember to take a look at Reviewing Essential Commands & System Documentation – Part 1 of the current series, where I described several sources where you can get information about the packages installed on your RHEL 7 system.

Using Iptables to Control Network Traffic

You may want to refer to Configure Iptables Firewall – Part 8 of the Linux Foundation Certified Engineer (LFCE) series to refresh your memory about iptables internals before proceeding further. That way, we will be able to jump right into the examples.

Example 1: Allowing both incoming and outgoing web traffic

TCP ports 80 and 443 are the default ports used by the Apache web server to handle normal (HTTP) and secure (HTTPS) web traffic. You can allow incoming and outgoing web traffic through both ports on the enp0s3 interface as follows:

# iptables -A INPUT -i enp0s3 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
# iptables -A OUTPUT -o enp0s3 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
# iptables -A INPUT -i enp0s3 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
# iptables -A OUTPUT -o enp0s3 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

Example 2: Block all (or some) incoming connections from a specific network

There may be times when you need to block all (or some) type of traffic originating from a specific network, say 192.168.1.0/24 for example:

# iptables -I INPUT -s 192.168.1.0/24 -j DROP

will drop all packets coming from the 192.168.1.0/24 network, whereas,

# iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT

will only allow incoming traffic through port 22.

Example 3: Redirect incoming traffic to another destination

If you use your RHEL 7 box not only as a software firewall, but also as a router sitting between two distinct networks, IP forwarding must already be enabled in your system. If not, you need to edit /etc/sysctl.conf and set the value of net.ipv4.ip_forward to 1, as follows:

net.ipv4.ip_forward = 1

then save the change, close your text editor and finally run the following command to apply the change:

# sysctl -p /etc/sysctl.conf
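
You can then verify that the setting took effect:

# sysctl net.ipv4.ip_forward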

For example, you may have a printer installed at an internal box with IP 192.168.0.10, with the CUPS service listening on port 631 (both on the print server and on your firewall). In order to forward print requests from clients on the other side of the firewall, you should add the following iptables rule:

# iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 631 -j DNAT --to 192.168.0.10:631

Please keep in mind that iptables reads its rules sequentially, so make sure the default policies or later rules do not override those outlined in the examples above.

Getting Started with FirewallD

One of the changes introduced with firewalld is zones. This concept allows you to separate networks into different zones based on the level of trust the user has decided to place on the devices and traffic within that network.

To list the active zones:

# firewall-cmd --get-active-zones

In the example below, the public zone is active, and the enp0s3 interface has been assigned to it automatically. To view all the information about a particular zone:

# firewall-cmd --zone=public --list-all

List all FirewallD Zones

Since you can read more about zones in the RHEL 7 Security guide, we will only list some specific examples here.

Example 4: Allowing services through the firewall

To get a list of the supported services, use:

# firewall-cmd --get-services

List All Supported Services

To allow http and https web traffic through the firewall, effective immediately and on subsequent boots:

# firewall-cmd --zone=MyZone --add-service=http
# firewall-cmd --zone=MyZone --permanent --add-service=http
# firewall-cmd --zone=MyZone --add-service=https
# firewall-cmd --zone=MyZone --permanent --add-service=https
# firewall-cmd --reload

If --zone is omitted, the default zone (you can check with firewall-cmd --get-default-zone) is used.

To remove the rule, replace the word add with remove in the above commands.

Example 5: IP / Port forwarding

First off, you need to find out if masquerading is enabled for the desired zone:

# firewall-cmd --zone=MyZone --query-masquerade

In the image below, we can see that masquerading is enabled for the external zone, but not for public:

Check Masquerading Status in Firewalld

Check Masquerading Status

You can either enable masquerading for public:

# firewall-cmd --zone=public --add-masquerade

or use masquerading in external. Here’s what we would do to replicate Example 3 with firewalld:

# firewall-cmd --zone=external --add-forward-port=port=631:proto=tcp:toport=631:toaddr=192.168.0.10

And don’t forget to reload the firewall.
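
Note that --add-forward-port, as used above, only modifies the runtime configuration. To make the rule survive reboots, you would repeat it with the --permanent flag and then reload:

# firewall-cmd --zone=external --permanent --add-forward-port=port=631:proto=tcp:toport=631:toaddr=192.168.0.10
# firewall-cmd --reload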

You can find further examples in Part 9 of the RHCSA series, where we explained how to allow or disable the ports that are usually used by a web server and an FTP server, and how to change the corresponding rules when the default ports for those services are changed. In addition, you may want to refer to the firewalld wiki for further examples.

Read Also: Useful FirewallD Examples to Configure Firewall in RHEL 7

Conclusion

In this article we have explained what a firewall is, what are the available services to implement one in RHEL 7, and provided a few examples that can help you get started with this task. If you have any comments, suggestions, or questions, feel free to let us know using the form below. Thank you in advance!

RHCSA Series: Automate RHEL 7 Installations Using ‘Kickstart’ – Part 12

Linux servers are rarely standalone boxes. Whether it is in a datacenter or in a lab environment, chances are that you have had to install several machines that will interact with one another in some way. If you multiply the time that it takes to install Red Hat Enterprise Linux 7 manually on a single server by the number of boxes that you need to set up, this can add up to a rather lengthy effort that can be avoided through the use of an unattended installation tool known as kickstart.

In this article we will show what you need in order to use the kickstart utility so that you can forget about babysitting servers during the installation process.

Automatic Kickstart Installation of RHEL 7

RHCSA: Automatic Kickstart Installation of RHEL 7

Introducing Kickstart and Automated Installations

Kickstart is an automated installation method used primarily by Red Hat Enterprise Linux (and other Fedora spin-offs, such as CentOS, Oracle Linux, etc.) to execute unattended operating system installation and configuration. Thus, kickstart installations allow system administrators to have identical systems, as far as installed package groups and system configuration are concerned, while sparing them the hassle of having to manually install each of them.

Preparing for a Kickstart Installation

To perform a kickstart installation, we need to follow these steps:

1. Create a Kickstart file, a plain text file with several predefined configuration options.

2. Make the Kickstart file available on removable media, a hard drive, or a network location. The client will use the rhel-server-7.0-x86_64-boot.iso file, whereas you will need to make the full ISO image (rhel-server-7.0-x86_64-dvd.iso) available from a network resource, such as an HTTP or FTP server (in our present case, we will use another RHEL 7 box with IP 192.168.0.18).

3. Start the Kickstart installation.

To create a kickstart file, login to your Red Hat Customer Portal account, and use the Kickstart configuration tool to choose the desired installation options. Read each one of them carefully before scrolling down, and choose what best fits your needs:

Kickstart Configuration Tool

If you specify that the installation should be performed either through HTTP, FTP, or NFS, make sure the firewall on the server allows those services.

Although you can use the Red Hat online tool to create a kickstart file, you can also create it manually using the following lines as a reference. You will notice, for example, that the installation process will be in English, using the Latin American keyboard layout and the America/Argentina/San_Luis time zone:

lang en_US
keyboard la-latin1
timezone America/Argentina/San_Luis --isUtc
rootpw $1$5sOtDvRo$In4KTmX7OmcOW9HUvWtfn0 --iscrypted
#platform x86, AMD64, or Intel EM64T
text
url --url=http://192.168.0.18//kickstart/media
bootloader --location=mbr --append="rhgb quiet crashkernel=auto"
zerombr
clearpart --all --initlabel
autopart
auth --passalgo=sha512 --useshadow
selinux --enforcing
firewall --enabled
firstboot --disable
%packages
@base
@backup-server
@print-server
%end

In the online configuration tool, use 192.168.0.18 for HTTP Server and /kickstart/tecmint.bin for HTTP Directory in the Installation section after selecting HTTP as installation source. Finally, click the Download button at the top right corner to download the kickstart file.

In the kickstart sample file above, you need to pay careful attention to:

url --url=http://192.168.0.18//kickstart/media

That directory is where you need to extract the contents of the DVD or ISO installation media. Before doing that, we will mount the ISO installation file in /media/rhel as a loop device:

# mount -o loop /var/www/html/kickstart/rhel-server-7.0-x86_64-dvd.iso /media/rhel

Mount RHEL ISO Image

Next, copy all the contents of /media/rhel to /var/www/html/kickstart/media:

# cp -R /media/rhel /var/www/html/kickstart/media

When you’re done, the directory listing and disk usage of /var/www/html/kickstart/media should look as follows:

Kickstart Media Files
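
Before moving on, it is a good idea to confirm that the web server is actually serving both the kickstart file and the installation tree. Assuming Apache is running and /var/www/html is its document root, a quick check could look like this:

# systemctl start httpd
# curl -I http://192.168.0.18/kickstart/tecmint.bin    # expect an HTTP/1.1 200 OK response
# curl -I http://192.168.0.18/kickstart/media/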

Now we’re ready to kick off the kickstart installation.

Regardless of how you choose to create the kickstart file, it’s always a good idea to check its syntax before proceeding with the installation. To do that, install the pykickstart package.

# yum update && yum install pykickstart

And then use the ksvalidator utility to check the file:

# ksvalidator /var/www/html/kickstart/tecmint.bin

If the syntax is correct, you will not get any output, whereas if there’s an error in the file, you will get a warning notice indicating the line where the syntax is not correct or unknown.

Performing a Kickstart Installation

To start, boot your client using the rhel-server-7.0-x86_64-boot.iso file. When the initial screen appears, select Install Red Hat Enterprise Linux 7.0, press the Tab key to append the following stanza, and press Enter:

inst.ks=http://192.168.0.18/kickstart/tecmint.bin

RHEL Kickstart Installation

Where tecmint.bin is the kickstart file created earlier.

When you press Enter, the automated installation will begin, and you will see the list of packages that are being installed (the number and the names will differ depending on your choice of programs and package groups):

Automatic Kickstart Installation of RHEL 7

When the automated process ends, you will be prompted to remove the installation media and then you will be able to boot into your newly installed system:

RHEL 7 Boot Screen

Although you can create your kickstart files manually as we mentioned earlier, you should consider using the recommended approach whenever possible. You can either use the online configuration tool, or the anaconda-ks.cfg file that is created by the installation process in root’s home directory.

This file actually is a kickstart file, so you may want to install the first box manually with all the desired options (maybe modify the logical volumes layout or the file system on top of each one) and then use the resulting anaconda-ks.cfg file to automate the installation of the rest.

In addition, using the online configuration tool or the anaconda-ks.cfg file to guide future installations will allow you to perform them using an encrypted root password out-of-the-box.

Conclusion

Now that you know how to create kickstart files and how to use them to automate the installation of Red Hat Enterprise Linux 7 servers, you can forget about babysitting the installation process. This will give you time to do other things, or perhaps some leisure time if you’re lucky.

Either way, let us know what you think about this article using the form below. Questions are also welcome!

Read Also: Automated Installations of Multiple RHEL/CentOS 7 Distributions using PXE and Kickstart

RHCSA Series: Mandatory Access Control Essentials with SELinux in RHEL 7 – Part 13

During this series we have explored in detail at least two access control methods: standard ugo/rwx permissions (Manage Users and Groups – Part 3) and access control lists (Configure ACL’s on File Systems – Part 7).

RHCSA Exam: SELinux Essentials and Control FileSystem Access

Although necessary as first-level permissions and access control mechanisms, they have some limitations that are addressed by Security Enhanced Linux (SELinux).

One such limitation is that a user can expose a file or directory to a security breach through a poorly crafted chmod command and thus cause an unexpected propagation of access rights. As a result, any process started by that user can do as it pleases with the files owned by the user, and ultimately a malicious or otherwise compromised piece of software can achieve root-level access to the entire system.

With those limitations in mind, the United States National Security Agency (NSA) first devised SELinux, a flexible mandatory access control method, to restrict the ability of processes to access or perform other operations on system objects (such as files, directories, network ports, etc.) according to a least-privilege model, which can be modified later as needed. In a few words, each element of the system is given only the access required to function.

In RHEL 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by default. In this article we will briefly explain the basic concepts associated with SELinux and its operation.

SELinux Modes

SELinux can operate in three different ways:

  1. Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that control the security engine.
  2. Permissive: SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode.
  3. Disabled (self-explanatory).

The getenforce command displays the current mode of SELinux, whereas setenforce (followed by a 1 or a 0) is used to change the mode to Enforcing or Permissive, respectively, during the current session only.

In order to achieve persistence across logouts and reboots, you will need to edit the /etc/selinux/config file and set the SELINUX variable to either enforcing, permissive, or disabled:

# getenforce
# setenforce 0
# getenforce
# setenforce 1
# getenforce
# cat /etc/selinux/config

Set SELinux Mode
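
If you prefer to make the persistent change from the command line instead of opening the file in a text editor, a one-liner such as the following will do (adjust the target mode as needed):

# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# grep ^SELINUX= /etc/selinux/config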

Typically you will use setenforce to toggle between SELinux modes (enforcing to permissive and back) as a first troubleshooting step. If SELinux is currently set to enforcing while you’re experiencing a certain problem, and the same goes away when you set it to permissive, you can be confident you’re looking at a SELinux permissions issue.

SELinux Contexts

A SELinux context consists of an access control environment where decisions are made based on SELinux user, role, and type (and optionally a level):

  1. A SELinux user complements a regular Linux user account by mapping it to a SELinux user account, which in turn is used in the SELinux context for processes in that session, in order to explicitly define their allowed roles and levels.
  2. The concept of role acts as an intermediary between domains and SELinux users in that it defines which process domains and file types can be accessed. This helps shield your system against privilege escalation attacks.
  3. A type defines an SELinux file type or an SELinux process domain. Under normal circumstances, processes are prevented from accessing files that other processes use, and from accessing other processes; thus, access is only allowed if a specific SELinux policy rule exists that allows it.

Let’s see how all of that works through the following examples.

EXAMPLE 1: Changing the default port for the sshd daemon

In Securing SSH – Part 8 we explained that changing the default port that sshd listens on is one of the first security measures to protect your server against external attacks. Let’s edit the /etc/ssh/sshd_config file and set the port to 9999:

Port 9999

Save the changes, and restart sshd:

# systemctl restart sshd
# systemctl status sshd

Change SSH Port

Restart SSH Service

As you can see, sshd has failed to start. But what happened?

A quick inspection of /var/log/audit/audit.log indicates that sshd has been denied permission to start on port 9999 (SELinux log messages include the word “AVC” so that they can be easily distinguished from other messages) because that is a reserved port for the JBoss Management service:

# cat /var/log/audit/audit.log | grep AVC | tail -1

Inspect SSH Logs

At this point you could disable SELinux (but don’t!) as explained earlier and try to start sshd again, and it should work. However, the semanage utility can tell us what we need to change in order to be able to start sshd on whatever port we choose without issues.

Run,

# semanage port -l | grep ssh

to get a list of the ports on which SELinux allows sshd to listen.

Semanage Tool

So let’s change the port in /etc/ssh/sshd_config to Port 9998, add the port to the ssh_port_t context, and then restart the service:

# semanage port -a -t ssh_port_t -p tcp 9998
# systemctl restart sshd
# systemctl is-active sshd

Semanage Add Port

As you can see, the service was started successfully this time. This example illustrates the fact that SELinux restricts TCP port usage according to its own internal port type definitions.

EXAMPLE 2: Allowing httpd to access sendmail

This is an example of SELinux managing a process accessing another process. If you were to implement mod_security and mod_evasive along with Apache on your RHEL 7 server, you would need to allow httpd to access sendmail in order to send a mail notification in the wake of a (D)DoS attack. In the following commands, omit the -P flag if you do not want the change to be persistent across reboots.

# semanage boolean -l | grep httpd_can_sendmail
# setsebool -P httpd_can_sendmail 1
# semanage boolean -l | grep httpd_can_sendmail

Allow Apache to Send Mails

As you can tell from the above example, SELinux boolean settings (or just booleans) are true / false rules embedded into SELinux policies. You can list all the booleans with semanage boolean -l, and alternatively pipe it to grep in order to filter the output.
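
Alternatively, you can inspect booleans with the getsebool utility, which is often quicker for a one-off check:

# getsebool -a | grep httpd        # list all httpd-related booleans with their current values
# getsebool httpd_can_sendmail     # query a single boolean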

EXAMPLE 3: Serving a static site from a directory other than the default one

Suppose you are serving a static website using a different directory than the default one (/var/www/html), say /websites (this could be the case if you’re storing your web files in a shared network drive, for example, and need to mount it at /websites).

a). Create an index.html file inside /websites with the following contents:

<html>
<h2>SELinux test</h2>
</html>

If you do,

# ls -lZ /websites/index.html

you will see that the index.html file has been labeled with the default_t SELinux type, which Apache can’t access:

Check SELinux File Permission

b). Change the DocumentRoot directive in /etc/httpd/conf/httpd.conf to /websites and don’t forget to update the corresponding Directory block. Then, restart Apache.

c). Browse to http://<web server IP address>, and you should get a 403 Forbidden HTTP response.

d). Next, change the label of /websites, recursively, to the httpd_sys_content_t type in order to grant Apache read-only access to that directory and its contents:

# semanage fcontext -a -t httpd_sys_content_t "/websites(/.*)?"

e). Finally, apply the SELinux policy created in d):

# restorecon -R -v /websites
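
You can confirm that the new label was applied by inspecting the security contexts again:

# ls -dZ /websites
# ls -lZ /websites/index.html    # should now show the httpd_sys_content_t type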

Now restart Apache and browse to http://<web server IP address> again and you will see the html file displayed correctly:

Verify Apache Page

Summary

In this article we have gone through the basics of SELinux. Note that due to the vastness of the subject, a full detailed explanation is not possible in a single article, but we believe that the principles outlined in this guide will help you to move on to more advanced topics should you wish to do so.

If I may, let me recommend two essential resources to start with: the NSA SELinux page and the RHEL 7 SELinux User’s and Administrator’s guide.

Don’t hesitate to let us know if you have any questions or comments.

RHCSA Series: Setting Up LDAP-based Authentication in RHEL 7 – Part 14

We will begin this article by outlining some LDAP basics (what it is, where it is used and why) and show how to set up a LDAP server and configure a client to authenticate against it using Red Hat Enterprise Linux 7 systems.

Setup LDAP Server and Client Authentication

RHCSA Series: Setup LDAP Server and Client Authentication – Part 14

As we will see, there are several other possible application scenarios, but in this guide we will focus entirely on LDAP-based authentication. In addition, please keep in mind that due to the vastness of the subject, we will only cover its basics here, but you can refer to the documentation outlined in the summary for more in-depth details.

For the same reason, you will note that I have decided to leave out several references to man pages of LDAP tools for the sake of brevity, but the corresponding explanations are at a fingertip’s distance (man ldapadd, for example).

That said, let’s get started.

Our Testing Environment

Our test environment consists of two RHEL 7 boxes:

Server: 192.168.0.18. FQDN: rhel7.mydomain.com
Client: 192.168.0.20. FQDN: ldapclient.mydomain.com

If you want, you can use the machine installed in Part 12: Automate RHEL 7 installations using Kickstart as client.

What is LDAP?

LDAP stands for Lightweight Directory Access Protocol and consists of a set of protocols that allow a client to access, over a network, centrally stored information (such as a directory of login shells, absolute paths to home directories, and other typical system user information, for example) that should be accessible from different places or available to a large number of end users (another example would be a directory of home addresses and phone numbers of all employees in a company).

Keeping such (and more) information centrally means it can be more easily maintained and accessed by everyone who has been granted permissions to use it.

The following diagram offers a simplified view of LDAP, and is described below in greater detail:

LDAP Diagram

Explanation of the above diagram in detail:

  1. An entry in a LDAP directory represents a single unit of information and is uniquely identified by what is called a Distinguished Name.
  2. An attribute is a piece of information associated with an entry (for example, addresses, available contact phone numbers, and email addresses).
  3. Each attribute is assigned one or more values consisting of a space-separated list. A value that is unique per entry is called a Relative Distinguished Name.

That being said, let’s proceed with the server and client installations.

Installing and Configuring a LDAP Server and Client

In RHEL 7, LDAP is implemented by OpenLDAP. To install the server and client, use the following commands, respectively:

# yum update && yum install openldap openldap-clients openldap-servers
# yum update && yum install openldap openldap-clients nss-pam-ldapd

Once the installation is complete, there are a few things we need to look at. The following steps should be performed on the server alone, unless explicitly noted otherwise:

1. Make sure SELinux does not get in the way by enabling the following booleans persistently, both on the server and the client:

# setsebool -P allow_ypbind=1 authlogin_nsswitch_use_ldap=1

Where allow_ypbind is required for LDAP-based authentication, and authlogin_nsswitch_use_ldap may be needed by some applications.

2. Enable and start the service:

# systemctl enable slapd.service
# systemctl start slapd.service

Keep in mind that you can also disable, restart, or stop the service with systemctl as well:

# systemctl disable slapd.service
# systemctl restart slapd.service
# systemctl stop slapd.service

3. Since the slapd service runs as the ldap user (which you can verify with ps -e -o pid,uname,comm | grep slapd), that user should own the /var/lib/ldap directory in order for the server to be able to modify entries created by administrative tools that can only be run as root (more on this in a minute).

Before changing the ownership of this directory recursively, copy the sample database configuration file for slapd into it:

# cp /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
# chown -R ldap:ldap /var/lib/ldap

4. Set up an OpenLDAP administrative user and assign a password:

# slappasswd

as shown in the next image:

Set LDAP Admin Password

and create an LDIF file (ldaprootpasswd.ldif) with the following contents:

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD

where:

  1. PASSWORD is the hashed string obtained earlier.
  2. cn=config indicates global config options.
  3. olcDatabase indicates a specific database instance name and can be typically found inside /etc/openldap/slapd.d/cn=config.

Referring to the theoretical background provided earlier, the ldaprootpasswd.ldif file will add an entry to the LDAP directory. In that entry, each line represents an attribute: value pair (where dn, changetype, add, and olcRootPW are the attributes and the strings to the right of each colon are their corresponding values).

You may want to keep this in mind as we proceed further, and please note that we are using the same Common Names (cn=) throughout the rest of this article, where each step depends on the previous one.

5. Now, add the corresponding LDAP entry by specifying the URI referring to the ldap server, where only the protocol/host/port fields are allowed.

# ldapadd -H ldapi:/// -f ldaprootpasswd.ldif 

The output should be similar to:

LDAP Configuration

and import some basic LDAP definitions from the /etc/openldap/schema directory:

# for def in cosine.ldif nis.ldif inetorgperson.ldif; do ldapadd -H ldapi:/// -f /etc/openldap/schema/$def; done

LDAP Definitions

6. Have LDAP use your domain in its database.

Create another LDIF file, which we will call ldapdomain.ldif, with the following contents, replacing your domain (in the Domain Component dc=) and password as appropriate:

dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth"
  read by dn.base="cn=Manager,dc=mydomain,dc=com" read by * none

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=Manager,dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcRootPW
olcRootPW: {SSHA}PASSWORD

dn: olcDatabase={2}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword,shadowLastChange by
  dn="cn=Manager,dc=mydomain,dc=com" write by anonymous auth by self write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=Manager,dc=mydomain,dc=com" write by * read

Then load it as follows:

# ldapmodify -H ldapi:/// -f ldapdomain.ldif

LDAP Domain Configuration

7. Now it’s time to add some entries to our LDAP directory. Attributes and values are separated by a colon (:) in the following file, which we’ll name baseldapdomain.ldif:

dn: dc=mydomain,dc=com
objectClass: top
objectClass: dcObject
objectclass: organization
o: mydomain com
dc: mydomain

dn: cn=Manager,dc=mydomain,dc=com
objectClass: organizationalRole
cn: Manager
description: Directory Manager

dn: ou=People,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: Group

Add the entries to the LDAP directory:

# ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f baseldapdomain.ldif

Add LDAP Domain Attributes and Values

8. Create a LDAP user called ldapuser (adduser ldapuser), then create the definitions for a LDAP group in ldapgroup.ldif.

# adduser ldapuser
# vi ldapgroup.ldif

Add the following content:

dn: cn=Manager,ou=Group,dc=mydomain,dc=com
objectClass: top
objectClass: posixGroup
gidNumber: 1004

(where gidNumber is the GID of ldapuser in /etc/group) and load it:

# ldapadd -x -W -D "cn=Manager,dc=mydomain,dc=com" -f ldapgroup.ldif

9. Add a LDIF file with the definitions for user ldapuser (ldapuser.ldif):

dn: uid=ldapuser,ou=People,dc=mydomain,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: ldapuser
uid: ldapuser
uidNumber: 1004
gidNumber: 1004
homeDirectory: /home/ldapuser
userPassword: {SSHA}fiN0YqzbDuDI0Fpqq9UudWmjZQY28S3M
loginShell: /bin/bash
gecos: ldapuser
shadowLastChange: 0
shadowMax: 0
shadowWarning: 0

and load it:

# ldapadd -x -D cn=Manager,dc=mydomain,dc=com -W -f ldapuser.ldif

LDAP User Configuration

Likewise, you can delete the user entry you just created:

# ldapdelete -x -W -D cn=Manager,dc=mydomain,dc=com "uid=ldapuser,ou=People,dc=mydomain,dc=com"
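
At any point you can also inspect what is actually stored in the directory with ldapsearch. For example, the following anonymous simple-bind query dumps all the entries under our base DN:

# ldapsearch -x -b "dc=mydomain,dc=com"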

10. Allow communication through the firewall:

# firewall-cmd --add-service=ldap

11. Last, but not least, enable the client to authenticate using LDAP.

To help us in this final step, we will use the authconfig utility (an interface for configuring system authentication resources).

With the following command, after authentication against the LDAP server succeeds, the home directory for the requested user is created if it doesn’t already exist:

# authconfig --enableldap --enableldapauth --ldapserver=rhel7.mydomain.com --ldapbasedn="dc=mydomain,dc=com" --enablemkhomedir --update

LDAP Client Configuration
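
Once the client has been configured, you can verify that it resolves users through LDAP with getent, which queries the Name Service Switch (for this test to be conclusive, ldapuser must not also exist as a local account on the client):

# getent passwd ldapuser
# id ldapuser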

Summary

In this article we have explained how to set up basic authentication against a LDAP server. To further configure the setup described in the present guide, please refer to Chapter 13 – LDAP Configuration in the RHEL 7 System administrator’s guide, paying special attention to the security settings using TLS.

Feel free to leave any questions you may have using the comment form below.

RHCSA Series: Essentials of Virtualization and Guest Administration with KVM – Part 15

If you look up the word virtualize in a dictionary, you will find that it means “to create a virtual (rather than actual) version of something”. In computing, the term virtualization refers to the possibility of running multiple operating systems simultaneously and isolated one from another, on top of the same physical (hardware) system, known in the virtualization schema as host.

KVM Virtualization Basics and KVM Guest Administration

RHCSA Series: Essentials of Virtualization and Guest Administration with KVM – Part 15

Through the use of the virtual machine monitor (also known as hypervisor), virtual machines (referred to as guests) are provided virtual resources (i.e. CPU, RAM, storage, network interfaces, to name a few) from the underlying hardware.

With that in mind, it is plain to see that one of the main advantages of virtualization is cost savings (in equipment and network infrastructure and in terms of maintenance effort) and a substantial reduction in the physical space required to accommodate all the necessary hardware.

Since this brief how-to cannot cover all virtualization methods, I encourage you to refer to the documentation listed in the summary for further details on the subject.

Please keep in mind that the present article is intended to be a starting point to learn the basics of virtualization in RHEL 7 using KVM (Kernel-based Virtual Machine) with command-line utilities, and not an in-depth discussion of the topic.

Verifying Hardware Requirements and Installing Packages

In order to set up virtualization, your CPU must support it. You can verify whether your system meets the requirements with the following command:

# grep -E 'svm|vmx' /proc/cpuinfo

In the following screenshot we can see that the current system (with an AMD microprocessor) supports virtualization, as indicated by svm. If we had an Intel-based processor, we would see vmx instead in the results of the above command.

Check KVM Support

In addition, you will need to have virtualization capabilities enabled in the firmware of your host (BIOS or UEFI).
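
Besides grepping /proc/cpuinfo, you can also confirm CPU support with lscpu and check whether the kvm kernel modules are loaded (kvm_intel on Intel systems, kvm_amd on AMD):

# lscpu | grep -i virtualization
# lsmod | grep kvm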

Now install the necessary packages:

  1. qemu-kvm is an open source virtualizer that provides hardware emulation for the KVM hypervisor whereas qemu-img provides a command line tool for manipulating disk images.
  2. libvirt includes the tools to interact with the virtualization capabilities of the operating system.
  3. libvirt-python contains a module that permits applications written in Python to use the interface supplied by libvirt.
  4. libguestfs-tools: miscellaneous system administrator command line tools for virtual machines.
  5. virt-install: other command-line utilities for virtual machine administration.

# yum update && yum install qemu-kvm qemu-img libvirt libvirt-python libguestfs-tools virt-install

Once the installation completes, make sure you start and enable the libvirtd service:

# systemctl start libvirtd.service
# systemctl enable libvirtd.service

By default, each virtual machine will only be able to communicate with the other guests on the same physical server and with the host itself. To allow the guests to reach other machines inside our LAN and also the Internet, we need to set up a bridge interface on our host (say br0, for example) by,

1. adding the following line to our main NIC configuration (most likely /etc/sysconfig/network-scripts/ifcfg-enp0s3):

BRIDGE=br0

2. creating the configuration file for br0 (/etc/sysconfig/network-scripts/ifcfg-br0) with these contents (note that you may have to change the IP address, gateway address, and DNS information):

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.0.18
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NM_CONTROLLED=no
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=br0
ONBOOT=yes
DNS1=8.8.8.8
DNS2=8.8.4.4

3. finally, enabling packet forwarding by setting, in /etc/sysctl.conf,

net.ipv4.ip_forward = 1

and loading the changes to the current kernel configuration:

# sysctl -p
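
After restarting the network service, you can verify that the bridge is up, holds the expected IP address, and has enslaved the physical NIC (interface names as used above):

# systemctl restart network
# ip addr show br0
# ip link show enp0s3    # should report "master br0"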

Note that you may also need to tell firewalld that this kind of traffic should be allowed. Remember that you can refer to the article on that topic in this same series (Part 11: Network Traffic Control Using FirewallD and Iptables) if you need help to do that.

Creating VM Images

By default, VM images will be created in /var/lib/libvirt/images, and you are strongly advised not to change this unless you really need to, know what you’re doing, and want to handle the SELinux settings yourself (that topic is outside the scope of this tutorial, but you can refer to Part 13 of the RHCSA series: Mandatory Access Control Essentials with SELinux if you want to refresh your memory).

This means that you need to make sure that you have allocated the necessary space in that filesystem to accommodate your virtual machines.

The following command will create a virtual machine named tecmint-virt01 with 1 virtual CPU, 1 GB (=1024 MB) of RAM, and 20 GB of disk space (represented by /var/lib/libvirt/images/tecmint-virt01.img), using the rhel-server-7.0-x86_64-dvd.iso image located inside /home/gacanepa/ISOs as installation media and br0 as the network bridge:

# virt-install \
--network bridge=br0 \
--name tecmint-virt01 \
--ram=1024 \
--vcpus=1 \
--disk path=/var/lib/libvirt/images/tecmint-virt01.img,size=20 \
--graphics none \
--cdrom /home/gacanepa/ISOs/rhel-server-7.0-x86_64-dvd.iso \
--extra-args="console=tty0 console=ttyS0,115200"

If the installation media were located on an HTTP server instead of an image stored on your disk, you would have to replace the --cdrom flag with --location and indicate the address of the online repository.

As for the --graphics none option, it tells the installer to perform the installation in text mode exclusively. You can omit that flag if you are using a GUI and a VNC window to access the main VM console. Finally, with --extra-args we are passing kernel boot parameters to the installer that set up a serial VM console.

The installation should now proceed as on a regular (physical) server. If not, please review the steps listed above.

Managing Virtual Machines

These are some typical administration tasks that you, as a system administrator, will need to perform on your virtual machines. Note that all of the following commands need to be run from your host:

1. List all VMs:

# virsh list --all

From the output of the above command you will have to note the Id for the virtual machine (although it will also return its name and current status) because you will need it for most administration tasks related to a particular VM.

2. Display information about a guest:

# virsh dominfo [VM Id]

3. Start, restart, or stop a guest operating system:

# virsh start | reboot | shutdown [VM Id]

4. Access a VM’s serial console if networking is not available and no X server is running on the host:

# virsh console [VM Id]

Note that this will require that you add the serial console configuration information to the /etc/grub.conf file (refer to the argument passed to the --extra-args option when the VM was created).

5. Modify assigned memory or virtual CPUs:

First, shut down the guest:

# virsh shutdown [VM Id]

Edit the VM configuration for RAM:

# virsh edit [VM Id]

Then modify

<memory>[Memory size here without brackets]</memory>

Restart the VM with the new settings:

# virsh create /etc/libvirt/qemu/tecmint-virt01.xml

Finally, change the memory dynamically:

# virsh setmem [VM Id] [Memory size here without brackets]

For CPU:

# virsh edit [VM Id]

Then modify

<cpu>[Number of CPUs here without brackets]</cpu>

For further commands and details, please refer to table 26.1 in Chapter 26 of the RHEL 5 Virtualization guide (that guide, though a bit old, includes an exhaustive list of virsh commands used for guest administration).

SUMMARY

In this article we have covered some basic aspects of virtualization with KVM in RHEL 7, which is both a vast and a fascinating topic, and I hope it will be helpful as a starting guide for you to later explore more advanced subjects found in the official RHEL virtualization getting started and deployment / administration guides.

In addition, you can refer to the preceding articles in this KVM series in order to clarify or expand some of the concepts explained here.

Source

LFCE (Linux Foundation Certified Engineer)

LFCE: Installing Network Services and Configuring Automatic Startup at Boot – Part 1

A Linux Foundation Certified Engineer (LFCE) is prepared to install, configure, manage, and troubleshoot network services in Linux systems, and is responsible for the design and implementation of system architecture.

Configure Services at System Startup

Linux Foundation Certified Engineer – Part 1

Introducing The Linux Foundation Certification Program.

In this 12-article series, titled Preparation for the LFCE (Linux Foundation Certified Engineer) exam, we will cover the required domains and competencies in Ubuntu, CentOS, and openSUSE:

Part 1: Installing Network Services and Configuring Automatic Startup at Boot

Installing Network Services

When it comes to setting up and using any kind of network services, it is hard to imagine a scenario that Linux cannot be a part of. In this article we will show how to install the following network services in Linux (each configuration will be covered in upcoming separate articles):

  1. NFS (Network File System) Server
  2. Apache Web Server
  3. Squid Proxy Server + SquidGuard
  4. Email Server (Postfix + Dovecot), and
  5. Iptables

In addition, we will want to make sure all of those services are automatically started on boot or on-demand.

We must note that even though you can run all of these network services on the same physical machine or virtual private server, one of the first so-called “rules” of network security tells system administrators to avoid doing so to the extent possible. What is the reasoning behind that statement? It’s rather simple: if for some reason a network service is compromised on a machine that runs more than one of them, it can be relatively easy for an attacker to compromise the rest as well.

Now, if you really need to install multiple network services on the same machine (in a test lab, for example), make sure you enable only those that you need at a certain moment, and disable them later.

Before we begin, we need to clarify that the current article (along with the rest in the LFCS and LFCE series) is focused on a performance-based perspective, and thus cannot examine every theoretical detail about the covered topics. We will, however, introduce each topic with the necessary information as a starting point.

In order to use the following network services, you will need to disable the firewall for the time being until we learn how to allow the corresponding traffic through the firewall.

Please note that this is NOT recommended for a production setup, but we will do so for learning purposes only.

In a default Ubuntu installation, the firewall should not be active. In openSUSE and CentOS, you will need to explicitly disable it:

# systemctl stop firewalld
# systemctl disable firewalld
or
# systemctl mask firewalld

That being said, let’s get started!

Installing A NFSv4 Server

NFS in itself is a network protocol, whose latest version is NFSv4. This is the version that we will use throughout this series.

A NFS server is the traditional solution that allows remote Linux clients to mount its shares over a network and interact with those file systems as though they were mounted locally, making it possible to centralize storage resources for the network.

On CentOS
# yum update && yum install nfs-utils
On Ubuntu
# aptitude update && aptitude install nfs-kernel-server
On OpenSUSE
# zypper refresh && zypper install nfsserver

For more detailed instructions, read our article that explains how to configure an NFS server and client on Linux systems.

Installing Apache Web Server

The Apache web server is a robust and reliable FOSS implementation of a HTTP server. As of the end of October 2014, Apache powers 385 million sites, giving it a 37.45% share of the market. You can use Apache to serve a standalone website or multiple virtual hosts in one machine.

# yum update && yum install httpd		[On CentOS]
# aptitude update && aptitude install apache2 	[On Ubuntu]
# zypper refresh && zypper install apache2	[On openSUSE]

For more detailed instructions, read the following articles, which show how to create IP-based and name-based Apache virtual hosts and how to secure the Apache web server:

  1. Apache IP Based and Name Based Virtual Hosting
  2. Apache Web Server Hardening and Security Tips

Installing Squid and SquidGuard

Squid is a proxy server and web cache daemon and, as such, acts as an intermediary between several client computers and the Internet (or a router connected to the Internet), while speeding up frequent requests by caching web content and DNS resolution at the same time. It can also be used to deny (or grant) access to certain URLs by network segment or based on forbidden keywords, and it keeps a log file of all connections made to the outside world on a per-user basis.

Squidguard is a redirector that implements blacklists to enhance squid, and integrates seamlessly with it.

# yum update && yum install squid squidGuard			[On CentOS] 
# aptitude update && aptitude install squid3 squidguard		[On Ubuntu]
# zypper refresh && zypper install squid squidGuard 		[On openSUSE]

Installing Postfix and Dovecot

Postfix is a Mail Transport Agent (MTA). It is the application responsible for routing and delivering email messages from source to destination mail servers, whereas Dovecot is a widely used IMAP and POP3 email server that fetches messages from the MTA and delivers them to the right user mailbox.

Dovecot plugins for several relational database management systems are also available.

# yum update && yum install postfix dovecot 				[On CentOS] 
# aptitude update && aptitude install postfix dovecot-imapd dovecot-pop3d 	[On Ubuntu]
# zypper refresh && zypper install postfix dovecot			[On openSUSE]

About Iptables

In few words, a firewall is a network resource that is used to manage access to or from a private network, and to redirect incoming and outgoing traffic based on certain rules.

Iptables is a tool installed by default in Linux and serves as a frontend to the netfilter kernel module, which is ultimately responsible for implementing a firewall to perform packet filtering / redirection and network address translation.

Since iptables is installed in Linux by default, you only have to make sure it is actually running. To do that, we should check that the iptables modules are loaded:

# lsmod | grep ip_tables

If the above command does not return anything, it means the ip_tables module has not been loaded. In that case, run the following command to load the module.

# modprobe -a ip_tables
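
To make sure the module is also loaded automatically on subsequent boots, on a systemd-based system you can drop its name into a file under /etc/modules-load.d (on older sysvinit / upstart setups, /etc/modules serves a similar purpose):

# echo ip_tables > /etc/modules-load.d/iptables.conf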

Read Also: Basic Guide to Linux Iptables Firewall

Configuring Services Automatic Start on Boot

As discussed in Managing System Startup Process and Services – Part 7 of the 10-article series about the LFCS certification, there are several system and service managers available in Linux. Whatever your choice, you need to know how to start, stop, and restart network services on-demand, and how to enable them to automatically start on boot.

You can check what is your system and service manager by running the following command:

# ps --pid 1

Check Linux Service Manager

Depending on the output of the above command, you will use one of the following commands to configure whether each service should start automatically on boot or not:

On systemd-based
----------- Enable Service to Start at Boot -----------
# systemctl enable [service]
----------- Prevent Service from Starting at Boot -----------
# systemctl disable [service] # prevent [service] from starting at boot
On sysvinit-based
----------- Start Service at Boot in Runlevels A and B -----------
# chkconfig --level AB [service] on 
-----------  Don’t Start Service at boot in Runlevels C and D -----------
# chkconfig --level CD [service] off 
On upstart-based

Make sure the /etc/init/[service].conf script exists and contains the minimal configuration, such as:

# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process in case of crash
respawn
# Specify the process/command (add arguments if needed) to run
exec /absolute/path/to/network/service/binary arg1 arg2

You may also want to check Part 7 of the LFCS series (which we just referred to in the beginning of this section) for other useful commands to manage network services on-demand.

Summary

By now you should have all the network services described in this article installed, and possibly running with the default configuration. In later articles we will explore how to configure them according to our needs, so make sure to stay tuned! And please feel free to share your comments (or post questions, if you have any) on this article using the form below.

Reference Links
  1. About the LFCE
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCE exam

Setting Up Standard Linux File Systems and Configuring NFSv4 Server – Part 2

A Linux Foundation Certified Engineer (LFCE) is trained to set up, configure, manage, and troubleshoot network services in Linux systems, and is responsible for the design and implementation of system architecture and for solving everyday related issues.

Configuring NFS Server

Linux Foundation Certified Engineer – Part 2

Introducing The Linux Foundation Certification Program (LFCE).

In Part 1 of this series we explained how to install a NFS (Network File System) server, and set the service to start automatically on boot. If you haven’t already done so, please refer to that article and follow the outlined steps before proceeding.

  1. Installing Network Services and Configuring Automatic Startup at Boot – Part 1

I will now show you how to properly configure your NFSv4 server (without authentication security) so that you can set up network shares to use in Linux clients as if those file systems were installed locally. Note that you can use LDAP or NIS for authentication purposes, but both options are out of the scope of the LFCE certification.

Configuring a NFSv4 server

Once the NFS server is up and running, we will focus our attention on:

  1. specifying and configuring the local directories that we want to share over the network, and
  2. mounting those network shares in clients automatically, either through the /etc/fstab file or the automount kernel-based utility (autofs).

We will explain later when to choose one method or the other.

Before we begin, we need to make sure that the idmapd daemon is running and configured. This service performs the mapping of NFSv4 names (user@mydomain) to user and group IDs, and is required to implement a NFSv4 server.

Edit /etc/default/nfs-common to enable idmapd.

NEED_IDMAPD=YES

And edit /etc/idmapd.conf with your local domain name (the default is the FQDN of the host).

Domain = yourdomain.com

Then start idmapd.

# service nfs-common start 	[sysvinit / upstart based systems]
# systemctl start nfs-common 	[systemd based systems]

Exporting Network Shares

The /etc/exports file contains the main configuration directives for our NFS server, defines the file systems that will be exported to remote hosts and specifies the available options. In this file, each network share is indicated using a separate line, which has the following structure by default:

/filesystem/to/export client1([options]) clientN([options])

Where /filesystem/to/export is the absolute path to the exported file system, whereas client1 (up to clientN) represents the specific client (hostname or IP address) or network (wildcards are allowed) to which the share is being exported. Finally, options is a comma-separated list of values that are taken into account when exporting the share. Please note that there is no space between each hostname and the parenthesis that follows it.

Here is a list of the most-frequent options and their respective description:

  1. ro (short for read-only): Remote clients can mount the exported file systems with read permissions only.
  2. rw (short for read-write): Allows remote hosts to make write changes in the exported file systems.
  3. wdelay (short for write delay): The NFS server delays committing changes to disk if it suspects another related write request is imminent. However, if the NFS server receives multiple small unrelated requests, this option will reduce performance, so the no_wdelay option can be used to turn it off.
  4. sync: The NFS server replies to requests only after changes have been committed to permanent storage (i.e., the hard disk). Its opposite, the async option, may increase performance but at the cost of data loss or corruption after an unclean server restart.
  5. root_squash: Prevents remote root users from having superuser privileges in the server and assigns them the user ID for user nobody. If you want to “squash” all users (and not just root), you can use the all_squash option.
  6. anonuid / anongid: Explicitly sets the UID and GID of the anonymous account (nobody).
  7. subtree_check: If only a subdirectory of a file system is exported, this option verifies that a requested file is located in that exported subdirectory. On the other hand, if the entire file system is exported, disabling this option with no_subtree_check will speed up transfers. The default option nowadays is no_subtree_check as subtree checking tends to cause more problems than it is worth, according to man 5 exports.
  8. fsid=0 | root (zero or root): Specifies that the specified file system is the root of multiple exported directories (only applies in NFSv4).

In this article we will use the directories /NFS-SHARE and /NFS-SHARE/mydir on 192.168.0.10 (NFS server) as our test file systems.

We can always list the available network shares in a NFS server using the following command:

# showmount -e [IP or hostname]

Check NFS Shares

In the output above, we can see that the /NFS-SHARE and /NFS-SHARE/mydir shares on 192.168.0.10 have been exported to client with IP address 192.168.0.17.

Our initial configuration (in the /etc/exports file on your NFS server) for the exported directory is as follows:

/NFS-SHARE  	192.168.0.17(fsid=0,no_subtree_check,rw,root_squash,sync,anonuid=1000,anongid=1000)
/NFS-SHARE/mydir    	192.168.0.17(ro,sync,no_subtree_check)

After editing the configuration file, we must restart the NFS service:

# service nfs-kernel-server restart 		[sysvinit / upstart based system]
# systemctl restart nfs-server			[systemd based systems]

Mounting exported network shares using autofs

You may want to refer to Part 5 of the LFCS series (“How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux”) for details on mounting remote NFS shares on-demand using the mount command or permanently through the /etc/fstab file.

The downside of mounting a network file system using these methods is that the system must allocate the necessary resources to keep the share mounted at all times, or at least until we decide to unmount it manually. An alternative is to mount the desired file system on-demand automatically (without using the mount command) through autofs, which can mount file systems when they are used and unmount them after a period of inactivity.

Autofs reads /etc/auto.master, which has the following format:

[mount point]	[map file]

Where [map file] is used to indicate multiple mount points within [mount point].

This master map file (/etc/auto.master) is then used to determine which mount points are defined, and then starts an automount process with the specified parameters for each mount point.

Mounting exported NFS shares using autofs

Edit your /etc/auto.master as follows:

/media/nfs	/etc/auto.nfs-share	--timeout=60

and create a map file named /etc/auto.nfs-share with the following contents:

writeable_share  -fstype=nfs4 192.168.0.10:/
non_writeable_share  -fstype=nfs4 192.168.0.10:/mydir

Note that the first field in /etc/auto.nfs-share is the name of a subdirectory inside /media/nfs. Each subdirectory is created dynamically by autofs.

Now, restart the autofs service:

# service autofs restart 			[sysvinit / upstart based systems]
# systemctl restart autofs 			[systemd based systems]

and finally, to enable autofs to start on boot, run the following command:

# chkconfig --level 345 autofs on
# systemctl enable autofs 			[systemd based systems]

Examining mounted file systems after starting the autofs daemon

When we restart autofs, the mount command shows us that the map file (/etc/auto.nfs-share) is mounted on the specified directory in /etc/auto.master:

NFS Share Mounted

Please note that no directories have actually been mounted yet, but they will be mounted automatically when we try to access the shares specified in /etc/auto.nfs-share:

Automount NFS Shares

As we can see, the autofs service “mounts” the map file, so to speak, but waits until a request is made to the file systems to actually mount them.
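
In practice, triggering the mount is as simple as referencing the path:

# ls /media/nfs/writeable_share    # the first access makes autofs mount the share
# mount | grep nfs4                # the share should now appear as mounted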

Performing write tests in exported file systems

The anonuid and anongid options, along with the root_squash as set in the first share, allow us to map requests performed by the root user in the client to a local account in the server.

In other words, when root in the client creates a file in that exported directory, its ownership will be automatically mapped to the user account with UID and GID = 1000, provided that such account exists on the server:

Perform NFS Write Tests

Conclusion

I hope you were able to successfully set up and configure a NFS server fit for your environment using this article as a guide. You may also want to refer to the relevant man pages for further help (man exports and man idmapd.conf, for example).

Feel free to experiment with other options and test cases as outlined earlier and do not hesitate to use the form below to send your comments, suggestions, or questions. We will be glad to hear from you.

How to Setup Encrypted Filesystems and Swap Space Using ‘Cryptsetup’ Tool in Linux – Part 3

An LFCE (short for Linux Foundation Certified Engineer) is trained and has the expertise to install, manage, and troubleshoot network services in Linux systems, and is in charge of the design, implementation, and ongoing maintenance of the system architecture.

Linux Hard Disk Encryption

Linux Filesystem Encryption

Introducing The Linux Foundation Certification Program (LFCE).

The idea behind encryption is to allow only trusted persons to access your sensitive data and to protect it from falling into the wrong hands in case of loss or theft of your machine / hard disk.

In simple terms, a key is used to “lock” access to your information, so that it becomes available only when the system is running and unlocked by an authorized user. This implies that if a person tries to examine the disk contents (by plugging it into another system or by booting the machine with a LiveCD/DVD/USB), they will only find unreadable data instead of the actual files.

In this article we will discuss how to set up encrypted file systems with dm-crypt (the device-mapper crypto target), the standard kernel-level encryption tool. Please note that since dm-crypt is a block-level tool, it can only be used to encrypt full devices, partitions, or loop devices (it will not work on regular files or directories).

Preparing A Drive / Partition / Loop Device for Encryption

Since we will wipe all data present on our chosen drive (/dev/sdb), we first need to perform a backup of any important files contained on that drive BEFORE proceeding further.

Wipe all data from /dev/sdb. We are going to use the dd command here, but you could also do it with other tools such as shred. Next, we will create a partition on this device, /dev/sdb1, following the explanation in Part 4 – Create Partitions and Filesystems in Linux of the LFCS series.

# dd if=/dev/urandom of=/dev/sdb bs=4096

Testing for Encryption Support

Before we proceed further, we need to make sure that our kernel has been compiled with encryption support:

# grep -i config_dm_crypt /boot/config-$(uname -r)

Check Encryption Support in Linux

Check Encryption Support

As outlined in the image above, the dm-crypt kernel module needs to be loaded in order to set up encryption.

Installing Cryptsetup

Cryptsetup is a frontend interface for creating, configuring, accessing, and managing encrypted file systems using dm-crypt.

# aptitude update && aptitude install cryptsetup 		[On Ubuntu]
# yum update && yum install cryptsetup 				[On CentOS] 
# zypper refresh && zypper install cryptsetup 			[On openSUSE]

Setting Up an Encrypted Partition

The default operating mode for cryptsetup is LUKS (Linux Unified Key Setup), so we’ll stick with it. We will begin by setting up the LUKS partition and the passphrase:

# cryptsetup -y luksFormat /dev/sdb1

Creating an Encrypted Partition

The command above runs cryptsetup with default parameters, which can be listed with,

# cryptsetup --version

Cryptsetup Parameters

Should you want to change the cipher, hash, or key size, you can use the --cipher, --hash, and --key-size flags, respectively, with values taken from /proc/crypto.
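
For instance, an invocation with explicit parameters could look like the following (the cipher, hash, and key size shown here are only illustrative values, which must be supported by your kernel as per /proc/crypto):

# cryptsetup -y --cipher aes-xts-plain64 --key-size 512 --hash sha512 luksFormat /dev/sdb1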

Next, we need to open the LUKS partition (we will be prompted for the passphrase that we entered earlier). If the authentication succeeds, our encrypted partition will be available inside /dev/mapper with the specified name:

# cryptsetup luksOpen /dev/sdb1 my_encrypted_partition

Encrypted Partition

Now, we’ll format our partition as ext4:

# mkfs.ext4 /dev/mapper/my_encrypted_partition

and create a mount point to mount the encrypted partition. Finally, we may want to confirm whether the mount operation succeeded.

# mkdir /mnt/enc
# mount /dev/mapper/my_encrypted_partition /mnt/enc
# mount | grep partition

Mount Encrypted Partition in Linux

When you are done writing to or reading from your encrypted file system, simply unmount it

# umount /mnt/enc

and close the LUKS partition using,

# cryptsetup luksClose my_encrypted_partition

Testing Encryption

Finally, we will check whether our encrypted partition is safe:

1. Open the LUKS partition

# cryptsetup luksOpen /dev/sdb1 my_encrypted_partition

2. Enter your passphrase

3. Mount the partition

# mount /dev/mapper/my_encrypted_partition /mnt/enc

4. Create a dummy file inside the mount point.

# echo "This is Part 3 of a 12-article series about the LFCE certification" > /mnt/enc/testfile.txt

5. Verify that you can access the file that you just created.

# cat /mnt/enc/testfile.txt

6. Unmount the file system.

# umount /mnt/enc

7. Close the LUKS partition.

# cryptsetup luksClose my_encrypted_partition

8. Try to mount the partition as a regular file system. It should indicate an error.

# mount /dev/sdb1 /mnt/enc

Test Encryption on Partition

Encrypting the Swap Space for Further Security

The passphrase you entered earlier to use the encrypted partition is stored in RAM while it’s open. If someone can get his hands on this key, he will be able to decrypt the data. This is especially easy to do in the case of a laptop, since while hibernating the contents of RAM are kept on the swap partition.

To avoid leaving a copy of your key accessible to a thief, encrypt the swap partition following these steps:

1. Create a partition to be used as swap with the appropriate size (/dev/sdd1 in our case) and encrypt it as explained earlier. Name it just “swap” for convenience.

2. Set it as swap and activate it.

# mkswap /dev/mapper/swap
# swapon /dev/mapper/swap

3. Next, change the corresponding entry in /etc/fstab.

/dev/mapper/swap none        	swap	sw          	0   	0

4. Finally, edit /etc/crypttab and reboot.

swap               /dev/sdd1         /dev/urandom swap

Once the system has finished booting, you can verify the status of the swap space:

# cryptsetup status swap

Check Swap Encryption Status

Summary

In this article we have explored how to encrypt a partition and swap space. With this setup, your data should be considerably safe. Feel free to experiment and do not hesitate to get back to us if you have questions or comments. Just use the form below – we’ll be more than glad to hear from you!

How to Setup Standalone Apache Server with Name-Based Virtual Hosting with SSL Certificate – Part 4

LFCE (short for Linux Foundation Certified Engineer) is a trained professional who has the expertise to install, manage, and troubleshoot network services in Linux systems, and is in charge of the design, implementation and ongoing maintenance of the system architecture.

In this article we will show you how to configure Apache to serve web content, and how to set up name-based virtual hosts and SSL, including a self-signed certificate.

Setup Apache Virtual Hosts with SSL

Linux Foundation Certified Engineer – Part 4

Introducing The Linux Foundation Certification Program (LFCE).

Note: This article is not supposed to be a comprehensive guide on Apache, but rather a starting point for self-study about this topic for the LFCE exam. For that reason we are not covering load balancing with Apache in this tutorial either.

You may already know other ways to perform the same tasks, which is OK considering that the Linux Foundation Certification exams are strictly performance-based. Thus, as long as you ‘get the job done’, you stand a good chance of passing the exam.

Requirements

Please refer to Part 1 of the current series (“Installing Network Services and Configuring Automatic Startup at Boot”) for instructions on installing and starting Apache.

By now, you should have the Apache web server installed and running. You can verify this with the following command.

# ps -ef | grep -Ei '(apache|httpd)' | grep -v grep

Note: The above command checks for the presence of either apache or httpd (the most common names for the web daemon) among the list of running processes. If Apache is running, you will get output similar to the following.

Check Apache Processes

The ultimate method of testing the Apache installation and checking whether it’s running is launching a web browser and pointing it to the IP of the server. We should be presented with the following screen, or at least a message confirming that Apache is working.

Check Apache Webpage

Configuring Apache

The main configuration file for Apache can be located in different directories depending on your distribution.

/etc/apache2/apache2.conf 		[For Ubuntu]
/etc/httpd/conf/httpd.conf		[For CentOS]
/etc/apache2/httpd.conf 		[For openSUSE]

Fortunately for us, the configuration directives are extremely well documented in the Apache project web site. We will refer to some of them throughout this article.

Serving Pages in a Standalone Server with Apache

The most basic usage of Apache is to serve web pages in a standalone server where no virtual hosts have been configured yet. The DocumentRoot directive specifies the directory out of which Apache will serve web documents.

Note that by default, all requests are taken from this directory, but symbolic links and / or aliases may be used to point to other locations as well.

Unless matched by the Alias directive (which allows documents to be stored in the local filesystem instead of under the directory specified by DocumentRoot), the server appends the path from the requested URL to the document root to make the path to the document.
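
To illustrate, a minimal sketch of both directives (the /opt/static/images path is purely hypothetical):

DocumentRoot "/var/www/html"
Alias /images "/opt/static/images"

Keep in mind that on Apache 2.4 an aliased directory outside the document root also needs a <Directory> block granting access (Require all granted).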

For example, given the following DocumentRoot:

Apache DocumentRoot

When the web browser points to [Server IP or hostname]/lfce/tecmint.html, the server will open /var/www/html/lfce/tecmint.html (assuming that such a file exists) and save the event to its access log with a 200 (OK) response. Otherwise, the failed request will still be logged to the access log, but with a 404 (Not Found) response.

The access log is typically found inside /var/log under a representative name, such as access.log or access_log. You may even find this log (and the error log as well) inside a subdirectory (for example, /var/log/httpd in CentOS).

Apache Access Log

In addition, the failed events will be recorded in the error log:

Apache Error Log

The format of the access log can be customized according to your needs using the LogFormat directive in the main configuration file, whereas you cannot do the same with the error log.

The default format of the access log is as follows:

LogFormat "%h %l %u %t \"%r\" %>s %b" [nickname]

Where each of the letters preceded by a percent sign instructs the server to log a certain piece of information:

String Description
 %h  Remote hostname or IP address
 %l  Remote log name
 %u  Remote user if the request is authenticated
 %t  Date and time when the request was received
 %r  First line of request to the server
 %>s  Final status of the request
 %b  Size of the response [bytes]

and nickname is an optional alias that can be used to customize other logs without having to enter the whole configuration string again.

You may refer to the LogFormat directive [Custom log formats section] in the Apache docs for further options.
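
For example, a short sketch that defines the default format under a nickname and then reuses it for a custom log file (the log path is illustrative):

LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/httpd/my_access_log common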

Both log files (access and error) represent a great resource to quickly analyze at a glance what’s happening on the Apache server. Needless to say, they are the first tool a system administrator uses to troubleshoot issues.

Finally, another important directive is Listen, which tells the server to accept incoming requests on the specified port or address/port combination:

If only a port number is defined, Apache will listen on the given port on all network interfaces (the wildcard sign * is used to indicate ‘all network interfaces’).

If both an IP address and a port are specified, Apache will listen on that combination of port and network interface only.

Please note (as you will see in the examples below) that multiple Listen directives can be used at the same time to specify multiple addresses and ports to listen to. This option instructs the server to respond to requests from any of the listed addresses and ports.
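
For example (the IP address below is an assumption based on our test environment):

Listen 80
Listen 192.168.0.15:8080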

Setting Up Name-Based Virtual Hosts

The concept of virtual host defines an individual site (or domain) that is served by the same physical machine. Actually, multiple sites / domains can be served off a single “real” server as virtual host. This process is transparent to the end user, to whom it appears that the different sites are being served by distinct web servers.

Name-based virtual hosting allows the server to rely on the client to report the hostname as part of the HTTP headers. Thus, using this technique, many different hosts can share the same IP address.

Each virtual host is configured in a directory within DocumentRoot. For our case, we will use the following dummy domains for the testing setup, each located in the corresponding directory:

  1. ilovelinux.com – /var/www/html/ilovelinux.com/public_html
  2. linuxrocks.org – /var/www/html/linuxrocks.org/public_html
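
If those directory trees do not exist yet, we can create them in one go (a quick sketch):

# mkdir -p /var/www/html/ilovelinux.com/public_html
# mkdir -p /var/www/html/linuxrocks.org/public_html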

In order for pages to be displayed correctly, we will chmod each VirtualHost’s directory to 755:

# chmod -R 755 /var/www/html/ilovelinux.com/public_html
# chmod -R 755 /var/www/html/linuxrocks.org/public_html

Next, create a sample index.html file inside each public_html directory:

<html>
  <head>
    <title>www.ilovelinux.com</title>
  </head>
  <body>
    <h1>This is the main page of www.ilovelinux.com</h1>
  </body>
</html>

Finally, in CentOS and openSUSE add the following section at the bottom of /etc/httpd/conf/httpd.conf or /etc/apache2/httpd.conf, respectively, or just modify it if it’s already there.

<VirtualHost *:80>
     ServerAdmin admin@ilovelinux.com 
     DocumentRoot /var/www/html/ilovelinux.com/public_html
     ServerName www.ilovelinux.com
     ServerAlias www.ilovelinux.com ilovelinux.com
     ErrorLog /var/www/html/ilovelinux.com/error.log
     LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
     CustomLog /var/www/html/ilovelinux.com/access.log	myvhost
</VirtualHost>
<VirtualHost *:80>
     ServerAdmin admin@linuxrocks.org 
     DocumentRoot /var/www/html/linuxrocks.org/public_html
     ServerName www.linuxrocks.org
     ServerAlias www.linuxrocks.org linuxrocks.org
     ErrorLog /var/www/html/linuxrocks.org/error.log
     LogFormat "%v %l %u %t \"%r\" %>s %b" myvhost
     CustomLog /var/www/html/linuxrocks.org/access.log	myvhost
</VirtualHost>

Please note that you can also add each virtual host definition in separate files inside the /etc/httpd/conf.d directory. If you choose to do so, each configuration file must be named as follows:

/etc/httpd/conf.d/ilovelinux.com.conf
/etc/httpd/conf.d/linuxrocks.org.conf

In other words, you need to add .conf to the site or domain name.

In Ubuntu, each individual configuration file is named /etc/apache2/sites-available/[site name].conf. Each site is then enabled or disabled with the a2ensite or a2dissite commands, respectively, as follows.

# a2ensite ilovelinux.com.conf
# a2dissite ilovelinux.com.conf
# a2ensite linuxrocks.org.conf
# a2dissite linuxrocks.org.conf

The a2ensite and a2dissite commands create or remove symbolic links to the virtual host configuration files in the /etc/apache2/sites-enabled directory.

To be able to browse to both sites from another Linux box, you will need to add the following lines to the /etc/hosts file on that machine in order to point requests for those domains to a specific IP address.

[IP address of your web server]	www.ilovelinux.com
[IP address of your web server]	www.linuxrocks.org 

As a security measure, SELinux will not allow Apache to write logs to a directory other than the default /var/log/httpd.

You can either disable SELinux, or set the right security context:

# chcon system_u:object_r:httpd_log_t:s0 /var/www/html/xxxxxx/error.log

where xxxxxx is the directory inside /var/www/html where you have defined your Virtual Hosts.

After restarting Apache, you should see the following page at the above addresses:

Check Apache VirtualHosts

Installing and Configuring SSL with Apache

Finally, we will create and install a self-signed certificate to use with Apache. This kind of setup is acceptable in small environments, such as a private LAN.

However, if your server will expose content to the outside world over the Internet, you will want to install a certificate signed by a 3rd party to corroborate its authenticity. Either way, a certificate will allow you to encrypt the information that is transmitted to, from, or within your site.

In CentOS and openSUSE, you need to install the mod_ssl package.

# yum update && yum install mod_ssl 		[On CentOS]
# zypper refresh && zypper install mod_ssl	[On openSUSE]

Whereas in Ubuntu you’ll have to enable the ssl module for Apache.

# a2enmod ssl

The following steps are explained using a CentOS test server, but your setup should be almost identical in the other distributions (if you run into any kind of issues, don’t hesitate to leave your questions using the comments form).

Step 1 [Optional]: Create a directory to store your certificates.

# mkdir /etc/httpd/ssl-certs

Step 2: Generate your self signed certificate and the key that will protect it.

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/httpd/ssl-certs/apache.key -out /etc/httpd/ssl-certs/apache.crt

A brief explanation of the options listed above:

  1. req -x509 indicates we are creating an X.509 certificate.
  2. -nodes (NO DES) means “don’t encrypt the key”.
  3. -days 365 is the number of days the certificate will be valid for.
  4. -newkey rsa:2048 creates a 2048-bit RSA key.
  5. -keyout /etc/httpd/ssl-certs/apache.key is the absolute path of the RSA key.
  6. -out /etc/httpd/ssl-certs/apache.crt is the absolute path of the certificate.

Create Apache SSL Certificate

Step 3: Open your chosen virtual host configuration file (or its corresponding section in /etc/httpd/conf/httpd.conf as explained earlier) and add the following lines to a virtual host declaration listening on port 443.

SSLEngine on
SSLCertificateFile /etc/httpd/ssl-certs/apache.crt
SSLCertificateKeyFile /etc/httpd/ssl-certs/apache.key

Please note that you need to add,

NameVirtualHost *:443

at the top, right below

NameVirtualHost *:80

Both directives instruct Apache to listen on ports 443 and 80 of all network interfaces.

The following example is taken from /etc/httpd/conf/httpd.conf:

Apache VirtualHost Directives
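
For reference, a minimal sketch of such a *:443 virtual host declaration (paths match the certificate and key created in Step 2; adapt ServerName and DocumentRoot to your own site):

<VirtualHost *:443>
     ServerName www.ilovelinux.com
     DocumentRoot /var/www/html/ilovelinux.com/public_html
     SSLEngine on
     SSLCertificateFile /etc/httpd/ssl-certs/apache.crt
     SSLCertificateKeyFile /etc/httpd/ssl-certs/apache.key
</VirtualHost>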

Then restart Apache,

# service apache2 restart 			[sysvinit and upstart based systems]
# systemctl restart httpd.service 		[systemd-based systems]

And point your browser to https://www.ilovelinux.com. You will be presented with the following screen.

Check Apache SSL Certificate

Go ahead and click on “I understand the risks” and “Add exception”.

Apache Certificate Warning

Finally, check “Permanently store this exception” and click “Confirm Security Exception”.

Add SSL Certificate

And you will be redirected to your home page using https.

Apache HTTPS Enabled

Summary

In this post we have shown how to configure Apache and name-based virtual hosting with SSL to secure data transmission. If for some reason you ran into any issues, feel free to let us know using the comment form below. We will be more than glad to help you perform a successful set up.

Read Also

  1. Apache IP Based and Name Based Virtual Hosting
  2. Creating Apache Virtual Hosts with Enable/Disable Vhosts Options
  3. Monitor “Apache Web Server” Using “Apache GUI” Tool

Configuring Squid Proxy Server with Restricted Access and Setting Up Clients to Use Proxy – Part 5

Linux Foundation Certified Engineer is a skilled professional who has the expertise to install, manage, and troubleshoot network services in Linux systems, and is in charge of the design, implementation and ongoing maintenance of the system-wide architecture.

Configuring Squid Proxy Server

Linux Foundation Certified Engineer – Part 5

Introducing The Linux Foundation Certification Program.

In Part 1 of this series, we showed how to install squid, a proxy caching server for web clients. Please refer to that post (link given below) before proceeding if you haven’t installed squid on your system yet.

  1. Part 1 – Install Network Services and Configuring Auto Startup at Boot

In this article, we will show you how to configure the Squid proxy server in order to grant or restrict Internet access, and how to configure an http client, or web browser, to use that proxy server.

My Testing Environment Setup

Squid Server
Operating System :	Debian Wheezy 7.5
IP Address 	 :	192.168.0.15
Hostname	 :	dev2.gabrielcanepa.com.ar
Client Machine 1
Operating System :	Ubuntu 12.04
IP Address 	 :	192.168.0.104 
Hostname	 :	ubuntuOS.gabrielcanepa.com.ar
Client Machine 2
Operating System :	CentOS-7.0-1406
IP Address 	 :	192.168.0.17 
Hostname	 :	dev1.gabrielcanepa.com.ar

Let us remember that, in simple terms, a web proxy server is an intermediary between one (or more) client computers and a certain network resource, the most common being access to the Internet. In other words, the proxy server is connected on one side directly to the Internet (or to a router that is connected to the Internet) and on the other side to a network of client computers that will access the World Wide Web through it.

You may be wondering, why would I want to add yet another piece of software to my network infrastructure?

Here are the top 3 reasons:

1. Squid stores files from previous requests to speed up future transfers. For example, suppose client1 downloads CentOS-7.0-1406-x86_64-DVD.iso from the Internet. When client2 requests access to the same file, squid can transfer the file from its cache instead of downloading it again from the Internet. As you can guess, you can use this feature to speed up data transfers in a network of computers that require frequent updates of some kind.

2. ACLs (Access Control Lists) allow us to restrict the access to websites, and / or monitor the access on a per user basis. You can restrict access based on day of week or time of day, or domain, for example.

3. Bypassing web filters is made possible through the use of a web proxy to which requests are made and which returns requested content to a client, instead of having the client request it directly to the Internet.

For example, suppose you are logged on in client1 and want to access www.facebook.com through your company’s router. Since the site may be blocked by your company’s policies, you can instead connect to a web proxy server and have it request access to www.facebook.com. Remote content is then returned to you through the web proxy server again, bypassing your company’s router’s blocking policies.

Configuring Squid – The Basics

The access control scheme of the Squid web proxy server consists of two different components:

  1. The ACL elements are directive lines that begin with the word “acl” and represent types of tests that are performed against any request transaction.
  2. The access list rules consist of an allow or deny action followed by a number of ACL elements, and are used to indicate what action or limitation has to be enforced for a given request. They are checked in order, and list searching terminates as soon as one of the rules is a match. If a rule has multiple ACL elements, it is implemented as a boolean AND operation (all ACL elements of the rule must be a match in order for the rule to be a match).

Squid’s main configuration file is /etc/squid/squid.conf, which is ~5000 lines long since it includes both configuration directives and documentation. For that reason, we will create a new squid.conf file with only the lines that include configuration directives for our convenience, leaving out empty or commented lines. To do so, we will use the following commands.

# mv /etc/squid/squid.conf /etc/squid/squid.conf.bkp

And then,

# grep -Eiv '(^#|^$)' /etc/squid/squid.conf.bkp > /etc/squid/squid.conf

OR

# grep -ve ^# -ve ^$ /etc/squid/squid.conf.bkp > /etc/squid/squid.conf

Backup Squid Configuration File

Now, open the newly created squid.conf file, and look for (or add) the following ACL elements and access lists.

acl localhost src 127.0.0.1/32
acl localnet src 192.168.0.0/24

The two lines above represent a basic example of the usage of ACL elements.

  1. The first word, acl, indicates that this is an ACL element directive line.
  2. The second word, localhost or localnet, specifies a name for the directive.
  3. The third word, src in this case, is an ACL element type that is used to represent a client IP address or range of addresses. You can specify a single host by IP (or hostname, if you have some sort of DNS resolution implemented) or by network address.
  4. The fourth parameter is a filtering argument that is “fed” to the directive.

The two lines below are access list rules and represent an explicit implementation of the ACL directives mentioned earlier. In a few words, they indicate that http access should be granted if the request comes from the local network (localnet), or from localhost. What exactly are the allowed local network or localhost addresses? Those specified in the localhost and localnet directives.

http_access allow localnet
http_access allow localhost

Squid ACL Allow Access List

At this point you can restart Squid in order to apply any pending changes.

# service squid restart 		[Upstart / sysvinit-based distributions]
# systemctl restart squid.service 	[systemd-based distributions]

and then configure a client browser in the local network (192.168.0.104 in our case) to access the Internet through your proxy as follows.

In Firefox

1. Go to the Edit menu and choose the Preferences option.

2. Click on Advanced, then on the Network tab, and finally on Settings

3. Check Manual proxy configuration and enter the IP address of the proxy server and the port where it is listening for connections.

Configure Proxy in Firefox

Note that by default, Squid listens on port 3128, but you can override this behaviour by editing the directive that begins with http_port (by default it reads http_port 3128).

4. Click OK to apply the changes and you’re good to go.

Verifying that a Client is Accessing the Internet

You can now verify that your local network client is accessing the Internet through your proxy as follows.

1. In your client, open up a terminal and type,

# ip address show eth0 | grep -Ei '(inet.*eth0)'

That command will display the current IP address of your client (192.168.0.104 in the following image).

2. In your client, use a web browser to open any given web site (www.tecmint.com in this case).

3. In the server, run.

# tail -f /var/log/squid/access.log

and you’ll get a live view of requests being served through Squid.

Check Squid Proxy Browsing

Restricting Access By Client

Now suppose you want to explicitly deny access to that particular client IP address, while maintaining access for the rest of the local network.

1. Define a new ACL directive as follows (I’ve named it ubuntuOS but you can name it whatever you want).

acl ubuntuOS src 192.168.0.104

2. Add the ACL directive to the localnet access list that is already in place, but prefacing it with an exclamation sign. This means, “Allow Internet access to clients matching the localnet ACL directive except to the one that matches the ubuntuOS directive”.

http_access allow localnet !ubuntuOS

3. Now we need to restart Squid in order to apply changes. Then if we try to browse to any site we will find that access is denied now.

Block Internet Access

Configuring Squid – Fine Tuning

Restricting access by domain and / or by time of day / day of week

To restrict access to Squid by domain we will use the dstdomain keyword in an ACL directive, as follows.

acl forbidden dstdomain "/etc/squid/forbidden_domains"

Where forbidden_domains is a plain text file that contains the domains we want to deny access to.
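
For example, a hypothetical forbidden_domains file could simply list one domain per line:

facebook.com
twitter.com
youtube.com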

Block Access to Domains

Finally, we must grant access to Squid for requests not matching the directive above.

http_access allow localnet !forbidden

Or maybe we will want to allow access to those sites only during a certain time of the day (10:00 until 11:00 am) and only on Monday (M), Wednesday (W), and Friday (F).

acl someDays time MWF 10:00-11:00
http_access allow forbidden someDays
http_access deny forbidden

Otherwise, access to those domains will be blocked.

Restricting access by user authentication

Squid supports several authentication mechanisms (Basic, NTLM, Digest, SPNEGO, and OAuth) and helpers (SQL database, LDAP, NIS, NCSA, to name a few). In this tutorial we will use Basic authentication with NCSA.

Add the following lines to your /etc/squid/squid.conf file.

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic credentialsttl 30 minutes
auth_param basic casesensitive on
auth_param basic realm Squid proxy-caching web server for Tecmint's LFCE series
acl ncsa proxy_auth REQUIRED
http_access allow ncsa

Note: In CentOS 7, the NCSA plugin for squid can be found in /usr/lib64/squid/basic_ncsa_auth, so change the path accordingly in the line above.

Enable NCSA Authentication

A few clarifications:

  1. We need to tell Squid which authentication helper program to use with the auth_param directive by specifying the name of the program (most likely, /usr/lib/squid/ncsa_auth or /usr/lib64/squid/basic_ncsa_auth), plus any command line options (/etc/squid/passwd in this case) if necessary.
  2. The /etc/squid/passwd file is created through htpasswd, a tool to manage basic authentication through files. It will allow us to add a list of usernames (and their corresponding passwords) that will be allowed to use Squid.
  3. credentialsttl 30 minutes will require entering your username and password every 30 minutes (you can specify this time interval with hours as well).
  4. casesensitive on indicates that usernames and passwords are case sensitive.
  5. realm represents the text of the authentication dialog that will be used to authenticate to squid.
  6. Finally, access is granted only when proxy authentication (proxy_auth REQUIRED) succeeds.

Run the following command to create the file and to add credentials for user gacanepa (omit the -c flag if the file already exists).

# htpasswd -c /etc/squid/passwd gacanepa

Restrict Squid Access to Users

Open a web browser in the client machine and try to browse to any given site.

Enable Squid Authentication

If authentication succeeds, access is granted to the requested resource. Otherwise, access will be denied.

Using Cache to Speed Up Data Transfer

One of Squid’s distinguishing features is the possibility of caching resources requested from the web to disk in order to speed up future requests of those objects either by the same client or others.

Add the following directives in your squid.conf file.

cache_dir ufs /var/cache/squid 1000 16 256
maximum_object_size 100 MB
refresh_pattern .*\.(mp4|iso) 2880 0% 2880

A few clarifications of the above directives.

  1. ufs is the Squid storage format.
  2. /var/cache/squid is a top-level directory where cache files will be stored. This directory must exist and be writeable by Squid (Squid will NOT create this directory for you).
  3. 1000 is the amount (in MB) to use under this directory.
  4. 16 is the number of 1st-level subdirectories, whereas 256 is the number of 2nd-level subdirectories within /var/cache/squid.
  5. The maximum_object_size directive specifies the maximum size of allowed objects in the cache.
  6. refresh_pattern tells Squid how to deal with specific file types (.mp4 and .iso in this case) and for how long it should store the requested objects in cache (2880 minutes = 2 days).

The first and second 2880 are lower and upper limits, respectively, on how long objects without an explicit expiry time will be considered recent, and thus will be served by the cache, whereas 0% is the percentage of the objects’ age (time since last modification) that each object without explicit expiry time will be considered recent.
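
Before restarting Squid, you may also need to create the cache directory and initialize the cache structure (a quick sketch; on Debian-based systems Squid typically runs as the proxy user, on CentOS as squid, so adjust the ownership accordingly):

# mkdir /var/cache/squid
# chown proxy:proxy /var/cache/squid
# squid -z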

Case study: downloading a .mp4 file from 2 different clients and testing the cache

First client (IP 192.168.0.104) downloads a 71 MB .mp4 file in 2 minutes and 52 seconds.

Enable Caching on Squid

Second client (IP 192.168.0.17) downloads the same file in 1.4 seconds!

Verify Squid Caching

That is because the file was served from the Squid cache (indicated by TCP_HIT/200) in the second case, as opposed to the first instance, when it was downloaded directly from the Internet (represented by TCP_MISS/200).

The HIT and MISS keywords, along with the 200 http response code, indicate that the file was served successfully both times, but the cache was hit and missed, respectively. When a request cannot be served by the cache for some reason, Squid attempts to serve it from the Internet.

Squid HTTP Codes

Conclusion

In this article we have discussed how to set up a Squid web caching proxy. You can use the proxy server to filter contents using a chosen criteria, and also to reduce latency (since identical incoming requests are served from the cache, which is closer to the client than the web server that is actually serving the content, resulting in faster data transfers) and network traffic as well (reducing the amount of used bandwidth, which saves you money if you’re paying for traffic).

You may want to refer to the Squid web site for further documentation (make sure to also check the wiki), but do not hesitate to contact us if you have any questions or comments. We will be more than glad to hear from you!

Configuring SquidGuard, Enabling Content Rules and Analyzing Squid Logs – Part 6

LFCE (Linux Foundation Certified Engineer) is a professional who has the necessary skills to install, manage, and troubleshoot network services in Linux systems, and is in charge of the design, implementation and ongoing maintenance of the system architecture in its entirety.

Configure SquidGuard for Squid

Linux Foundation Certified Engineer – Part 6

Introducing The Linux Foundation Certification Program.

In previous posts we discussed how to install Squid + squidGuard and how to configure squid to properly handle or restrict access requests. Please make sure you go over those two tutorials and install both Squid and squidGuard before proceeding as they set the background and the context for what we will cover in this post: integrating squidguard in a working squid environment to implement blacklist rules and content control over the proxy server.

Requirements

  1. Install Squid and SquidGuard – Part 1
  2. Configuring Squid Proxy Server with Restricted Access – Part 5

What Can and Can’t I Use SquidGuard For?

Though squidGuard will certainly boost and enhance Squid’s features, it is important to highlight what it can and what it cannot do.

squidGuard can be used to:

  1. limit the allowed web access for some users to a list of accepted/well known web servers and/or URLs only, while denying access to other blacklisted web servers and/or URLs.
  2. block access to sites (by IP address or domain name) matching a list of regular expressions or words for some users.
  3. require the use of domain names/prohibit the use of IP addresses in URLs.
  4. redirect blocked URLs to error or info pages.
  5. use distinct access rules based on time of day, day of the week, date etc.
  6. implement different rules for distinct user groups.

However, neither squidGuard nor Squid can be used to:

  1. analyze text inside documents and act as a result.
  2. detect or block embedded scripting languages like JavaScript, Python, or VBScript inside HTML code.

BlackLists – The Basics

Blacklists are an essential part of squidGuard. Basically, they are plain text files that will allow you to implement content filters based on specific keywords. There are both freely available and commercial blacklists, and you can find the download links in the squidguard blacklists project’s website.

In this tutorial I will show you how to integrate the blacklists provided by Shalla Secure Services to your squidGuard installation. These blacklists are free for personal / non-commercial use and are updated on a daily basis. They include, as of today, over 1,700,000 entries.

For our convenience, let’s create a directory to download the blacklist package.

# mkdir /opt/3rdparty
# cd /opt/3rdparty 
# wget http://www.shallalist.de/Downloads/shallalist.tar.gz

The latest download link is always available as highlighted below.

Download Squidguard Blacklist

After untarring the newly downloaded file, we will browse to the blacklist (BL) folder.

# tar xzf shallalist.tar.gz 
# cd BL
# ls

Squidguard Blacklist Domains

You can think of the directories shown in the output of ls as blacklist categories, and their corresponding (optional) subdirectories as subcategories, descending all the way down to specific URLs and domains, which are listed in the files urls and domains, respectively. Refer to the image below for further details.

SquidGuard Blacklist Urls Domains

Installing Blacklists

Installation of the whole blacklist package, or of individual categories, is performed by copying the BL directory, or one of its subdirectories, respectively, to the /var/lib/squidguard/db directory.

Of course you could have downloaded the blacklist tarball to this directory in the first place, but the approach explained earlier gives you more control over what categories should be blocked (or not) at a specific time.

Next, I will show you how to install the anonvpn, hacking, and chat blacklists and how to configure squidGuard to use them.

Step 1: Copy recursively the anonvpn, hacking, and chat directories from /opt/3rdparty/BL to /var/lib/squidguard/db.

# cp -a /opt/3rdparty/BL/anonvpn /var/lib/squidguard/db
# cp -a /opt/3rdparty/BL/hacking /var/lib/squidguard/db
# cp -a /opt/3rdparty/BL/chat /var/lib/squidguard/db

Step 2: Use the domains and urls files to create squidguard’s database files. Please note that the following command will work for creating .db files for all the installed blacklists – even when a certain category has 2 or more subcategories.

# squidGuard -C all

Step 3: Change the ownership of the /var/lib/squidguard/db/ directory and its contents to the proxy user so that Squid can read the database files.

# chown -R proxy:proxy /var/lib/squidguard/db/

Step 4: Configure Squid to use squidGuard. We will use Squid’s url_rewrite_program directive in /etc/squid/squid.conf to tell Squid to use squidGuard as a URL rewriter / redirector.

Add the following line to squid.conf, making sure that /usr/bin/squidGuard is the right absolute path in your case.

# which squidGuard
# echo "url_rewrite_program $(which squidGuard)" >> /etc/squid/squid.conf
# tail -n 1 /etc/squid/squid.conf

Configure Squid to use SquidGuard

Step 5: Add the necessary directives to squidGuard’s configuration file (located in /etc/squidguard/squidGuard.conf).

Refer to the following code for further clarification.

src localnet {
        ip      192.168.0.0/24
}

dest anonvpn {
        domainlist      anonvpn/domains
        urllist         anonvpn/urls
}
dest hacking {
        domainlist      hacking/domains
        urllist         hacking/urls
}
dest chat {
        domainlist      chat/domains
        urllist         chat/urls
}

acl {
        localnet {
                        pass     !anonvpn !hacking !chat !in-addr all
                        redirect http://www.lds.org
                }
        default {
                        pass     local none
        }
}

Step 6: Restart Squid and test.

# service squid restart 		[sysvinit / Upstart-based systems]
# systemctl restart squid.service 	[systemctl-based systems]

Open a web browser on a client within the local network and browse to a site found in any of the blacklist files (domains or urls – we will use http://spin.de/, which is listed in the chat category, in the following example) and you will be redirected to another URL, www.lds.org in this case.

You can verify that the request was made to the proxy server but was denied (301 http response – Moved Permanently) and was redirected to www.lds.org instead.

Analyze Squid Logs

Removing Restrictions

If for some reason you need to enable a category that has been blocked in the past, remove the corresponding directory from /var/lib/squidguard/db and comment (or delete) the related acl in the squidguard.conf file.

For example, if you want to enable the domains and urls blacklisted by the anonvpn category, you would need to perform the following steps.

# rm -rf /var/lib/squidguard/db/anonvpn

And edit the squidguard.conf file as follows.

Remove Domains from Squid Blacklist

Please note that parts highlighted in yellow under BEFORE have been deleted in AFTER.

Whitelisting Specific Domains and URLs

On occasions you may want to allow certain URLs or domains, but not an entire blacklisted directory. In that case, you should create a directory named myWhiteLists (or whatever name you choose) and insert the desired URLs and domains under /var/lib/squidguard/db/myWhiteLists in files named urls and domains, respectively.

Then, initialize the new content rules as before,

# squidGuard -C all

and modify the squidguard.conf as follows.

Remove Domains Urls in Squid Blacklist

As before, the parts highlighted in yellow indicate the changes that need to be added. Note that the myWhiteLists string needs to be first in the row that starts with pass.
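
For reference, a minimal sketch of the corresponding squidGuard.conf additions (following the myWhiteLists name used above and the acl block shown earlier in this article):

dest myWhiteLists {
        domainlist      myWhiteLists/domains
        urllist         myWhiteLists/urls
}

acl {
        localnet {
                        pass     myWhiteLists !anonvpn !hacking !chat !in-addr all
                        redirect http://www.lds.org
                }
        default {
                        pass     local none
        }
}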

Finally, remember to restart Squid in order to apply changes.

Conclusion

After following the steps outlined in this tutorial you should have a powerful content filter and URL redirector working hand in hand with your Squid proxy. If you experience any issues during your installation / configuration process or have any questions or comments, you may want to refer to squidGuard’s web documentation but always feel free to drop us a line using the form below and we will get back to you as soon as possible.

Setting Up Email Services (SMTP, Imap and Imaps) and Restricting Access to SMTP – Part 7

LFCE (Linux Foundation Certified Engineer) is a trained professional who has the skills to install, manage, and troubleshoot network services in Linux systems, and is in charge of the design, implementation and ongoing maintenance of the system architecture and user administration.

Setting Up Postfix Mail Server

Linux Foundation Certified Engineer – Part 7

Introducing The Linux Foundation Certification Program.

In a previous tutorial we discussed how to install the necessary components of a mail service. If you haven’t installed Postfix and Dovecot yet, please refer to Part 1 of this series for instructions to do so before proceeding.

Requirement

  1. Install Postfix Mail Server and Dovecot – Part 1

In this post, I will show you how to configure your mail server and how to perform the following tasks:

  1. Configure email aliases
  2. Configure an IMAP and IMAPS service
  3. Configure an smtp service
  4. Restrict access to an smtp server

Note that our setup will only cover a mail server for a local area network where the machines belong to the same domain. Sending email messages to other domains requires a more complex setup, including domain name resolution capabilities, which is out of the scope of the LFCE certification.

But first off, let’s start with a few definitions.

Components Of a Mail Sending, Transport and Delivery Process

The following image illustrates the process of email transport starting with the sender until the message reaches the recipient’s inbox:

Process of Email Transport

To make this possible, several things happen behind the scenes. In order for an email message to be delivered from a client application (such as Thunderbird, Outlook, or webmail services such as Gmail or Yahoo! Mail) to the sender’s mail server, and from there to the destination server and finally to its intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place on each server.

When talking about email services, you will find the following terms mentioned very often:

Message Transport Agent – MTA

MTA (short for Mail or Message Transport Agent), aka mail relay, is the software in charge of transferring email messages between mail servers. In this series, Postfix acts as our MTA.

Mail User Agent – MUA

MUA, or Mail User Agent, is a computer program used to access and manage the user’s email inboxes. Examples of MUAs include, but are not limited to, Thunderbird, Outlook, and webmail interfaces such as Gmail, Outlook.com, to name a few. In this series, we will use Thunderbird in our examples.

Mail Delivery Agent

MDA (short for Message or Mail Delivery Agent) is the software component that actually delivers email messages to users’ inboxes. In this tutorial, we will use Dovecot as our MDA. Dovecot will also handle user authentication.

Simple Mail Transfer Protocol – SMTP

In order for these components to be able to “talk” to each other, they must “speak” the same “language” (or protocol), namely SMTP (Simple Mail Transfer Protocol) as defined in the RFC 2821. Most likely, you will have to refer to that RFC while setting up your mail server environment.

Other protocols that we need to take into account are IMAP4 (Internet Message Access Protocol), which allows users to manage email messages directly on the server without downloading them to the client’s hard drive, and POP3 (Post Office Protocol), which allows users to download the messages and folders to their computer.

Our Testing Environment

Our testing environment is as follows:

Mail Server Setup
Mail Server OS	: 	Debian Wheezy 7.5 
IP Address	:	192.168.0.15
Local Domain	:	example.com.ar
User Aliases	:	sysadmin@example.com.ar is aliased to gacanepa@example.com.ar and jdoe@example.com.ar
Client Machine Setup
Mail Client OS	: 	Ubuntu 12.04
IP Address	:	192.168.0.103

On our client, we have set up elementary DNS resolution by adding the following line to the /etc/hosts file.

192.168.0.15 example.com.ar mailserver

Adding Email Aliases

By default, a message sent to a specific user should be delivered to that user only. However, if you also want to deliver it to a group of users, or to a different user, you can create a mail alias or use one of the existing ones in /etc/postfix/aliases, following this syntax:

user1: user1, user2

Thus, emails sent to user1 will also be delivered to user2. Note that if you omit the word user1 after the colon, as in

user1: user2

the messages sent to user1 will only be sent to user2, and not to user1.

In the above example, user1 and user2 should already exist on the system. You may want to refer to Part 8 of the LFCS series if you need to refresh your memory before adding new users.

  1. How to Add and Manage Users/Groups in Linux
  2. 15 Commands to Add Users in Linux

In our specific case, we will use the following alias as explained before (add the following line in /etc/aliases).

sysadmin: gacanepa, jdoe

And run the following command to create or refresh the aliases lookup table.

# postalias /etc/postfix/aliases

Messages sent to sysadmin@example.com.ar will then be delivered to the inboxes of the users listed above.

Configuring Postfix – The SMTP Service

The main configuration file for Postfix is /etc/postfix/main.cf. You only need to set up a few parameters before being able to use the mail service. However, you should become acquainted with the full configuration parameters (which can be listed with man 5 postconf) in order to set up a secure and fully customized mail server.

Note that this tutorial is only supposed to get you started in that process and does not represent a comprehensive guide on email services with Linux.

Open the /etc/postfix/main.cf file with your editor of choice and make the following changes as explained.

# vi /etc/postfix/main.cf

1. myorigin specifies the domain that appears in messages sent from the server. You may see the /etc/mailname file used with this parameter. Feel free to edit it if needed.

myorigin = /etc/mailname

Configure Myorigin in Postfix

If the value above is used, mails will be sent as user@example.com.ar, where user is the user sending the message.

2. mydestination lists the domains for which this machine will deliver email messages locally, instead of forwarding them to another machine (acting as a relay system). The default settings will suffice in our case (make sure to edit the file to suit your environment).

Configure Mydestination

The /etc/postfix/transport file defines the relationship between domains and the next server to which mail messages should be forwarded. In our case, since we will be delivering messages to our local area network only (thus bypassing any external DNS resolution), the following configuration will suffice.

example.com.ar    local:
.example.com.ar    local:

Next, we need to convert this plain text file to the .db format, which creates the lookup table that Postfix will actually use to know what to do with incoming and outgoing mail.

# postmap /etc/postfix/transport

You will need to remember to recreate this table if you add more entries to the corresponding text file.

3. mynetworks defines the authorized networks Postfix will forward messages from. The default value, subnet, tells Postfix to forward mail from SMTP clients in the same IP subnetworks as the local machine only.

mynetworks = subnet

Configure Mynetworks in Postfix

4. relay_domains specifies the destinations to which emails should be sent. We will leave the default value untouched, which points to mydestination. Remember that we are setting up a mail server for our LAN.

relay_domains = $mydestination

Note that you can use $mydestination instead of listing the actual contents.

Configure Relay Domains in Postfix

5. inet_interfaces defines which network interfaces the mail service should listen on. The default, all, tells Postfix to use all network interfaces.

inet_interfaces = all

Configure Network Interfaces in Postfix

6. Finally, mailbox_size_limit and message_size_limit will be used to set the size of each user’s mailbox and the maximum allowed size of individual messages, respectively, in bytes.

mailbox_size_limit = 51200000
message_size_limit = 5120000

Restricting Access to the SMTP Server

The Postfix SMTP server can apply certain restrictions to each client connection request. Not all clients should be allowed to identify themselves to the mail server using the SMTP HELO command, and certainly not all of them should be granted access to send or receive messages.

To implement these restrictions, we will use the following directives in the main.cf file. Though they are self-explanatory, comments have been added for clarification purposes.

# Require that a remote SMTP client introduces itself with the HELO or EHLO command before sending the MAIL command or other commands that require EHLO negotiation.
smtpd_helo_required = yes

# Permit the request when the client IP address matches any network or network address listed in $mynetworks
# Reject the request when the client HELO and EHLO command has a bad hostname syntax
smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname

# Reject the request when Postfix does not represent the final destination for the sender address
smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain

# Reject the request unless 1) Postfix is acting as mail forwarder or 2) is the final destination
smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination

The Postfix configuration parameters (postconf) page may come in handy to further explore the available options.

Configuring Dovecot

Right after installation, Dovecot supports the POP3 and IMAP protocols out-of-the-box, along with their secure versions, POP3S and IMAPS, respectively.

Add the following lines in /etc/dovecot/conf.d/10-mail.conf file.

# %u represents the user account that logs in
# Mailboxes are in mbox format
mail_location = mbox:~/mail:INBOX=/var/mail/%u
# Directory owned by the mail group and the directory set to group-writable (mode=0770, group=mail)
# You may need to change this setting if postfix is running a different user / group on your system
mail_privileged_group = mail

If you check your home directory, you will notice there is a mail subdirectory with the following contents.

Configure Dovecot for Postfix

Also, please note that /var/mail/%u is where the user’s mails are stored on most systems.

Add the following directive to /etc/dovecot/dovecot.conf (note that imap and pop3 imply imaps and pop3s as well).

protocols = imap pop3

And make sure /etc/dovecot/conf.d/10-ssl.conf includes the following lines (otherwise, add them).

ssl_cert = </etc/dovecot/dovecot.pem
ssl_key = </etc/dovecot/private/dovecot.pem

Now let’s restart Dovecot and verify that it listens on the ports related to imap, imaps, pop3, and pop3s.

# netstat -npltu | grep dovecot

Check Listening Ports

Setting Up a Mail Client and Sending/Receiving Mails

On our client computer, we will open Thunderbird and click on File → New → Existing mail account. We will be prompted to enter the name of the account and the associated email address, along with its password. When we click Continue, Thunderbird will then try to connect to the mail server in order to verify settings.

Configure Mail Client

Repeat the process above for the next account (gacanepa@example.com.ar) and the following two inboxes should appear in Thunderbird’s left pane.

User Mail Inbox

On our server, we will write an email message to sysadmin, which is aliased to jdoe and gacanepa.

Send Mail from Commandline

The mail log (/var/log/mail.log) indicates that the email that was sent to sysadmin was relayed to jdoe@example.com.ar and gacanepa@example.com.ar, as can be seen in the following image.

Check Mail Status Delivery

We can verify if the mail was actually delivered to our client, where the IMAP accounts were configured in Thunderbird.

Verify Email Messages

Finally, let’s try to send a message from jdoe@example.com.ar to gacanepa@example.com.ar.

Send Message to User

In the exam you will be asked to work exclusively with command-line utilities. This means you will not be able to install a desktop client application such as Thunderbird, but will be required to use mail instead. We have used Thunderbird in this chapter for illustrative purposes only.
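
For example, a quick test with the mail utility (provided by packages such as mailutils or mailx, depending on the distribution) could look like this:

# echo "Test message body" | mail -s "Test subject" sysadmin@example.com.ar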

Conclusion

In this post we have explained how to set up an IMAP mail server for your local area network and how to restrict access to the SMTP server. If you happen to run into an issue while implementing a similar setup in your testing environment, you will want to check the online documentation of Postfix and Dovecot (especially the pages about the main configuration files, /etc/postfix/main.cf and /etc/dovecot/dovecot.conf, respectively), but in any case do not hesitate to contact me using the comment form below. I will be more than glad to help you.

How To Setup an Iptables Firewall to Enable Remote Access to Services in Linux – Part 8

Configure Linux Iptables Firewall

Linux Foundation Certified Engineer – Part 8

Introducing The Linux Foundation Certification Program

You will recall from Part 1 – About Iptables of this LFCE (Linux Foundation Certified Engineer) series that we gave a basic description of what a firewall is: a mechanism to manage packets coming into and leaving the network. By “manage” we actually mean:

  1. To allow or prevent certain packets to enter or leave our network.
  2. To forward other packets from one point of the network to another.

based on predetermined criteria.

In this article we will discuss how to implement basic packet filtering and how to configure the firewall with iptables, a frontend to netfilter, which is a native kernel module used for firewalling.

Please note that firewalling is a vast subject and this article is not intended to be a comprehensive guide to understanding all that there is to know about it, but rather as a starting point for a deeper study of this topic. However, we will revisit the subject in Part 10 of this series when we explore a few specific use cases of a firewall in Linux.

You can think of a firewall as an international airport where passenger planes come and go almost 24/7. Based on a number of conditions, such as the validity of a person’s passport, or his / her country of origin (to name a few examples) he or she may, or may not, be allowed to enter or leave a certain country.

At the same time, airport officers can instruct people to move from one place of the airport to another if necessary, for example when they need to go through Customs Services.

We may find the airport analogy useful during the rest of this tutorial. Just keep in mind the following relations as we proceed:

  1. Persons = Packets
  2. Firewall = Airport
  3. Country #1 = Network #1
  4. Country #2 = Network #2
  5. Airport regulations enforced by officers = firewall rules

Iptables – The Basics

At the low level, it is the kernel itself which “decides” what to do with packets based on rules grouped in chains. These chains define what actions should be taken when a packet matches the criteria specified by them.

The first action taken by iptables consists of deciding what to do with a packet:

  1. Accept it (let it go through into our network)?
  2. Reject it (prevent it from accessing our network)?
  3. Forward it (to another chain)?

Just in case you were wondering why this tool is called iptables, it’s because these chains are organized in tables, with the filter table being the best known and the one that is used to implement packet filtering with its three default chains:

1. The INPUT chain handles packets coming into the network, which are destined for local programs.

2. The OUTPUT chain is used to analyze packets originated in the local network, which are to be sent to the outside.

3. The FORWARD chain processes the packets which should be forwarded to another destination (as in the case of a router).

For each of these chains there is a default policy, which dictates what should be done by default when packets do not match any of the rules in the chain. You can view the rules created for each chain and the default policy by running the following command:

# iptables -L

The available policies are as follows:

  1. ACCEPT → lets the packet through. Any packet that does not match any rules in the chain is allowed into the network.
  2. DROP → drops the packet quietly. Any packet that does not match any rules in the chain is prevented from entering the network.
  3. REJECT → rejects the packet and returns an informative message. This one in particular does not work as a default policy. Instead, it is meant to complement packet filtering rules.

Linux Iptables Policies

When it comes to deciding which policy you will implement, you need to consider the pros and cons of each approach as explained above – note that there is no one-size-fits-all solution.

Adding Rules

To add a rule to the firewall, invoke the iptables command as follows:

# iptables -A chain_name criteria -j target

where,

  1. -A stands for Append (append the current rule to the end of the chain).
  2. chain_name is either INPUT, OUTPUT, or FORWARD.
  3. target is the action, or policy, to apply in this case (ACCEPT, REJECT, or DROP).
  4. criteria is the set of conditions against which the packets are to be examined. It is composed of at least one (most likely more) of the following flags. Options inside brackets, separated by a vertical bar, are equivalent to each other. The rest represents optional switches:
[--protocol | -p] protocol: specifies the protocol involved in a rule.
[--source-port | --sport] port:[port]: defines the port (or range of ports) where the packet originated.
[--destination-port | --dport] port:[port]: defines the port (or range of ports) to which the packet is destined.
[--source | -s] address[/mask]: represents the source address or network/mask.
[--destination | -d] address[/mask]: represents the destination address or network/mask.
[--state] state (preceded by -m state): manages packets depending on their connection state, where state can be NEW, ESTABLISHED, RELATED, or INVALID.
[--in-interface | -i] interface: specifies the input interface of the packet.
[--out-interface | -o] interface: the output interface.
[--jump | -j] target: what to do when the packet matches the rule.
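
For example, here is a sketch that combines several of these flags into a single rule (the interface name eth0 and the source network are assumptions for illustration only):

# iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/24 -i eth0 -m state --state NEW,ESTABLISHED -j ACCEPT

This would accept new and established SSH connections arriving on eth0 from the 192.168.0.0/24 network.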

Our Testing Environment

Let’s put all of that together in three classic examples, using the following test environment for the first two:

Firewall: Debian Wheezy 7.5 
Hostname: dev2.gabrielcanepa.com
IP Address: 192.168.0.15
Source: CentOS 7 
Hostname: dev1.gabrielcanepa.com
IP Address: 192.168.0.17

And this for the last example

NFSv4 server and firewall: Debian Wheezy 7.5 
Hostname: debian
IP Address: 192.168.0.10
Source: Debian Wheezy 7.5 
Hostname: dev2.gabrielcanepa.com
IP Address: 192.168.0.15
EXAMPLE 1: Analyzing the difference between the DROP and REJECT policies

We will first add a DROP rule for incoming pings to our firewall. That is, icmp packets will be dropped quietly.

# ping -c 3 192.168.0.15
# iptables -A INPUT --protocol icmp --in-interface eth0 -j DROP

Linux Iptables Block ICMP Ping

Drop ICMP Ping Request

Before proceeding with the REJECT part, we will flush all rules from the INPUT chain to make sure our packets will be tested by this new rule:

# iptables -F INPUT
# iptables -A INPUT --protocol icmp --in-interface eth0 -j REJECT
# ping -c 3 192.168.0.15

Reject ICMP Ping Request in Linux

Reject ICMP Ping Request in Firewall

EXAMPLE 2: Disabling / re-enabling ssh logins from dev2 to dev1

We will be dealing with the OUTPUT chain as we’re handling outgoing traffic:

# iptables -A OUTPUT --protocol tcp --destination-port 22 --out-interface eth0 --jump REJECT

Block SSH Login in Linux Firewall

Block SSH Login in Firewall
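
To re-enable ssh logins later, one option (a sketch) is to delete the rule by repeating the exact same specification with the -D switch:

# iptables -D OUTPUT --protocol tcp --destination-port 22 --out-interface eth0 --jump REJECT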

EXAMPLE 3: Allowing / preventing NFS clients (from 192.168.0.0/24) to mount NFSv4 shares

Run the following commands in the NFSv4 server / firewall to close ports 2049 and 111 for all kinds of traffic:

# iptables -F
# iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2049 -j REJECT
# iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 111 -j REJECT

Block NFS Port in Linux Firewall

Block NFS Ports in Firewall

Now let’s open those ports and see what happens.

# iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2049 -j ACCEPT

Open NFS Ports in Firewall

As you can see, we were able to mount the NFSv4 share after opening the traffic.

Inserting, Appending and Deleting Rules

In the previous examples we showed how to append rules to the INPUT and OUTPUT chains. Should we want to insert them at a predefined position instead, we should use the -I (uppercase i) switch.

You need to remember that rules are evaluated one after another, and that the evaluation stops (or jumps) when a rule with a DROP or ACCEPT target is matched. For that reason, you may find it necessary to move rules up or down in the chain list as needed.
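
To view the current position of each rule in a chain, a handy sketch:

# iptables -L INPUT -n --line-numbers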

We will use a trivial example to demonstrate this:

Check Linux Iptables Rules

Check Rules of Iptables Firewall

Let’s insert the following rule at position 2 in the INPUT chain (thus moving the previous #2 to #3):

# iptables -I INPUT 2 -p tcp --dport 80 -j ACCEPT

Linux Iptables Accept Rule

Iptables Accept Rule

Using the setup above, traffic will be checked to see whether it’s directed to port 80 before checking for port 2049.

Alternatively, you can delete a rule and change the target of the remaining rules to REJECT (using the -R switch):

# iptables -D INPUT 1
# iptables -nL -v --line-numbers
# iptables -R INPUT 2 -i eth0 -s 0/0 -p tcp --dport 2049 -j REJECT
# iptables -R INPUT 1 -p tcp --dport 80 -j REJECT

Linux Iptables Drop Rule

Iptables Drop Rule

Last, but not least, remember that for the firewall rules to be persistent, you need to save them to a file and then restore them automatically upon boot (using the method of your choice or the one that is available for your distribution).

Saving firewall rules:

# iptables-save > /etc/iptables/rules.v4		[On Ubuntu]
# iptables-save > /etc/sysconfig/iptables		[On CentOS / OpenSUSE]

Restoring rules:

# iptables-restore < /etc/iptables/rules.v4		[On Ubuntu]
# iptables-restore < /etc/sysconfig/iptables		[On CentOS / OpenSUSE]

Here we can see a similar procedure (saving and restoring firewall rules by hand) using a dummy file called iptables.dump instead of the default one as shown above.

# iptables-save > iptables.dump

Save Iptables Rules in Linux

Dump Linux Iptables

To make these changes persistent across boots:

Ubuntu: Install the iptables-persistent package, which will load the rules saved in the /etc/iptables/rules.v4 file.

# apt-get install iptables-persistent

CentOS: Add the following 2 lines to /etc/sysconfig/iptables-config file.

IPTABLES_SAVE_ON_STOP="yes"
IPTABLES_SAVE_ON_RESTART="yes"

OpenSUSE: List allowed ports, protocols, addresses, and so forth (separated by commas) in /etc/sysconfig/SuSEfirewall2.

For more information refer to the file itself, which is heavily commented.

Conclusion

The examples provided in this article, while not covering all the bells and whistles of iptables, serve the purpose of illustrating how to enable and disable incoming or outgoing traffic.

For those of you who are firewall fans, keep in mind that we will revisit this topic with more specific applications in Part 10 of this LFCE series.

Feel free to let me know if you have any questions or comments.

How to Monitor System Usage, Outages and Troubleshoot Linux Servers – Part 9

Although Linux is very reliable, wise system administrators should find a way to keep an eye on the system’s behaviour and utilization at all times. Ensuring an uptime as close to 100% as possible and the availability of resources are critical needs in many environments. Examining the past and current status of the system will allow us to foresee and most likely prevent possible issues.

Monitor Linux Servers

Linux Foundation Certified Engineer – Part 9

Introducing The Linux Foundation Certification Program

In this article we will present a list of a few tools that are available in most upstream distributions to check on the system status, analyze outages, and troubleshoot ongoing issues. Specifically, of the myriad of available data, we will focus on CPU, storage space and memory utilization, basic process management, and log analysis.

Storage Space Utilization

There are 2 well-known commands in Linux that are used to inspect storage space usage: df and du.

The first one, df (which stands for disk free), is typically used to report overall disk space usage by file system.

Example 1: Reporting disk space usage in bytes and human-readable format

Without options, df reports disk space usage in 1-K blocks. With the -h flag it will display the same information using MB or GB instead. Note that this report also includes the total size of each file system, the used and available space, and the mount point of each storage device.

# df
# df -h

Monitor Linux Disk Space Utilization

Disk Space Utilization

That’s certainly nice – but there’s another limitation that can render a file system unusable, and that is running out of inodes. Every file in a file system is mapped to an inode that contains its metadata.

Example 2: Inspecting inode usage by file system in human-readable format

# df -hTi

With this command you can see the number of used and available inodes:

Monitor Linux Inode Disk Usage

Inode Disk Usage

According to the above image, there are 146 used inodes (1%) in /home, which means that you can still create 226K files in that file system.

Example 3: Finding and / or deleting empty files and directories

Note that you can run out of storage space long before running out of inodes, and vice-versa. For that reason, you need to monitor not only the storage space utilization but also the number of inodes used by file system.

Use the following commands to find empty files or directories (which occupy 0B) that are using inodes without a reason:

# find  /home -type f -empty
# find  /home -type d -empty

You can add the -delete flag at the end of each command if you want to delete those empty files and directories:

# find  /home -type f -empty -delete
# find  /home -type d -empty -delete

Find all Empty Files in Linux

Find and Delete Empty Files in Linux

The previous procedure deleted 4 files. Let’s check the number of used / available inodes in /home again:

# df -hTi | grep home

Check Linux Inode Usage

As you can see, there are 142 used inodes now (4 less than before).

Example 4: Examining disk usage by directory

If the use of a certain file system is above a predefined percentage, you can use du (short for disk usage) to find out which files are occupying the most space.

The example is given for /var, which, as you can see in the first image above, is at 67% usage.

# du -sch /var/*

Find Size of all Directories in Linux

Check Disk Space Usage by Directory

Note that you can switch into any of the above subdirectories to find out exactly what’s in them and how much space each item occupies. You can then use that information to either delete some files if they are not needed or extend the size of the logical volume if necessary.

Read Also

  1. 12 Useful “df” Commands to Check Disk Space
  2. 10 Useful “du” Commands to Find Disk Usage of Files and Directories

Memory and CPU Utilization

The classic tool in Linux used to perform an overall check of CPU / memory utilization and process management is the top command, which displays a real-time view of a running system. There are other tools that could be used for the same purpose, such as htop, but I have settled on top because it is installed out-of-the-box in virtually any Linux distribution.

Example 5: Displaying a live status of your system with top

To start top, simply type the following command in your command line, and hit Enter.

# top

Let’s examine a typical top output:

Show All Running Processes in Linux

List All Running Processes in Linux

In rows 1 through 5 the following information is displayed:

1. The current time (8:41:32 pm), the uptime (7 hours and 41 minutes), the number of users logged on to the system (only one here), and the load average during the last 1, 5, and 15 minutes, respectively. The values 0.00, 0.01, and 0.05 indicate how many processes, on average, were running or waiting for the CPU over each of those intervals – 0.00 means the CPU was essentially idle. On a single-CPU system, a load average below 1.00 (0.65, for example) means the CPU was idle roughly 35% of that interval, whereas values above 1.00 indicate that processes had to queue for CPU time.

2. Currently there are 121 processes (the complete listing starts at row 6). Only 1 of them is running (top in this case, as you can see in the %CPU column) and the remaining 120 are “sleeping” in the background and will remain in that state until they are needed. You can verify this by opening a mysql prompt and executing a couple of queries: you will notice how the number of running processes increases.

Alternatively, you can open a web browser and navigate to any given page that is being served by Apache and you will get the same result. Of course, these examples assume that both services are installed in your server.

3. us (time running user processes with unmodified priority), sy (time running kernel processes), ni (time running user processes with modified priority), wa (time waiting for I/O completion), hi (time spent servicing hardware interrupts), si (time spent servicing software interrupts), st (time stolen from the current vm by the hypervisor – only in virtualized environments).

4. Physical memory usage.

5. Swap space usage.

Example 6: Inspecting physical memory usage

To inspect RAM memory and swap usage you can also use the free command.

# free

Linux Check Memory Usage

Check Linux Memory Usage

Of course you can also use the -m (MB) or -g (GB) switches to display the same information in human-readable form:

# free -m

View Linux Memory Usage

Either way, you need to be aware of the fact that the kernel reserves as much memory as possible and makes it available to processes when they request it. Particularly, the “-/+ buffers/cache” line shows the actual values after this I/O cache is taken into account.

In other words, the amount of memory used by processes and the amount available to other processes (in this case, 232 MB used and 270 MB available, respectively). When processes need this memory, the kernel will automatically decrease the size of the I/O cache.
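
As a quick sketch – assuming the older free output format shown above, which includes a “-/+ buffers/cache” line – you can extract the amount of memory actually available to applications like this:

# free -m | awk '/buffers\/cache/ {print $4 " MB available to applications"}'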

Read Also: 10 Useful “free” Commands to Check Linux Memory Usage

Taking a Closer Look at Processes

At any given time, there are many processes running on our Linux system. There are two tools that we will use to monitor processes closely: ps and pstree.

Example 7: Displaying the whole process list in your system with ps (full standard format)

Using the -e and -f options combined into one (-ef) you can list all the processes that are currently running on your system. You can pipe this output to other tools, such as grep (as explained in Part 1 of the LFCS series) to narrow down the output to your desired process(es):

# ps -ef | grep -i squid | grep -v grep

Monitoring Processes in Linux

The process listing above shows the following information:

the owner of the process, its PID, the parent PID (PPID), processor utilization, the time when the command started, the tty (a ? indicates it’s a daemon), the cumulative CPU time, and the command associated with the process.

Example 8: Customizing and sorting the output of ps

However, perhaps you don’t need all that information, and would like to show the owner of the process, the command that started it, its PID and PPID, and the percentage of memory it’s currently using – in that order, and sort by memory use in descending order (note that ps by default is sorted by PID).

# ps -eo user,comm,pid,ppid,%mem --sort -%mem

Where the minus sign in front of %mem indicates sorting in descending order.

Monitor Linux Processes by Memory Usage

Monitor Linux Process Memory Usage

If for some reason a process starts taking too many system resources and is likely to jeopardize the overall functionality of the system, you will want to stop or pause its execution by passing it one of the following signals using the kill program. Another reason to do this is when you have started a process in the foreground but want to pause it and resume it in the background.

Signal name	Signal number	Description
SIGTERM		15		Kill the process gracefully.
SIGINT		2		Sent when we press Ctrl + C. It aims to interrupt the process, but the process may ignore it.
SIGKILL		9		Also interrupts the process, but does so unconditionally (use with care!), since a process cannot ignore it.
SIGHUP		1		Short for “Hang UP”, this signal instructs daemons to reread their configuration file without actually stopping the process.
SIGTSTP		20		Pauses execution and waits, ready to continue. Sent when we type the Ctrl + Z key combination.
SIGSTOP		19		Pauses the process; it doesn’t get any more CPU cycles until it is restarted.
SIGCONT		18		Tells the process to resume execution after having received either SIGTSTP or SIGSTOP. Sent by the shell when we use the fg or bg commands.
Example 9: Pausing the execution of a running process and resuming it in the background

When the normal execution of a certain process sends no output to the screen while it’s running, you may want to start it in the background (appending an ampersand to the end of the command):

process_name &

or, once it has started running in the foreground, pause it and send it to the background with:

Ctrl + Z
# kill -18 PID
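
Note that 18 is the number of the SIGCONT signal on x86 systems. As a sketch, using the signal name is more portable, and the shell built-in bg achieves the same when working with job control:

# kill -s SIGCONT PID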

Terminate Process in Linux

Kill Process in Linux

Example 10: Forcefully killing a process “gone wild”

Please note that each distribution provides tools to gracefully stop / start / restart / reload common services, such as service in SysV-based systems or systemctl in systemd-based systems.

If a process does not respond to those utilities, you can kill it by force by sending it the SIGKILL signal.

# ps -ef | grep apache
# kill -9 3821

Forcefully Kill Linux Process

So.. What Happened / Is Happening?

When there has been any kind of outage in the system (be it a power outage, a hardware failure, a planned or unplanned interruption of a process, or any abnormality at all), the logs in /var/log are your best friends to determine what happened or what could be causing the issues you’re facing.

# cd /var/log

Monitor Linux Log Files

View Linux Logs

Some of the items in /var/log are regular text files, others are directories, and yet others are compressed files of rotated (historical) logs. You will want to check those with the word error in their name, but inspecting the rest can come in handy as well.

Example 11: Examining logs for errors in processes

Picture this scenario. Your LAN clients are unable to print to network printers. The first step in troubleshooting this situation is going to the /var/log/cups directory and seeing what’s in there.

You can use the tail command to display the last 10 lines of the error_log file, or tail -f error_log for a real-time view of the log.

# cd /var/log/cups
# ls
# tail error_log

Linux View Logfile In Real Time

Monitor Log Files in Real Time

The above screenshot provides some helpful information to understand what could be causing your issue. Note that correcting the malfunctioning process may still not solve the overall problem, but if you get into the habit of checking the logs every time a problem arises (be it a local or a network one), you’ll definitely be on the right track.

Example 12: Examining the logs for hardware failures

Although hardware failures can be tricky to troubleshoot, you should check the dmesg and messages logs and grep for words related to the hardware part presumed faulty.

The image below is taken from /var/log/messages after looking for the word error using the following command:

# less /var/log/messages | grep -i error

We can see that we’re having a problem with two storage devices: /dev/sdb and /dev/sdc, which in turn cause an issue with the RAID array.

Troubleshoot Linux Server

Troubleshooting Linux Problems

Conclusion

In this article we have explored some of the tools that can help you to always be aware of your system’s overall status. In addition, you need to make sure that your operating system and installed packages are updated to their latest stable versions. And never, ever, forget to check the logs! Then you will be headed in the right direction to find the definitive solution to any issues.

Feel free to leave your comments, suggestions, or questions -if you have any- using the form below.

How to Turn a Linux Server into a Router to Handle Traffic Statically and Dynamically – Part 10

As we have anticipated in previous tutorials of this LFCE (Linux Foundation Certified Engineer) series, in this article we will discuss the routing of IP traffic statically and dynamically with specific applications.

Configure Linux as Router

Linux Foundation Certified Engineer – Part 10

Introducing The Linux Foundation Certification Program

First things first, let’s get some definitions straight:

  1. In simple words, a packet is the basic unit used to transmit information within a network. Networks that use TCP/IP as their network protocol follow the same rules for the transmission of data: the actual information is split into packets that are made of both the data and the address it should be sent to.
  2. Routing is the process of “guiding” the data from source to destination inside a network.
  3. Static routing requires a manually-configured set of rules defined in a routing table. These rules are fixed and define the path a packet must take as it travels from one machine to another.
  4. Dynamic routing, or smart routing (if you wish), means that the system can automatically alter the route that a packet follows, as needed.

Advanced IP and Network Device Configuration

The iproute package provides a set of tools for managing networking and traffic control that we will use throughout this article, as they replace legacy tools such as ifconfig and route.

The central utility in the iproute suite is called simply ip. Its basic syntax is as follows:

# ip object command

Where object can be only one of the following (only the most frequent objects are shown – you can refer to man ip for a complete list):

  1. link: network device.
  2. addr: protocol (IP or IPv6) address on a device.
  3. route: routing table entry.
  4. rule: rule in routing policy database.

Whereas command represents a specific action that can be performed on object. You can run the following command to display the complete list of commands that can be applied to a particular object:

# ip object help

For example,

# ip link help

IP Command Help

IP Command Help

The above image shows, for example, that you can change the status of a network interface with the following command:

# ip link set interface {up | down}

For more examples of the ip command, read 10 Useful ‘ip’ Commands to Configure IP Address.

Example 1: Disabling and enabling a network interface

In this example, we will disable and enable eth1:

# ip link show
# ip link set eth1 down
# ip link show

Disable eth1 Interface in Linux

Disable eth1 Interface

If you want to re-enable eth1,

# ip link set eth1 up

Instead of displaying all the network interfaces, we can specify one of them:

# ip link show eth1

Which will return all the information for eth1.

Example 2: Displaying the main routing table

You can view your current main routing table with any of the following 3 commands:

# ip route show
# route -n
# netstat -rn

Check route in Linux

Check Linux Route Table

The first column in the output of the three commands indicates the target network. The output of ip route show (following the keyword dev) also presents the network devices that serve as physical gateway to those networks.

Although nowadays the ip command is preferred over route, you can still refer to man ip-route and man route for a detailed explanation of the rest of the columns.

Example 3: Using a Linux server to route packets between two private networks

We want to route icmp (ping) packets from dev1 to dev4 and the other way around as well (note that both client machines are on different networks). The name of each NIC, along with its corresponding IPv4 address, is given inside square brackets.

Our test environment is as follows:

Client 1: CentOS 7 [enp0s3: 192.168.0.17/24] - dev1
Router: Debian Wheezy 7.7 [eth0: 192.168.0.15/24, eth1: 10.0.0.15/24] - dev2
Client 2: openSUSE 13.2 [enp0s3: 10.0.0.18/24] - dev4

Let’s view the routing table in dev1 (CentOS box):

# ip route show

and then modify it in order to use its enp0s3 NIC and the connection to 192.168.0.15 to access hosts in the 10.0.0.0/24 network:

# ip route add 10.0.0.0/24 via 192.168.0.15 dev enp0s3

Which essentially reads, “Add a route to the 10.0.0.0/24 network through the enp0s3 network interface using 192.168.0.15 as gateway”.

Route Network in Linux

Likewise in dev4 (openSUSE box) to ping hosts in the 192.168.0.0/24 network:

# ip route add 192.168.0.0/24 via 10.0.0.15 dev enp0s3

Network Routing in Linux

Finally, we need to enable forwarding in our Debian router:

# echo 1 > /proc/sys/net/ipv4/ip_forward

Now let’s ping:

Check Network Routing

and,

Route Ping Status

To make these settings persistent across boots, edit /etc/sysctl.conf on the router and make sure the net.ipv4.ip_forward variable is set to 1 as follows:

net.ipv4.ip_forward = 1
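
To apply the change immediately, without waiting for a reboot, a quick sketch:

# sysctl -p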

In addition, configure the NICs on both clients (look for the configuration file within /etc/sysconfig/network on openSUSE and /etc/sysconfig/network-scripts on CentOS – in both cases it’s called ifcfg-enp0s3).

Here’s the configuration file from the openSUSE box:

BOOTPROTO=static
BROADCAST=10.0.0.255
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.15
NAME=enp0s3
NETWORK=10.0.0.0
ONBOOT=yes
Example 4: Using a Linux server to route packets between a private network and the Internet

Another scenario where a Linux machine can be used as a router is when you need to share your Internet connection with a private LAN.

Router: Debian Wheezy 7.7 [eth0: Public IP, eth1: 10.0.0.15/24] - dev2
Client: openSUSE 13.2 [enp0s3: 10.0.0.18/24] - dev4

In addition to setting up packet forwarding and the static routing table in the client as in the previous example, we need to add a few iptables rules to the router:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
# iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

The first command adds a rule to the POSTROUTING chain in the nat (Network Address Translation) table, indicating that the eth0 NIC should be used for outgoing packets.

MASQUERADE indicates that this NIC has a dynamic IP and that before sending the packet to the “wild wild world” of the Internet, the private source address of the packet has to be changed to the public IP of the router.

In a LAN with many hosts, the router keeps track of established connections in /proc/net/ip_conntrack so it knows where to return the response from the Internet to.

Only part of the output of:

# cat /proc/net/ip_conntrack

is shown in the following screenshot.

Route Packets in Linux

Where the origin (private IP of the openSUSE box) and destination (Google DNS) of the packets are highlighted. This was the result of running:

# curl www.tecmint.com

on the openSUSE box.

As I’m sure you can already guess, the router is using Google’s 8.8.8.8 as nameserver, which explains why the destination of outgoing packets points to that address.

Note that incoming packets from the Internet are only accepted if they are part of an already established connection (command #2), while outgoing packets are allowed “free exit” (command #3).

Don’t forget to make your iptables rules persistent following the steps outlined in Part 8 – Configure Iptables Firewall of this series.

Dynamic Routing with Quagga

Nowadays, the tool most used for dynamic routing in Linux is quagga. It allows system administrators to implement, with a relatively low-cost Linux server, the same functionality that is provided by powerful (and costly) Cisco routers.

The tool itself does not handle the routing, but rather modifies the kernel routing table as it learns new best routes to handle packets.

Since it’s a fork of zebra, a program whose development ceased a while ago, it maintains for historical reasons the same commands and structure as zebra. That is why you will see many references to zebra from this point on.

Please note that it is not possible to cover dynamic routing and all the related protocols in a single article, but I am confident that the content presented here will serve as a starting point for you to build on.

Installing Quagga in Linux

To install quagga on your chosen distribution:

# aptitude update && aptitude install quagga 				[On Ubuntu]
# yum update && yum install quagga 					[CentOS/RHEL]
# zypper refresh && zypper install quagga 				[openSUSE]

We will use the same environment as with Example #3, with the only difference that eth0 is connected to a main gateway router with IP 192.168.0.1.

Next, edit /etc/quagga/daemons to enable the zebra and ripd daemons:

zebra=1
ripd=1

Now create the following configuration files:

/etc/quagga/zebra.conf
/etc/quagga/ripd.conf

and add these lines to each (replace with a hostname and password of your choice):

hostname    	dev2
password    	quagga

Then restart the service:

# service quagga restart

Install Quagga in Linux

Start Quagga Service

Note that ripd.conf is the configuration file for the Routing Information Protocol (RIP), which provides the router with the information about which networks can be reached and how far away (in terms of number of hops) they are.

Note that this is only one of the protocols that can be used along with quagga, and I chose it for this tutorial due to its ease of use and because most network devices support it, although it has the disadvantage of passing credentials in plain text. For that reason, you need to assign proper permissions to the configuration files:

# chown quagga:quaggavty /etc/quagga/*.conf
# chmod 640 /etc/quagga/*.conf 
Example 5: Setting up quagga to route IP traffic dynamically

In this example we will use the following setup with two routers (make sure to create the configuration files for router #2 as explained previously):

Configure Quagga in Linux

Configure Quagga

Important: Don’t forget to repeat the following setup for both routers.

Connect to zebra (listening on port 2601), which is the logical intermediary between the router and the kernel:

# telnet localhost 2601

Enter the password that was set in the /etc/quagga/zebra.conf file, and then enable configuration:

enable
configure terminal

Enter the IP address and network mask of each NIC:

inter eth0
ip addr 192.168.0.15/24
inter eth1
ip addr 10.0.0.15/24
exit
exit
write

Configure Router in Linux

Configure Router

Now we need to connect to the RIP daemon terminal (port 2602):

# telnet localhost 2602

Enter the password as configured in the /etc/quagga/ripd.conf file, and then type the following commands (explanations are added for the sake of clarification):

enable 			(turns on privileged mode)
configure terminal 	(changes to configuration mode)
router rip 		(enables RIP)
network 10.0.0.0/24 	(enables RIP on the interface for the 10.0.0.0/24 network)
exit
exit
write 			(writes the current configuration to the configuration file)

Enable Router in Linux

Enable Router

Note that in both cases the configuration is appended to the lines that we added previously (/etc/quagga/zebra.conf and /etc/quagga/ripd.conf).

Finally, connect again to the zebra service on both routers and note how each one of them has “learned” the route to the network that is behind the other, and which is the next hop to get to that network, by running the command show ip route:

# show ip route

Check IP Routing Table in Linux

Check IP Routing

If you want to try different protocols or setups, you may want to refer to the Quagga project site for further documentation.

Conclusion

In this article we have explained how to set up static and dynamic routing using Linux boxes as routers. Feel free to add as many routers as you wish, and to experiment as much as you want. Do not hesitate to get back to us using the contact form below if you have any comments or questions.

How to Setup a Network Repository to Install or Update Packages – Part 11

Installing, updating, and removing (when needed) installed programs are key responsibilities in a system administrator’s daily life. When a machine is connected to the Internet, these tasks can be easily performed using a package management system such as aptitude (or apt-get), yum, or zypper, depending on your chosen distribution, as explained in Part 9 – Linux Package Management of the LFCE (Linux Foundation Certified Engineer) series. You can also download standalone .deb or .rpm files and install them with dpkg or rpm, respectively.

Setup Yum Local Repository in CentOS 7

Linux Foundation Certified Engineer – Part 11

Introducing The Linux Foundation Certification Program

However, when a machine does not have access to the world wide web, other methods are necessary. Why would anyone want that? The reasons range from saving Internet bandwidth (thus avoiding several concurrent connections to the outside) to serving packages compiled locally from source, including the possibility of providing packages that for legal reasons (for example, software that is restricted in some countries) cannot be included in official repositories.

That is precisely where network repositories come into play, which is the central topic of this article.

Our Testing Environment
Network Repository Server:	CentOS 7 [enp0s3: 192.168.0.17] - dev1
Client Machine:			CentOS 6.6 [eth0: 192.168.0.18] - dev2

Setting Up a Network Repository Server on CentOS 7

As a first step, we will handle the installation and configuration of a CentOS 7 box as a repository server [IP address 192.168.0.17] and a CentOS 6.6 machine as client. The setup for openSUSE is almost identical.

For CentOS 7, follow the articles below, which explain step by step how to install CentOS 7 and set up a static IP address.

  1. Installation of CentOS 7.0 with Screenshots
  2. How to Configure Network Static IP Address on CentOS 7

As for Ubuntu, there is a great article on this site that explains, step by step, how to set up your own private repository.

  1. Setup Local Repositories with ‘apt-mirror’ in Ubuntu

Our first choice is the way in which clients will access the repository server – FTP and HTTP are the most widely used. We will choose the latter, as the Apache installation was covered in Part 1 – Installing Apache of this LFCE series. This will also allow us to display the package listing using a web browser.

Next, we need to create directories to store the .rpm packages. We will create subdirectories within /var/www/html/repos accordingly. For our convenience, we may also want to create other subdirectories to host packages for different versions of each distribution (of course we can still add as many directories as needed later) and even different architectures.

Setting Up the Repository

An important thing to take into consideration when setting up your own repository is that you will need a considerable amount of available disk space (~20 GB). If you don’t have it, resize the file system where you’re planning to store the repository’s contents, or better yet, add an extra dedicated storage device to host the repository.

That being said, we will begin by creating the directories that we will need to host the repository:

# mkdir -p /var/www/html/repos/centos/6/6

After we have created the directory structure for our repository server, we will initialize in /var/www/html/repos/centos/6/6 the database that keeps track of packages and their corresponding dependencies, using createrepo.

Install createrepo if you haven’t already done so:

# yum update && yum install createrepo

Then initialize the database,

# createrepo /var/www/html/repos/centos/6/6

Createrepo Repository Initialization

Updating the Repository

Assuming that the repository server has access to the Internet, we will pull an online repository to get the latest updates of packages. If that is not the case, you can still copy the entire contents of the Packages directory from a CentOS 6.6 installation DVD.

In this tutorial we will assume the first case. In order to optimize our download speed, we will choose a CentOS 6.6 mirror from a location near us. Go to the CentOS download mirrors page and pick the one that is closest to your location (Argentina in my case):

Select CentOS Download Mirror

Then navigate to the os directory inside the highlighted link and choose the appropriate architecture. Once there, copy the link in the address bar and download the contents to the dedicated directory on the repository server:

Download CentOS Mirror

# rsync -avz rsync://centos.ar.host-engine.com/6.6/os/x86_64/ /var/www/html/repos/centos/6/6/ 

If the chosen mirror turns out to be offline for some reason, go back and choose a different one. No big deal.

Now is the time when you may want to relax and maybe watch an episode of your favourite TV show, because mirroring the online repository may take quite a while.

Once the download has completed, you can verify the usage of disk space with:

# du -sch /var/www/html/repos/centos/6/6/*

Check CentOS Mirror Size

Finally, update the repository’s database.

# createrepo --update /var/www/html/repos/centos/6/6

You may also want to launch your web browser and navigate to the repos/centos/6/6 directory so as to verify that you can see the contents:

Verify CentOS Packages

And you’re ready to go – now it’s time to configure the client.

Configuring the Client Machine

You can add the configuration files for custom repositories in the /etc/yum.repos.d directory. Configuration files need to end in .repo and follow this basic structure:

[repository_name]
Description
URL

Most likely, there will already be other .repo files in /etc/yum.repos.d. To properly test your repository, you can either delete those configuration files (not really recommended, since you may need them later) or rename them, as I did, by appending .orig to each file name:

# cd /etc/yum.repos.d
# for i in $(ls *.repo); do mv $i $i.orig; done

Backup Yum Repositories

In our case, we will name our configuration file tecmint.repo and insert the following lines:

[tecmint]
name=Example repo for Part 11 of the LFCE series on Tecmint.com
baseurl=http://192.168.0.17/repos/centos/6/6/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6

In addition, we have enabled GPG signature-checking for all packages in our repository.

Using the Repository

Once the client has been properly configured, you can issue the usual yum commands to query the repository. Note how yum info subversion indicates that the information about the package is coming from our newly created repository:

# yum info subversion

Check Yum Repository Info

Or you can install or update an already installed package:

# yum { install | update } package

For example,

# yum update && yum install subversion

Yum Install Package

Keeping the Repository Up-to-date

To make sure our repository is always current, we need to synchronize it with the online repositories on a periodic basis. We can use rsync for this task as well (as explained in Part 6 of the LFCS series, rsync allows us to synchronize two directories, one local and one remote).

Run the rsync command that was used to initially download the repository through a cron job and you’re good to go. Remember to schedule the cron job at a time of day when the update will not negatively impact the available bandwidth.

For example, if you want to update your repository every day beginning at 2:30 AM:

30 2 * * * rsync -avz rsync://centos.ar.host-engine.com/6.6/os/x86_64/ /var/www/html/repos/centos/6/6/ 

Important: Make sure to schedule the above cron job on the CentOS 7 server to keep your repository up-to-date.

Of course, you can put that line inside a script to do more complex and customized tasks before or after performing the update. Feel free to experiment and tell me about the results.
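
For example, a minimal sketch of such a wrapper script (paths and mirror URL are the ones used earlier in this article; adapt them to your own setup):

#!/bin/bash
# Hypothetical sketch: mirror the remote repository, then refresh the local metadata.
REPO_DIR=/var/www/html/repos/centos/6/6
rsync -avz rsync://centos.ar.host-engine.com/6.6/os/x86_64/ $REPO_DIR/
createrepo --update $REPO_DIR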

Conclusion

You should never underestimate the importance of a local or network repository given the many benefits it brings as I explained in this article. If you can afford the disk space, this is definitely the way to go. I look forward to hearing from you and don’t hesitate to let me know if you have any questions.

How to Audit Network Performance, Security, and Troubleshooting in Linux – Part 12

A sound analysis of a computer network begins with understanding the available tools to perform the task, how to pick the right one(s) for each step of the way, and, last but not least, where to begin.

This is the last part of the LFCE (Linux Foundation Certified Engineer) series. Here we will review some well-known tools to examine the performance and increase the security of a network, and what to do when things aren’t going as expected.

Audit Linux Systems

Linux Foundation Certified Engineer – Part 12

Introducing The Linux Foundation Certification Program

Please note that this list does not pretend to be comprehensive, so feel free to comment on this post using the form at the bottom if you would like to add another useful utility that we could be missing.

What Services are Running and Why?

One of the first things that a system administrator needs to know about each system is what services are running and why. With that information in hand, it is a wise decision to disable all those that are not strictly necessary, and to avoid hosting too many services on the same physical machine.

For example, you need to disable your FTP server if your network does not require one (there are more secure methods to share files over a network, by the way). In addition, you should avoid having a web server and a database server in the same system. If one component becomes compromised, the rest run the risk of getting compromised as well.

Investigating Socket Connections with ss

ss is used to dump socket statistics and shows information similar to netstat, though it can display more TCP and state information than other tools. In addition, it is listed in man netstat as a replacement for netstat, which is obsolete.

However, in this article we will focus on the information related to network security only.

Example 1: Showing ALL TCP ports (sockets) that are open on our server

All services running on their default ports (e.g. http on 80, mysql on 3306) are indicated by their respective names. Others (obscured here for privacy reasons) are shown in their numeric form.

# ss -t -a

Linux ss Command

Check All Open TCP Ports

The first column shows the TCP state, while the second and third column display the amount of data that is currently queued for reception and transmission. The fourth and fifth columns show the source and destination sockets of each connection.
On a side note, you may want to check RFC 793 to refresh your memory about possible TCP states because you also need to check on the number and the state of open TCP connections in order to become aware of (D)DoS attacks.
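
As a quick sketch, you can count the current connections grouped by TCP state like this:

# ss -tan | awk 'NR > 1 {print $1}' | sort | uniq -c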

Example 2: Displaying ALL active TCP connections with their timers
# ss -t -o

Display all Active Connections

Check all Active Connections

In the output above, you can see that there are 2 established SSH connections. If you look at the second field of timer:, you will notice a value of 36 minutes in the first connection. That is the amount of time until the next keepalive probe will be sent.

Since it’s a connection that is being kept alive, you can safely assume that it is an inactive connection, and thus you can kill the process after finding out its PID.

Linux Kill Active Process

Kill Active Process

As for the second connection, you can see that it’s currently being used (as indicated by on).

Example 3: Filtering connections by socket

Suppose you want to filter TCP connections by socket. From the server’s point of view, you need to check for connections where the source port is 80.

# ss -tn sport = :80

Resulting in:

Filter Connections by Socket

Protecting Against Port Scanning with NMAP

Port scanning is a common technique used by crackers to identify active hosts and open ports on a network. Once a vulnerability is discovered, it is exploited in order to gain access to the system.

A wise sysadmin needs to check how his or her systems are seen by outsiders, and make sure nothing is left to chance by auditing them frequently. That is called “defensive port scanning”.

Example 4: Displaying information about open ports

You can use the following command to scan which ports are open on your system or in a remote host:

# nmap -A -sS [IP address or hostname]

The above command will scan the host for OS and version detection, port information, and traceroute (-A). Finally, -sS performs a TCP SYN scan, preventing nmap from completing the 3-way TCP handshake and thus typically leaving no logs on the target machine.

Before proceeding with the next example, please keep in mind that port scanning is not an illegal activity. What IS illegal is using the results for a malicious purpose.

For example, the output of the above command run against the main server of a local university returns the following (only part of the result is shown for sake of brevity):

Check Open Ports in Linux

Check Open Ports

As you can see, we discovered several anomalies that we would do well to report to the system administrators at this local university.

This specific port scan operation provides all the information that can also be obtained by other commands, such as:

Example 5: Displaying information about a specific port in a local or remote system
# nmap -p [port] [hostname or address]
Example 6: Showing the traceroute to a host, and finding out the version of its services and OS type
# nmap -A [hostname or address]
Example 7: Scanning several ports or hosts simultaneously

You can also scan several ports (range) or subnets, as follows:

# nmap -p 21,22,80 192.168.0.0/24 

Note that the above command scans ports 21, 22, and 80 on all hosts in that network segment.

You can check the man page for further details on how to perform other types of port scanning. Nmap is indeed a very powerful and versatile network mapper utility, and you should be very well acquainted with it in order to defend the systems you’re responsible for against attacks originating from a malicious port scan by outsiders.

Reporting Usage and Performance on Your Network

Although there are several available tools to analyze and troubleshoot network performance, two of them are very easy to learn and user friendly.

To install both of them on CentOS, you will need to enable the EPEL repository first.
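
A minimal sketch of enabling EPEL on CentOS (the epel-release package is provided by the CentOS Extras repository):

# yum install epel-release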

1. nmon utility

nmon is a system tuner and benchmark tool. As such, it can display the CPU, memory, network, disks, file systems, NFS, top processes, and resources (Linux version & processors). Of course, we’re mainly interested in the network performance feature.

To install nmon, run the following command on your chosen distribution:

# yum update && yum install nmon 				[On CentOS]
# aptitude update && aptitude install nmon 		[On Ubuntu]
# zypper refresh && zypper install nmon 		[On openSUSE]

Install Nmon Tool

Network Reporting Usage and Performance

Make it a habit to look at the network traffic in real time to ensure that your system is capable of supporting normal loads and to watch out for unnecessary traffic or suspicious activity.

2. vnstat utility

vnstat is a console-based network traffic monitor that keeps a log of hourly (daily or monthly as well) network traffic for the selected interface(s).

# yum update && yum install vnstat				[On CentOS]
# aptitude update && aptitude install vnstat		[On Ubuntu]
# zypper refresh && zypper install vnstat		[On openSUSE]

After installing the package, you need to enable the monitoring daemon as follows:

# service vnstat start 					[On SysV-based systems (Ubuntu)]
# systemctl start vnstat 				[On systemd-based systems (CentOS / openSUSE)]

Once you have installed and enabled vnstat, you can initialize the database to record traffic for eth0 (or other NIC) as follows:

# vnstat -u -i eth0

As I have just installed vnstat on the machine that I’m using to write this article, I haven’t gathered enough data to display usage statistics yet:

Linux Network Traffic Monitor

Network Traffic Monitor

The vnstatd daemon will continue running in the background and collecting traffic data. Until it collects enough data to produce output, you can refer to the project’s web site to see what the traffic analysis looks like.
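
Once enough data has been collected, a sketch of the typical queries (these switches are available in classic vnstat versions):

# vnstat -h		(hourly traffic)
# vnstat -d		(daily traffic)
# vnstat -m		(monthly traffic)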

Transferring Files Securely Over the Network

If you need to ensure security while transferring or receiving files over a network, and especially if you need to perform that operation over the Internet, you will want to resort to two secure methods for file transfers (don’t even think about doing it over plain FTP!).

Example 8: Transferring files with scp (secure copy)

Use the -P flag if SSH on the remote host is listening on a port other than the default 22. The -p switch will preserve the permissions of local_file after the transfer, which will be made with the credentials of remote_user on remote_host. You will need to make sure that /absolute/path/to/remote/directory is writable by this user.

# scp -P XXXX -p local_file remote_user@remote_host:/absolute/path/to/remote/directory
Example 9: Receiving files with scp (secure copy)

You can also download files with scp from a remote host:

# scp remote_user@remote_host:myFile.txt /absolute/path/to/local/directory

Or even between two remote hosts (in this case, copy the file myFile.txt from remote_host1 to remote_host2):

# scp remote_user1@remote_host1:/absolute/path/to/remote/directory1/myFile.txt remote_user1@remote_host2:/absolute/path/to/remote/directory2/

Don’t forget to use the -P switch if SSH is listening on a port other than the default 22.

You can read more about SCP commands.

Example 10: Sending and receiving files with SFTP

Unlike scp, sftp does not require knowing in advance the location of the file that we want to download or send.

This is the basic syntax to connect to a remote host using SFTP:

# sftp -oPort=XXXX username@host

Where XXXX represents the port where SSH is listening on host, which can be either a hostname or its corresponding IP address. You can disregard the -oPort flag if SSH is listening on its default port (22).

Once the connection is successful, you can issue the following commands to send or receive files:

get -Pr [remote file or directory] # Receive files
put -r [local file or directory] # Send files

In both cases, the -r switch is used to recursively send or receive files. In the first case, the -P option will also preserve the original file permissions.

To close the connection, simply type “exit” or “bye”. You can read more about the sftp command.

Summing Up

You may want to complement what we have covered in this article with what we’ve already learned in other tutorials of this series, for example Part 8: How To Setup an Iptables Firewall.

If you know your systems well, you will be able to easily detect malicious or suspicious activity when the numbers show unusual activity without an apparent reason. You will also be able to plan ahead for network resources if you’re expecting a sudden increase in their use.

As a final note, remember to shut down the ports that are not strictly needed, or change default ports to a higher number (~20000) to prevent common port scans from detecting them.

As always, don’t hesitate to let us know if you have any questions or concerns about this article.

Source

LFCS (Linux Foundation Certified Sysadmin)

LFCS: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux – Part 1

The Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, a new program that aims at helping individuals all over the world to get certified in basic to intermediate system administration tasks for Linux systems. This includes supporting running systems and services, along with first-hand troubleshooting and analysis, and smart decision-making to escalate issues to engineering teams.

Linux Foundation Certified Sysadmin

Linux Foundation Certified Sysadmin – Part 1

Please watch the following video that demonstrates The Linux Foundation Certification Program.

The series will be titled Preparation for the LFCS (Linux Foundation Certified Sysadmin) Parts 1 through 10 and cover the following topics for Ubuntu, CentOS, and openSUSE:

Part 1: How to use GNU ‘sed’ Command to Create, Edit, and Manipulate files in Linux

Important: Due to changes in the LFCS certification requirements effective Feb. 2, 2016, we are including the following necessary topics in the LFCS series published here. To prepare for this exam, you are highly encouraged to use the LFCE series as well.

This post is Part 1 of a 20-tutorial series, which will cover the necessary domains and competencies that are required for the LFCS certification exam. That being said, fire up your terminal, and let’s start.

Processing Text Streams in Linux

Linux treats the input to and the output from programs as streams (or sequences) of characters. To begin understanding redirection and pipes, we must first understand the three most important types of I/O (Input and Output) streams – stdin, stdout, and stderr – which are in fact special files (by convention in UNIX and Linux, data streams and peripherals, or device files, are also treated as ordinary files).

The difference between > (redirection operator) and | (pipeline operator) is that while the first connects a command with a file, the latter connects the output of a command with another command.

# command > file
# command1 | command2

Since the redirection operator creates or overwrites files silently, we must use it with extreme caution, and never mistake it with a pipeline. One advantage of pipes on Linux and UNIX systems is that there is no intermediate file involved with a pipe – the stdout of the first command is not written to a file and then read by the second command.

For the following practice exercises we will use the poem “A happy child” (anonymous author).

cat command

cat command example

Using sed

The name sed is short for stream editor. For those unfamiliar with the term, a stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).

The most basic (and popular) usage of sed is the substitution of characters. We will begin by changing every occurrence of the lowercase y to UPPERCASE Y and redirecting the output to ahappychild2.txt. The g flag indicates that sed should perform the substitution for all instances of term on every line of file. If this flag is omitted, sed will replace only the first occurrence of term on each line.

Basic syntax:
# sed 's/term/replacement/flag' file
Our example:
# sed 's/y/Y/g' ahappychild.txt > ahappychild2.txt

sed command

sed command example

Should you want to search for or replace a special character (such as /, \, or &), you need to escape it in the term or replacement strings with a backslash.

For example, we will substitute the word and with an ampersand. At the same time, we will replace the word I with You when the former is found at the beginning of a line.

# sed 's/and/\&/g;s/^I/You/g' ahappychild.txt

sed replace string

In the above command, a ^ (caret sign) is a well-known regular expression that is used to represent the beginning of a line.

As you can see, we can combine two or more substitution commands (and use regular expressions inside them) by separating them with a semicolon and enclosing the set inside single quotes.

Another use of sed is showing (or deleting) a chosen portion of a file. In the following example, we will display the first 5 lines of /var/log/messages from Jun 8.

# sed -n '/^Jun  8/ p' /var/log/messages | sed -n 1,5p

Note that by default, sed prints every line. We can override this behaviour with the -n option and then tell sed to print (indicated by p) only the part of the file (or the pipe) that matches the pattern (Jun 8 at the beginning of line in the first case and lines 1 through 5 inclusive in the second case).

Finally, while inspecting scripts or configuration files, it can be useful to view the code itself and leave out comments. The following sed one-liner deletes (d) blank lines or those starting with # (the | character indicates a boolean OR between the two regular expressions).

# sed '/^#\|^$/d' apache2.conf

sed match string

uniq Command

The uniq command allows us to report or remove duplicate lines in a file, writing to stdout by default. We must note that uniq does not detect repeated lines unless they are adjacent. Thus, uniq is commonly used along with a preceding sort (which is used to sort lines of text files). By default, sort takes the first field (separated by spaces) as key field. To specify a different key field, we need to use the -k option.
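
For instance, here is a small sketch using the sortuniq.txt file introduced below (assuming it is colon-delimited with the donation amount in the third field): -t sets the field separator, -k3 selects the third field as key, and -n sorts numerically.

# sort -t: -k3 -n sortuniq.txt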

Examples

The du -sch /path/to/directory/* command returns the disk space usage per subdirectory and file within the specified directory in human-readable format (it also shows a total per directory), but it orders the output by subdirectory and file name rather than by size. We can use the following command to sort by size.

# du -sch /var/* | sort -h

sort command example

You can count the number of events in a log by date by telling uniq to perform the comparison using the first 6 characters (-w 6) of each line (where the date is specified), and prefixing each output line by the number of occurrences (-c) with the following command.

# cat /var/log/mail.log | uniq -c -w 6

Count Numbers in File

Finally, you can combine sort and uniq (as they usually are). Consider the following file with a list of donors, donation date, and amount. Suppose we want to know how many unique donors there are. We will use the following command to cut the first field (fields are delimited by a colon), sort by name, and remove duplicate lines.

# cat sortuniq.txt | cut -d: -f1 | sort | uniq

Find Unique Records in File

Read Also: 13 “cat” Command Examples

grep Command

grep searches text files (or command output) for the occurrence of a specified regular expression and outputs any line containing a match to standard output.

Examples

Display the information from /etc/passwd for user gacanepa, ignoring case.

# grep -i gacanepa /etc/passwd

grep command example

Show all the contents of /etc whose name begins with rc followed by any single number.

# ls -l /etc | grep 'rc[0-9]'

List Content Using grep

Read Also: 12 “grep” Command Examples

tr Command Usage

The tr command can be used to translate (change) or delete characters from stdin, and write the result to stdout.

Examples

Change all lowercase to uppercase in sortuniq.txt file.

# cat sortuniq.txt | tr '[:lower:]' '[:upper:]'

Sort Strings in File

Squeeze the delimiter in the output of ls -l to only one space.

# ls -l | tr -s ' '

Squeeze Delimiter

cut Command Usage

The cut command extracts portions of input lines (from stdin or files) and displays the result on standard output, based on number of bytes (-b option), characters (-c), or fields (-f). In this last case (based on fields), the default field separator is a tab, but a different delimiter can be specified by using the -d option.

Examples

Extract the user accounts and the default shells assigned to them from /etc/passwd (the -d option allows us to specify the field delimiter, and the -f switch indicates which field(s) will be extracted).

# cat /etc/passwd | cut -d: -f1,7

Extract User Accounts

Summing up, we will create a text stream consisting of the first and third non-blank fields of the output of the last command. We will use grep as a first filter to check for sessions of user gacanepa, then squeeze delimiters to only one space (tr -s ' '). Next, we'll extract the first and third fields with cut, and finally sort by the second field (IP addresses in this case), showing unique entries.

# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3 | sort -k2 | uniq

last command example

The above command shows how multiple commands and pipes can be combined so as to obtain filtered data according to our desires. Feel free to also run it by parts, to help you see the output that is pipelined from one command to the next (this can be a great learning experience, by the way!).
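
For instance, you could rebuild the same pipeline in stages and inspect each intermediate result:

# last | grep gacanepa
# last | grep gacanepa | tr -s ' '
# last | grep gacanepa | tr -s ' ' | cut -d' ' -f1,3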

Summary

Although these examples (along with the rest of the examples in the current tutorial) may not seem very useful at first sight, they are a nice starting point to begin experimenting with commands that are used to create, edit, and manipulate files from the Linux command line. Feel free to leave your questions and comments below – they will be much appreciated!

Reference Links
  1. About the LFCS
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCS exam

LFCS: How to Install and Use vi/vim as a Full Text Editor – Part 2

A couple of months ago, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification in order to help individuals from all over the world to verify they are capable of doing basic to intermediate system administration tasks on Linux systems: system support, first-hand troubleshooting and maintenance, plus intelligent decision-making to know when it’s time to raise issues to upper support teams.

Learning VI Editor in Linux

This post is Part 2 of a 10-tutorial series. In this part, we will cover the basic file editing operations and modes of the vi/m editor, as required for the LFCS certification exam.

Perform Basic File Editing Operations Using vi/m

Vi was the first full-screen text editor written for Unix. Although it was intended to be small and simple, it can be a bit challenging for people used exclusively to GUI text editors, such as Notepad++ or gedit, to name a few examples.

To use Vi, we must first understand the 3 modes in which this powerful program operates, in order to begin learning later about its powerful text-editing procedures.

Please note that most modern Linux distributions ship with a variant of vi known as vim (“Vi improved”), which supports more features than the original vi does. For that reason, throughout this tutorial we will use vi and vim interchangeably.

If your distribution does not have vim installed, you can install it as follows.

  1. Ubuntu and derivatives: aptitude update && aptitude install vim
  2. Red Hat-based distributions: yum update && yum install vim
  3. openSUSE: zypper update && zypper install vim

Why should I want to learn vi?

There are at least 2 good reasons to learn vi.

1. vi is always available (no matter what distribution you’re using) since it is required by POSIX.

2. vi does not consume a considerable amount of system resources and allows us to perform any imaginable task without lifting our fingers from the keyboard.

In addition, vi has a very extensive built-in manual, which can be launched using the :help command right after the program is started. This built-in manual contains more information than vi/m’s man page.

vi Man Pages

vi Man Pages

Launching vi

To launch vi, type vi at your command prompt.

Start vi Editor

Start vi Editor

Then press i to enter Insert mode, and you can start typing. Another way to launch vi/m is:

# vi filename

This will open a new buffer (more on buffers later) named filename, which you can later save to disk.

Understanding Vi modes

1. In command mode, vi allows the user to navigate around the file and enter vi commands, which are brief, case-sensitive combinations of one or more letters. Almost all of them can be prefixed with a number to repeat the command that number of times.

For example, yy (or Y) copies the entire current line, whereas 3yy (or 3Y) copies the entire current line along with the next two lines (3 lines in total). We can always enter command mode (regardless of the mode we're working in) by pressing the Esc key. The fact that in command mode the keyboard keys are interpreted as commands instead of text tends to be confusing to beginners.

2. In ex mode, we can manipulate files (including saving a current file and running outside programs). To enter this mode, we must type a colon (:) from command mode, directly followed by the name of the ex-mode command that needs to be used. After that, vi returns automatically to command mode.

3. In insert mode (the letter i is commonly used to enter this mode), we simply enter text. Most keystrokes result in text appearing on the screen (one important exception is the Esc key, which exits insert mode and returns to command mode).

vi Insert Mode

Vi Commands

The following table shows a list of commonly used vi commands. File editing commands can be enforced by appending the exclamation sign to the command (for example, :q! enforces quitting without saving).

 Key command  Description
 h or left arrow  Go one character to the left
 j or down arrow  Go down one line
 k or up arrow  Go up one line
 l (lowercase L) or right arrow  Go one character to the right
 H  Go to the top of the screen
 L  Go to the bottom of the screen
 G  Go to the end of the file
 w  Move one word to the right
 b  Move one word to the left
 0 (zero)  Go to the beginning of the current line
 ^  Go to the first nonblank character on the current line
 $  Go to the end of the current line
 Ctrl-B  Go back one screen
 Ctrl-F  Go forward one screen
 i  Insert at the current cursor position
 I (uppercase i)  Insert at the beginning of the current line
 J (uppercase j)  Join current line with the next one (move next line up)
 a  Append after the current cursor position
 o (lowercase o)  Creates a blank line after the current line
 O (uppercase O)  Creates a blank line before the current line
 r  Replace the character at the current cursor position
 R  Overwrite at the current cursor position
 x  Delete the character at the current cursor position
 X  Delete the character immediately before (to the left) of the current cursor position
 dd  Cut (for later pasting) the entire current line
 D  Cut from the current cursor position to the end of the line (this command is equivalent to d$)
 yX  Give a movement command X, copy (yank) the appropriate number of characters, words, or lines from the current cursor position
 yy or Y  Yank (copy) the entire current line
 p  Paste after (next line) the current cursor position
 P  Paste before (previous line) the current cursor position
 . (period)  Repeat the last command
 u  Undo the last command
 U  Undo the last command in the last line. This will work as long as the cursor is still on the line.
 n  Find the next match in a search
 N  Find the previous match in a search
 :n  Next file; when multiple files are specified for editing, this command loads the next file.
 :e file  Load file in place of the current file.
 :r file  Insert the contents of file after (next line) the current cursor position
 :q  Quit without saving changes.
 :w file  Write the current buffer to file. To append to an existing file, use :w >> file.
 :wq  Write the contents of the current file and quit. Equivalent to :x and ZZ
 :r! command  Execute command and insert output after (next line) the current cursor position.

Vi Options

The following options can come in handy while running vim (we need to add them in our ~/.vimrc file).

# echo set number >> ~/.vimrc
# echo syntax on >> ~/.vimrc
# echo set tabstop=4 >> ~/.vimrc
# echo set autoindent >> ~/.vimrc

vi Editor Options

  1. set number shows line numbers when vi opens an existing or a new file.
  2. syntax on turns on syntax highlighting (for multiple file extensions) in order to make code and config files more readable.
  3. set tabstop=4 sets the tab size to 4 spaces (default value is 8).
  4. set autoindent carries over previous indent to the next line.

Search and replace

vi has the ability to move the cursor to a certain location (on a single line or over an entire file) based on searches. It can also perform text replacements with or without confirmation from the user.

a). Searching within a line: the f command searches a line and moves the cursor to the next occurrence of a specified character in the current line.

For example, the command fh would move the cursor to the next instance of the letter h within the current line. Note that neither the letter f nor the character you're searching for will appear anywhere on your screen; the cursor simply jumps to the matching character.

For example, this is what I get after pressing f4 in command mode.

Search String in Vi

b). Searching an entire file: use the / command, followed by the word or phrase to be searched for. A search may be repeated in the same direction with the n command, or in the opposite direction with the N command. This is the result of typing /Jane in command mode.

Vi Search String in File

c). vi uses a command (similar to sed’s) to perform substitution operations over a range of lines or an entire file. To change the word “old” to “young” for the entire file, we must enter the following command.

 :%s/old/young/g 

Note the colon at the beginning of the command.

Vi Search and Replace

The colon (:) starts the ex command, s in this case (for substitution), % is a shortcut meaning from the first line to the last line (the range can also be specified as n,m which means “from line n to line m”), old is the search pattern, while young is the replacement text, and g indicates that the substitution should be performed on every occurrence of the search string in the file.
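
For instance, using an explicit range instead of %, the following command would perform the same substitution on lines 2 through 5 only.

:2,5s/old/young/g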

Alternatively, a c can be added to the end of the command to ask for confirmation before performing any substitution.

:%s/old/young/gc

Before replacing the original text with the new one, vi/m will present us with the following message.

Replace String in Vi

  1. y: perform the substitution (yes)
  2. n: skip this occurrence and go to the next one (no)
  3. a: perform the substitution in this and all subsequent instances of the pattern.
  4. q or Esc: quit substituting.
  5. l (lowercase L): perform this substitution and quit (last).
  6. Ctrl-e and Ctrl-y: Scroll down and up, respectively, to view the context of the proposed substitution.

Editing Multiple Files at a Time

Let’s type vim file1 file2 file3 in our command prompt.

# vim file1 file2 file3

First, vim will open file1. To switch to the next file (file2), we need to use the :n command. When we want to return to the previous file, :N will do the job.

In order to switch from file1 to file3:

a). The :buffers command will show a list of the files currently being edited.

:buffers

Edit Multiple Files

b). The command :buffer 3 (without the s at the end) will open file3 for editing.

In the image above, a pound sign (#) indicates that the file is currently open but in the background, while %a marks the file that is currently being edited. On the other hand, a blank space after the file number (3 in the above example) indicates that the file has not yet been opened.

Temporary vi buffers

To copy a couple of consecutive lines (let's say 4, for example) into a temporary buffer named a (not associated with a file) and place those lines in another part of the file later in the current vi session, we need to…

1. Press the ESC key to be sure we are in vi Command mode.

2. Place the cursor on the first line of the text we wish to copy.

3. Type “a4yy” to copy the current line, along with the 3 subsequent lines, into a buffer named a. We can continue editing our file – we do not need to insert the copied lines immediately.

4. When we reach the location for the copied lines, use “a before the p or P commands to insert the lines copied into the buffer named a:

  1. Type “ap to insert the lines copied into buffer a after the current line on which the cursor is resting.
  2. Type “aP to insert the lines copied into buffer a before the current line.

If we wish, we can repeat the above steps to insert the contents of buffer a in multiple places in our file. A temporary buffer, such as the one in this section, is discarded when the current window is closed.

Summary

As we have seen, vi/m is a powerful and versatile text editor for the CLI. Feel free to share your own tricks and comments below.

Reference Links
  1. About the LFCS
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCS exam

Update: If you want to extend your VI editor skills, then I would suggest you read following two guides that will guide you to some useful VI editor tricks and tips.

Part 1: Learn Useful ‘Vi/Vim’ Editor Tips and Tricks to Enhance Your Skills

Part 2: 8 Interesting ‘Vi/Vim’ Editor Tips and Tricks

LFCS: How to Archive/Compress Files & Directories, Setting File Attributes and Finding Files in Linux – Part 3

Recently, the Linux Foundation started the LFCS (Linux Foundation Certified Sysadmin) certification, a brand new program whose purpose is allowing individuals from all corners of the globe to have access to an exam, which if approved, certifies that the person is knowledgeable in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-level troubleshooting and analysis, plus the ability to decide when to escalate issues to engineering teams.

Linux Foundation Certified Sysadmin – Part 3

This post is Part 3 of a 10-tutorial series. In this part, we will cover how to archive/compress files and directories, set file attributes, and find files on the filesystem, as required for the LFCS certification exam.

Archiving and Compression Tools

A file archiving tool groups a set of files into a single standalone file that we can back up to several types of media, transfer across a network, or send via email. The most frequently used archiving utility in Linux is tar. When an archiving utility is used along with a compression tool, it reduces the disk space needed to store the same files and information.

The tar utility

tar bundles a group of files together into a single archive (commonly called a tar file or tarball). The name originally stood for tape archiver, but we must note that we can use this tool to archive data to any kind of writeable media (not only to tapes). Tar is normally used with a compression tool such as gzip, bzip2, or xz to produce a compressed tarball.

Basic syntax:
# tar [options] [pathname ...]

Where [pathname ...] represents the expression used to specify which files should be acted upon.

Most commonly used tar commands
Long option Abbreviation Description
 --create  c  Creates a tar archive
 --concatenate  A  Appends tar files to an archive
 --append  r  Appends files to the end of an archive
 --update  u  Appends files newer than copy in archive
 --diff or --compare  d  Find differences between archive and file system
 --file archive  f  Use archive file or device ARCHIVE
 --list  t  Lists the contents of a tarball
 --extract or --get  x  Extracts files from an archive
Normally used operation modifiers
Long option Abbreviation Description
 --directory dir  C  Changes to directory dir before performing operations
 --same-permissions  p  Preserves original permissions
 --verbose  v  Lists all files read or extracted. When this flag is used along with --list, the file sizes, ownership, and time stamps are displayed.
 --verify  W  Verifies the archive after writing it
 --exclude pattern  —  Excludes files matching pattern from the archive
 --exclude-from file  X  Excludes files listed in file
 --gzip or --gunzip  z  Processes an archive through gzip
 --bzip2  j  Processes an archive through bzip2
 --xz  J  Processes an archive through xz

Gzip is the oldest compression tool and provides the least compression, while bzip2 provides improved compression. In addition, xz is the newest and (usually) provides the best compression. The advantage of better compression comes at a price: the time it takes to complete the operation, and the system resources used during the process.

Normally, tar files compressed with these utilities have .gz, .bz2, or .xz extensions, respectively. In the following examples we will be using these files: file1, file2, file3, file4, and file5.

Grouping and compressing with gzip, bzip2 and xz

Group all the files in the current working directory and compress the resulting bundle with gzip, bzip2, and xz (please note the use of a wildcard pattern to specify which files should be included in the bundle – this is to prevent the archiving tool from including the tarballs created in previous steps).

# tar czf myfiles.tar.gz file[0-9]
# tar cjf myfiles.tar.bz2 file[0-9]
# tar cJf myfiles.tar.xz file[0-9]

Compress Multiple Files Using tar
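
To see the compression trade-off in practice, you can compare the sizes of the three tarballs just created (ls -lh prints sizes in human-readable units):

# ls -lh myfiles.tar.gz myfiles.tar.bz2 myfiles.tar.xz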

Listing the contents of a tarball and updating / appending files to the bundle

List the contents of a tarball and display the same information as a long directory listing. Note that update or append operations cannot be applied to compressed files directly (if you need to update or append a file to a compressed tarball, you need to uncompress the tar file and update / append to it, then compress again).

# tar tvf [tarball]

List Archive Content

Run any of the following commands:

# gzip -d myfiles.tar.gz	[#1] 
# bzip2 -d myfiles.tar.bz2	[#2] 
# xz -d myfiles.tar.xz 		[#3] 

Then

# tar --delete --file myfiles.tar file4 (deletes the file inside the tarball)
# tar --update --file myfiles.tar file4 (adds the updated file)

and

# gzip myfiles.tar		[ if you choose #1 above ]
# bzip2 myfiles.tar		[ if you choose #2 above ]
# xz myfiles.tar 		[ if you choose #3 above ]

Finally,

# tar tvf [tarball] #again

and compare the modification date and time of file4 with the same information as shown earlier.

Excluding file types

Suppose you want to perform a backup of users' home directories. Good sysadmin practice (which may also be specified by company policies) would be to exclude all video and audio files from backups.

Maybe your first approach would be to exclude from the backup all files with an .mp3 or .mp4 extension (or other extensions). But what if you have a clever user who changes the extension to .txt or .bkp? Then your approach won't do you much good. In order to detect an audio or video file, you need to check its file type with file. The following shell script will do the job.

#!/bin/bash
# Pass the directory to backup as the first argument.
DIR=$1
# Create the tarball and compress it, excluding files whose file type contains the string "mpeg".
# file prints a description of each file's type; grep -qi mpeg exits with status 0 (success) when it matches.
# The matching filenames are gathered via process substitution and fed to tar's -X (exclude-from) option.
tar -X <(for i in "$DIR"/*; do if file "$i" | grep -qi mpeg; then echo "$i"; fi; done) -cjf backupfile.tar.bz2 "$DIR"/*
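
Assuming the script above is saved as backup_nomedia.sh (a name chosen here for illustration), you could make it executable and run it against a directory like so:

# chmod +x backup_nomedia.sh
# ./backup_nomedia.sh /home/user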

Exclude Files in tar Archive

Restoring backups with tar preserving permissions

You can then restore the backup to the original user’s home directory (user_restore in this example), preserving permissions, with the following command.

# tar xjf backupfile.tar.bz2 --directory user_restore --same-permissions

Restore Files from tar Archive

Restore Files from Archive

Read Also:

  1. 18 tar Command Examples in Linux
  2. Dtrx – An Intelligent Archive Tool for Linux

Using find Command to Search for Files

The find command is used to search recursively through directory trees for files or directories that match certain characteristics, and can then either print the matching files or directories or perform other operations on the matches.

Normally, we will search by name, owner, group, type, permissions, date, and size.

Basic syntax:

# find [directory_to_search] [expression]

Finding files recursively according to Size

Find all regular files (-type f) in the current directory (.) and 2 subdirectories below (-maxdepth 3 includes the current working directory and 2 levels down) whose size (-size) is greater than 2 MB.

# find . -maxdepth 3 -type f -size +2M

Find Files by Size in Linux

Finding and deleting files that match a certain criteria

Files with 777 permissions are sometimes considered an open door to external attackers. Either way, it is not safe to let everyone do anything with a file. We will take a rather aggressive approach and delete them! (‘{}‘ + is used to “collect” the results of the search).

# find /home/user -perm 777 -exec rm '{}' +

Find Files with 777 Permission

Finding files per atime or mtime

Search for configuration files in /etc that have been accessed (-atime) or modified (-mtime) more (+180) or less (-180) than 6 months ago or exactly 6 months ago (180).

You can modify the following command as per your needs; for example:

# find /etc -iname "*.conf" -mtime -180 -print

Find Files by Modification Time
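
For instance, to list the configuration files accessed more than 6 months ago instead, you could swap the test and the sign:

# find /etc -iname "*.conf" -atime +180 -print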

Read Also: 35 Practical Examples of Linux ‘find’ Command

File Permissions and Basic Attributes

The first 10 characters in the output of ls -l are the file attributes. The first of these characters is used to indicate the file type:

  1. - : a regular file
  2. d : a directory
  3. l : a symbolic link
  4. c : a character device (which treats data as a stream of bytes, i.e. a terminal)
  5. b : a block device (which handles data in blocks, i.e. storage devices)

The next nine characters of the file attributes are called the file mode and represent the read (r), write (w), and execute (x) permissions of the file’s owner, the file’s group owner, and the rest of the users (commonly referred to as “the world”).

Whereas the read permission on a file allows it to be opened and read, the same permission on a directory allows its contents to be listed (if the execute permission is also set). In addition, the execute permission on a file allows it to be handled as a program and run, while on a directory it allows it to be entered with cd.

File permissions are changed with the chmod command, whose basic syntax is as follows:

# chmod [new_mode] file

Where new_mode is either an octal number or an expression that specifies the new permissions.

The octal number can be converted from its binary equivalent, which is calculated from the desired file permissions for the owner, the group, and the world, as follows:

The presence of a certain permission equals a power of 2 (r=2^2=4, w=2^1=2, x=2^0=1), while its absence equates to 0. For example:

Linux File Permissions

To set the file’s permissions as above in octal form, type:

# chmod 744 myfile

You can also set a file's mode using an expression that indicates the owner's rights with the letter u, the group owner's rights with the letter g, and the rest with o. All of these “individuals” can be represented at the same time with the letter a. Permissions are granted (or revoked) with the + or - signs, respectively.

Revoking execute permission for a shell script to all users

As we explained earlier, we can revoke a certain permission by prepending it with the minus sign and indicating whether it needs to be revoked for the owner, the group owner, or all users. The one-liner below can be interpreted as follows: Change mode for all (a) users, revoke (-) execute permission (x).

# chmod a-x backup.sh

Granting read, write, and execute permissions for a file to the owner and group owner, and read permissions for the world.

When we use a 3-digit octal number to set permissions for a file, the first digit indicates the permissions for the owner, the second digit for the group owner and the third digit for everyone else:

  1. Owner: (r=4 + w=2 + x=1 = 7)
  2. Group owner: (r=4 + w=2 + x=1 = 7)
  3. World: (r=4 + w=0 + x=0 = 4)
# chmod 774 myfile

In time, and with practice, you will be able to decide which method to change a file mode works best for you in each case. A long directory listing also shows the file’s owner and its group owner (which serve as a rudimentary yet effective access control to files in a system):

Linux File Listing

File ownership is changed with the chown command. The owner and the group owner can be changed at the same time or separately. Its basic syntax is as follows:

# chown user:group file

Where at least the user or the group needs to be present.

Few Examples

Changing the owner of a file to a certain user.

# chown gacanepa sent

Changing the owner and group of a file to a specific user:group pair.

# chown gacanepa:gacanepa TestFile

Changing only the group owner of a file to a certain group. Note the colon before the group’s name.

# chown :gacanepa email_body.txt
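
In addition, chown accepts the -R flag to change ownership recursively; a quick sketch, using a hypothetical directory:

# chown -R gacanepa:gacanepa /home/gacanepa/docs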

Conclusion

As a sysadmin, you need to know how to create and restore backups, how to find files in your system and change their attributes, along with a few tricks that can make your life easier and will prevent you from running into future issues.

I hope that the tips provided in the present article will help you to achieve that goal. Feel free to add your own tips and ideas in the comments section for the benefit of the community. Thanks in advance!

Reference Links
  1. About the LFCS
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCS exam

LFCS: Partitioning Storage Devices, Formatting Filesystems and Configuring Swap Partition – Part 4

Last August, the Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators to show, through a performance-based exam, that they can perform overall operational support of Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation – if needed – to other support teams.

Linux Foundation Certified Sysadmin – Part 4

Please be aware that Linux Foundation certifications are precise, totally performance-based, and available through an online portal anytime, anywhere. Thus, you no longer have to travel to an examination center to get the certifications you need to establish your skills and expertise.

This post is Part 4 of a 10-tutorial series. In this part, we will cover partitioning storage devices, formatting filesystems, and configuring swap partitions, as required for the LFCS certification exam.

Partitioning Storage Devices

Partitioning is a means to divide a single hard drive into one or more parts or “slices” called partitions. A partition is a section on a drive that is treated as an independent disk and which contains a single type of file system, whereas a partition table is an index that relates those physical sections of the hard drive to partition identifications.

In Linux, the traditional tool for managing MBR partitions (the standard up to ~2009) on IBM PC compatible systems is fdisk. For GPT partitions (~2010 and later) we will use gdisk. Each of these tools can be invoked by typing its name followed by a device name (such as /dev/sdb).

Managing MBR Partitions with fdisk

We will cover fdisk first.

# fdisk /dev/sdb

A prompt appears asking for the next operation. If you are unsure, you can press the ‘m‘ key to display the help contents.

fdisk Help Menu

In the above image, the most frequently used options are highlighted. At any moment, you can press ‘p‘ to display the current partition table.

Show Partition Table

The Id column shows the partition type (or partition id) that has been assigned by fdisk to the partition. A partition type serves as an indicator of the file system the partition contains or, in simple words, the way data will be accessed in that partition.

Please note that a comprehensive study of each partition type is out of the scope of this tutorial – as this series is focused on the LFCS exam, which is performance-based.

Some of the options used by fdisk are as follows:

You can list all the partition types that can be managed by fdisk by pressing the ‘l‘ option (lowercase l).

Press ‘d‘ to delete an existing partition. If more than one partition is found in the drive, you will be asked which one should be deleted.

Enter the corresponding number, and then press ‘w‘ (write modifications to partition table) to apply changes.

In the following example, we will delete /dev/sdb2, and then print (p) the partition table to verify the modifications.

fdisk Command Options

Press ‘n‘ to create a new partition, then ‘p‘ to indicate it will be a primary partition. Finally, you can accept all the default values (in which case the partition will occupy all the available space), or specify a size as follows.

Create New Partition

If the partition Id that fdisk chose is not the right one for our setup, we can press ‘t‘ to change it.

Change Partition Type

When you’re done setting up the partitions, press ‘w‘ to commit the changes to disk.

Save Partition Changes

Managing GPT Partitions with gdisk

In the following example, we will use /dev/sdb.

# gdisk /dev/sdb

We must note that gdisk can be used to create either MBR or GPT partitions.

Create GPT Partitions

The advantage of using GPT partitioning is that we can create up to 128 partitions in the same disk whose size can be up to the order of petabytes, whereas the maximum size for MBR partitions is 2 TB.

Note that most of the options in fdisk are the same in gdisk. For that reason, we will not go into detail about them, but here’s a screenshot of the process.

gdisk Command Options

Formatting Filesystems

Once we have created all the necessary partitions, we must create filesystems. To find out the list of filesystems supported on your system, run:

# ls /sbin/mk*

Check Filesystems Type

The type of filesystem that you should choose depends on your requirements. You should consider the pros and cons of each filesystem and its own set of features. Two important attributes to look for in a filesystem are:

  1. Journaling support, which allows for faster data recovery in the event of a system crash.
  2. Security Enhanced Linux (SELinux) support, as per the project wiki, “a security enhancement to Linux which allows users and administrators more control over access control”.

In our next example, we will create an ext4 filesystem (which supports both journaling and SELinux) labeled Tecmint on /dev/sdb1, using mkfs, whose basic syntax is:

# mkfs -t [filesystem] -L [label] device
or
# mkfs.[filesystem] -L [label] device
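
Following that syntax, the command for our example would be:

# mkfs -t ext4 -L Tecmint /dev/sdb1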

Create ext4 Filesystems

Creating and Using Swap Partitions

Swap partitions are necessary if we need our Linux system to have access to virtual memory, which is a section of the hard disk designated for use as memory when the main system memory (RAM) is all in use. For that reason, a swap partition may not be needed on systems with enough RAM to meet all their requirements; however, even in that case, it is up to the system administrator to decide whether to use a swap partition or not.

A simple rule of thumb to decide the size of a swap partition is as follows.

Swap should usually equal 2x physical RAM for up to 2 GB of physical RAM, and then an additional 1x physical RAM for any amount above 2 GB, but never less than 32 MB.

So, if:

M = Amount of RAM in GB, and S = Amount of swap in GB, then

If M < 2
	S = M *2
Else
	S = M + 2

Remember this is just a formula and that only you, as a sysadmin, have the final word as to the use and size of a swap partition.
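
As a minimal sketch, the rule of thumb above can be expressed in shell arithmetic (assuming M is a whole number of gigabytes passed as the first argument):

#!/bin/bash
# Suggest a swap size (in GB) from the amount of RAM (in GB).
M=$1
if [ "$M" -lt 2 ]; then
	S=$((M * 2))
else
	S=$((M + 2))
fi
echo "Suggested swap size: ${S} GB"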

To configure a swap partition, create a regular partition as demonstrated earlier with the desired size. Next, we need to add the following entry to the /etc/fstab file (X can be either b or c).

/dev/sdX1 swap swap sw 0 0

Finally, let’s format and enable the swap partition.

# mkswap /dev/sdX1
# swapon -v /dev/sdX1

To display a snapshot of the swap partition(s).

# cat /proc/swaps

To disable the swap partition.

# swapoff /dev/sdX1

For the next example, we'll use /dev/sdc1 (512 MB, for a system with 256 MB of RAM) to set up a partition with fdisk that we will use as swap, following the steps detailed above. Note that we will specify a fixed size in this case.

Create Swap Partition

Enable Swap Partition

Conclusion

Creating partitions (including swap) and formatting filesystems are crucial in your road to Sysadminship. I hope that the tips given in this article will guide you to achieve your goals. Feel free to add your own tips & ideas in the comments section below, for the benefit of the community.

Reference Links
  1. About the LFCS
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCS exam

LFCS: How to Mount/Unmount Local and Network (Samba & NFS) Filesystems in Linux – Part 5

The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is allowing individuals from all corners of the globe to get certified in basic to intermediate system administration tasks for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.

Linux Foundation Certified Sysadmin – Part 5

This post is Part 5 of a 10-tutorial series. In this part, we will explain how to mount/unmount local and network filesystems in Linux, as required for the LFCS certification exam.

Mounting Filesystems

Once a disk has been partitioned, Linux needs some way to access the data on the partitions. Unlike DOS or Windows (where this is done by assigning a drive letter to each partition), Linux uses a unified directory tree where each partition is mounted at a mount point in that tree.

A mount point is a directory that is used as a way to access the filesystem on the partition, and mounting the filesystem is the process of associating a certain filesystem (a partition, for example) with a specific directory in the directory tree.

In other words, the first step in managing a storage device is attaching the device to the file system tree. This task can be accomplished on a one-time basis by using tools such as mount (and then unmounted with umount) or persistently across reboots by editing the /etc/fstab file.

The mount command (without any options or arguments) shows the currently mounted filesystems.

# mount

Check Mounted Filesystem

In addition, mount is used to mount filesystems into the filesystem tree. Its standard syntax is as follows.

# mount -t type device dir -o options

This command instructs the kernel to mount the filesystem found on device (a partition, for example, that has been formatted with a filesystem type) at the directory dir, using all options. In this form, mount does not look in /etc/fstab for instructions.

If only a directory or device is specified, for example.

# mount /dir -o options
or
# mount device -o options

mount tries to find a mount point and, if it can't find any, then searches for a device (both cases in the /etc/fstab file), and finally attempts to complete the mount operation (which usually succeeds, except for the case when either the directory or the device is already being used, or when the user invoking mount is not root).

You will notice that every line in the output of mount has the following format.

device on directory type (options)

For example,

/dev/mapper/debian-home on /home type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)

Reads:

/dev/mapper/debian-home is mounted on /home, which has been formatted as ext4, with the following options: rw,relatime,user_xattr,barrier=1,data=ordered

Mount Options

Most frequently used mount options include.

  1. async: allows asynchronous I/O operations on the file system being mounted.
  2. auto: marks the file system as enabled to be mounted automatically using mount -a. It is the opposite of noauto.
  3. defaults: this option is an alias for async,auto,dev,exec,nouser,rw,suid. Note that multiple options must be separated by a comma without any spaces. If by accident you type a space between options, mount will interpret the subsequent text string as another argument.
  4. loop: Mounts an image (an .iso file, for example) as a loop device. This option can be used to simulate the presence of the disk’s contents in an optical media reader.
  5. noexec: prevents the execution of executable files on the particular filesystem. It is the opposite of exec.
  6. nouser: prevents any users (other than root) to mount and unmount the filesystem. It is the opposite of user.
  7. remount: mounts the filesystem again in case it is already mounted.
  8. ro: mounts the filesystem as read only.
  9. rw: mounts the file system with read and write capabilities.
  10. relatime: makes access time to files be updated only if atime is earlier than mtime.
  11. user_xattr: allow users to set and remove extended filesystem attributes.
Mounting a device with ro and noexec options
# mount -t ext4 /dev/sdg1 /mnt -o ro,noexec

In this case we can see that attempts to write a file to the mount point, or to run a binary file located inside it, fail with corresponding error messages.

# touch /mnt/myfile
# /mnt/bin/echo "Hi there"

Mount Device with Read-Only and Noexec Options

Mounting a device with default options

In the following scenario, we will try to write a file to our newly mounted device and run an executable file located within its filesystem tree using the same commands as in the previous example.

# mount -t ext4 /dev/sdg1 /mnt -o defaults

Mount Device

In this last case, it works perfectly.

Unmounting Devices

Unmounting a device (with the umount command) means finishing the writing of all remaining “in transit” data so that it can be safely removed. Note that if you try to remove a mounted device without properly unmounting it first, you run the risk of damaging the device itself or causing data loss.

That being said, in order to unmount a device, you must be “standing outside” its block device descriptor or mount point. In other words, your current working directory must be something other than the mount point. Otherwise, you will get a message saying that the device is busy.

Unmount Device

An easy way to “leave” the mount point is typing the cd command which, in the absence of arguments, will take us to our current user's home directory, as shown above.
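
For instance, assuming the device from the previous examples is still mounted on /mnt, the complete sequence would be:

# cd
# umount /mnt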

Mounting Common Networked Filesystems

The two most frequently used network file systems are SMB (which stands for “Server Message Block”) and NFS (“Network File System”). Chances are you will use NFS if you need to set up a share for Unix-like clients only, and will opt for Samba if you need to share files with Windows-based clients and perhaps other Unix-like clients as well.

Read Also

  1. Setup Samba Server in RHEL/CentOS and Fedora
  2. Setting up NFS (Network File System) on RHEL/CentOS/Fedora and Debian/Ubuntu

The following steps assume that Samba and NFS shares have already been set up in the server with IP 192.168.0.10 (please note that setting up an NFS share is one of the competencies required for the LFCE exam, which we will cover after the present series).

Mounting a Samba share on Linux

Step 1: Install the samba-client, samba-common, and cifs-utils packages on Red Hat and Debian based distributions.

# yum update && yum install samba-client samba-common cifs-utils
# aptitude update && aptitude install samba-client samba-common cifs-utils

Then run the following command to look for available samba shares in the server.

# smbclient -L 192.168.0.10

And enter the password for the root account in the remote machine.

Mount Samba Share

In the above image we have highlighted the share that is ready for mounting on our local system. You will need a valid samba username and password on the remote server in order to access it.

Step 2: When mounting a password-protected network share, it is not a good idea to write your credentials in the /etc/fstab file. Instead, you can store them in a hidden file somewhere with permissions set to 600, like so.

# mkdir /media/samba
# echo "username=samba_username" > /media/samba/.smbcredentials
# echo "password=samba_password" >> /media/samba/.smbcredentials
# chmod 600 /media/samba/.smbcredentials

Step 3: Then add the following line to /etc/fstab file.

//192.168.0.10/gacanepa /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0

Step 4: You can now mount your samba share, either manually (mount //192.168.0.10/gacanepa) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.

# mount -a

Mount Password-Protected Samba Share

Mounting a NFS share on Linux

Step 1: Install the required NFS client packages: nfs-utils on Red Hat based distributions, and nfs-common on Debian and derivatives.

# yum update && yum install nfs-utils nfs-utils-lib
# aptitude update && aptitude install nfs-common

Step 2: Create a mounting point for the NFS share.

# mkdir /media/nfs

Step 3: Add the following line to /etc/fstab file.

192.168.0.10:/NFS-SHARE /media/nfs nfs defaults 0 0

Step 4: You can now mount your nfs share, either manually (mount 192.168.0.10:/NFS-SHARE) or by rebooting your machine so as to apply the changes made in /etc/fstab permanently.

Mount NFS Share

Mounting Filesystems Permanently

As shown in the previous two examples, the /etc/fstab file controls how Linux provides access to disk partitions and removable media devices and consists of a series of lines that contain six fields each; the fields are separated by one or more spaces or tabs. A line that begins with a hash mark (#) is a comment and is ignored.

Each line has the following format.

<file system> <mount point> <type> <options> <dump> <pass>

Where:

  1. <file system>: The first column specifies the mount device. Most distributions now specify partitions by their labels or UUIDs. This practice can help reduce problems if partition numbers change.
  2. <mount point>: The second column specifies the mount point.
  3. <type>: The file system type code is the same as the type code used to mount a filesystem with the mount command. A file system type code of auto lets the kernel auto-detect the filesystem type, which can be a convenient option for removable media devices. Note that this option may not be available for all filesystems out there.
  4. <options>: One (or more) mount option(s).
  5. <dump>: You will most likely leave this set to 0 (otherwise set it to 1) to disable the dump utility from backing up the filesystem (the dump program was once a common backup tool, but it is much less popular today).
  6. <pass>: This column specifies whether the integrity of the filesystem should be checked at boot time with fsck. A 0 means that fsck should not check the filesystem. The higher the number, the lower the priority. Thus, the root partition will most likely have a value of 1, while all others that should be checked should have a value of 2.
Mount Examples

1. To mount a partition with label TECMINT at boot time with rw and noexec attributes, you should add the following line in /etc/fstab file.

LABEL=TECMINT /mnt ext4 rw,noexec 0 0

2. If you want the contents of a disk in your DVD drive be available at boot time.

/dev/sr0    /media/cdrom0    iso9660    ro,user,noauto    0    0

Where /dev/sr0 is your DVD drive.

Summary

You can rest assured that mounting and unmounting local and network filesystems from the command line will be part of your day-to-day responsibilities as sysadmin. You will also need to master /etc/fstab. I hope that you have found this article useful to help you with those tasks. Feel free to add your comments (or ask questions) below and to share this article through your network social profiles.

Reference Links
  1. About the LFCS
  2. Why get a Linux Foundation Certification?
  3. Register for the LFCS exam

LFCS: Assembling Partitions as RAID Devices – Creating & Managing System Backups – Part 6

Recently, the Linux Foundation launched the LFCS (Linux Foundation Certified Sysadmin) certification, a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of performing overall operational support on Linux systems: system support, first-level diagnosing and monitoring, plus issue escalation, when required, to other support teams.

Linux Foundation Certified Sysadmin – Part 6

This post is Part 6 of a 10-tutorial series. In this part, we will explain how to assemble partitions as RAID devices and how to create and manage system backups, as required for the LFCS certification exam.

Understanding RAID

The technology known as Redundant Array of Independent Disks (RAID) is a storage solution that combines multiple hard disks into a single logical unit to provide redundancy of data and/or improve performance in read / write operations to disk.

However, the actual fault tolerance and disk I/O performance depend on how the hard disks are set up to form the disk array. Depending on the available devices and the fault tolerance / performance needs, different RAID levels are defined. You can refer to the RAID series here on Tecmint.com for a more detailed explanation of each RAID level.

RAID Guide: What is RAID, Concepts of RAID and RAID Levels Explained

Our tool of choice for creating, assembling, managing, and monitoring our software RAIDs is called mdadm (short for multiple disks admin).

---------------- Debian and Derivatives ----------------
# aptitude update && aptitude install mdadm 
---------------- Red Hat and CentOS based Systems ----------------
# yum update && yum install mdadm
---------------- On openSUSE ----------------
# zypper refresh && zypper install mdadm

Assembling Partitions as RAID Devices

The process of assembling existing partitions as RAID devices consists of the following steps.

1. Create the array using mdadm

If one of the partitions has been formatted previously, or has been a part of another RAID array previously, you will be prompted to confirm the creation of the new array. Assuming you have taken the necessary precautions to avoid losing important data that may have resided in them, you can safely type y and press Enter.

# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1

Creating RAID Array

2. Check the array creation status

In order to check the array creation status, you will use the following commands – regardless of the RAID type. These commands are just as valid when creating a RAID0 (as shown above) as when you are in the process of setting up a RAID5, as shown in the image below.

# cat /proc/mdstat
or 
# mdadm --detail /dev/md0	[More detailed summary]

Check RAID Array Status

3. Format the RAID Device

Format the device with a filesystem as per your needs / requirements, as explained in Part 4 of this series.

4. Monitor RAID Array Service

Instruct the monitoring service to “keep an eye” on the array. Add the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf (Debian and derivatives) or /etc/mdadm.conf (CentOS / openSUSE), like so.

# mdadm --detail --scan

Monitor RAID Array
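
In practice, you can append that output directly to the corresponding configuration file; for example, on Debian and derivatives:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf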

# mdadm --assemble --scan 	[Assemble the array]

To ensure the service starts on system boot, run the following commands as root.

Debian and Derivatives

On Debian and derivatives, the service should start running on boot by default; to make sure, run:

# update-rc.d mdadm defaults

Edit the /etc/default/mdadm file and add the following line.

AUTOSTART=true
On CentOS and openSUSE (systemd-based)
# systemctl start mdmonitor
# systemctl enable mdmonitor
On CentOS and openSUSE (SysVinit-based)
# service mdmonitor start
# chkconfig mdmonitor on
5. Check RAID Disk Failure

In RAID levels that support redundancy, replace failed drives when needed. When a device in the disk array becomes faulty, a rebuild automatically starts only if there was a spare device added when we first created the array.

Check RAID Faulty Disk

Otherwise, we need to manually attach an extra physical drive to our system and run.

# mdadm /dev/md0 --add /dev/sdX1

Where /dev/md0 is the array that experienced the issue and /dev/sdX1 is the new device.
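
Before adding the replacement, you can also mark the failed member as faulty and pull it out of the array with mdadm; a short sketch, where /dev/sdY1 stands for the failed device:

# mdadm /dev/md0 --fail /dev/sdY1
# mdadm /dev/md0 --remove /dev/sdY1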

6. Disassemble a working array

You may have to do this if you need to create a new array using the devices – (Optional Step).

# mdadm --stop /dev/md0 				#  Stop the array
# mdadm --remove /dev/md0 			# Remove the RAID device
# mdadm --zero-superblock /dev/sdX1 	# Overwrite the existing md superblock with zeroes
7. Set up mail alerts

You can configure a valid email address or system account to send alerts to (make sure you have this line in mdadm.conf). – (Optional Step)

MAILADDR root

In this case, all alerts that the RAID monitoring daemon collects will be sent to the local root account's mail box. One such alert looks like the following.

Note: This event is related to the example in STEP 5, where a device was marked as faulty and the spare device was automatically built into the array by mdadm. Thus, we “ran out” of healthy spare devices and we got the alert.

RAID Monitoring Alerts

Understanding RAID Levels

RAID 0

The total array size is n times the size of the smallest partition, where n is the number of independent disks in the array (you will need at least two drives). Run the following command to assemble a RAID 0 array using partitions /dev/sdb1 and /dev/sdc1.

# mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1

Common uses: Setups that support real-time applications where performance is more important than fault-tolerance.

RAID 1 (aka Mirroring)

The total array size equals the size of the smallest partition (you will need at least two drives). Run the following command to assemble a RAID 1 array using partitions /dev/sdb1 and /dev/sdc1.

# mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

Common uses: Installation of the operating system or important subdirectories, such as /home.

RAID 5 (aka drives with Parity)

The total array size will be (n – 1) times the size of the smallest partition. The “lost” space in (n-1) is used for parity (redundancy) calculation (you will need at least three drives).

Note that you can specify a spare device (/dev/sde1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 5 array using partitions /dev/sdb1, /dev/sdc1, and /dev/sdd1, with /dev/sde1 as a spare.

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1

Common uses: Web and file servers.

RAID 6 (aka drives with double Parity)

The total array size will be (n*s)-2*s, where n is the number of independent disks in the array and s is the size of the smallest disk. Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs.

Run the following command to assemble a RAID 6 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1, with /dev/sdf1 as a spare.

# mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1

Common uses: File and backup servers with large capacity and high availability requirements.

RAID 1+0 (aka stripe of mirrors)

The total array size is computed based on the formulas for RAID 0 and RAID 1, since RAID 1+0 is a combination of both. First, calculate the size of each mirror and then the size of the stripe.

Note that you can specify a spare device (/dev/sdf1 in this case) to replace a faulty part when an issue occurs. Run the following command to assemble a RAID 1+0 array using partitions /dev/sdb1, /dev/sdc1, /dev/sdd1, and /dev/sde1, with /dev/sdf1 as a spare.

# mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1 --spare-devices=1 /dev/sdf1

Common uses: Database and application servers that require fast I/O operations.
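Whichever level you choose, it never hurts to verify the newly created array. Here is a quick check, assuming the array was assembled as /dev/md0 as in the examples above.

# cat /proc/mdstat 			# Summary of all active arrays
# mdadm --detail /dev/md0 	# Detailed information about /dev/md0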

Creating and Managing System Backups

It never hurts to remember that RAID with all its bounties IS NOT A REPLACEMENT FOR BACKUPS! Write it 1000 times on the chalkboard if you need to, but make sure you keep that idea in mind at all times. Before we begin, we must note that there is no one-size-fits-all solution for system backups, but here are some things that you do need to take into account while planning a backup strategy.

  1. What do you use your system for? (Desktop or server? If the latter case applies, what are the most critical services – whose configuration would be a real pain to lose?)
  2. How often do you need to take backups of your system?
  3. What is the data (e.g. files / directories / database dumps) that you want to back up? You may also want to consider if you really need to back up huge files (such as audio or video files).
  4. Where (meaning physical place and media) will those backups be stored?
Backing Up Your Data

Method 1: Back up entire drives with the dd command. You can either back up an entire hard disk or a partition by creating an exact image at any point in time. Note that this works best when the device is offline, meaning it’s not mounted and there are no processes accessing it for I/O operations.

The downside of this backup approach is that the image will have the same size as the disk or partition, even when the actual data occupies a small percentage of it. For example, if you want to image a partition of 20 GB that is only 10% full, the image file will still be 20 GB in size. In other words, it’s not only the actual data that gets backed up, but the entire partition itself. You may consider using this method if you need exact backups of your devices.

Creating an image file out of an existing device
# dd if=/dev/sda of=/system_images/sda.img
OR
--------------------- Alternatively, you can compress the image file ---------------------
# dd if=/dev/sda | gzip -c > /system_images/sda.img.gz
Restoring the backup from the image file
# dd if=/system_images/sda.img of=/dev/sda
OR
--------------------- Depending on your choice while creating the image ---------------------
# gzip -dc /system_images/sda.img.gz | dd of=/dev/sda

Method 2: Back up certain files / directories with the tar command – already covered in Part 3 of this series. You may consider using this method if you need to keep copies of specific files and directories (configuration files, users’ home directories, and so on).

Method 3: Synchronize files with the rsync command. Rsync is a versatile remote (and local) file-copying tool. If you need to back up and synchronize your files to/from network drives, rsync is the way to go.

Whether you’re synchronizing two local directories or local <-> remote directories mounted on the local filesystem, the basic syntax is the same.

Synchronizing two local directories or local <-> remote directories mounted on the local filesystem
# rsync -av source_directory destination_directory

Where -a recurses into subdirectories (if they exist), preserving symbolic links, timestamps, permissions, and original owner / group, and -v enables verbose output.

rsync Synchronizing Files

In addition, if you want to increase the security of the data transfer over the wire, you can use rsync over ssh.

Synchronizing local → remote directories over ssh
# rsync -avzhe ssh backups root@remote_host:/remote_directory/

This example will synchronize the local backups directory with the contents of /remote_directory on the remote host.

Where the -h option shows file sizes in human-readable format, and the -e flag is used to indicate an ssh connection.

rsync Synchronize Remote Files

Synchronizing remote → local directories over ssh.

In this case, switch the source and destination directories from the previous example.

# rsync -avzhe ssh root@remote_host:/remote_directory/ backups 

Please note that these are only 3 examples (the most frequent cases you’re likely to run into) of the use of rsync. More examples and usages of the rsync command can be found in the following article.

Read Also: 10 rsync Commands to Sync Files in Linux

Summary

As a sysadmin, you need to ensure that your systems perform as well as possible. If you’re well prepared, and if the integrity of your data is well supported by a storage technology such as RAID and regular system backups, you’ll be safe.

If you have questions, comments, or further ideas on how this article can be improved, feel free to speak out below. In addition, please consider sharing this series through your social network profiles.

LFCS: Managing System Startup Process and Services (SysVinit, Systemd and Upstart) – Part 7

A couple of months ago, the Linux Foundation announced the LFCS (Linux Foundation Certified Sysadmin) certification, an exciting new program whose aim is allowing individuals from all ends of the world to get certified in performing basic to intermediate system administration tasks on Linux systems. This includes supporting already running systems and services, along with first-hand problem-finding and analysis, plus the ability to decide when to raise issues to engineering teams.

Linux Foundation Certified Sysadmin – Part 7

The following video gives a brief introduction to The Linux Foundation Certification Program.

This post is Part 7 of a 10-tutorial series. In this part, we will explain how to manage the Linux system startup process and services, as required for the LFCS certification exam.

Managing the Linux Startup Process

The boot process of a Linux system consists of several phases, each represented by a different component. The following diagram briefly summarizes the boot process and shows all the main components involved.

Linux Boot Process

When you press the Power button on your machine, the firmware that is stored in an EEPROM chip on the motherboard initializes the POST (Power-On Self Test) to check on the state of the system’s hardware resources. When the POST is finished, the firmware then searches for and loads the 1st stage boot loader, located in the MBR or in the EFI partition of the first available disk, and gives control to it.

MBR Method

The MBR is located in the first sector of the disk marked as bootable in the BIOS settings and is 512 bytes in size.

  1. First 446 bytes: The bootloader contains both executable code and error message text.
  2. Next 64 bytes: The Partition table contains a record for each of four partitions (primary or extended). Among other things, each record indicates the status (active / not active), size, and start / end sectors of each partition.
  3. Last 2 bytes: The magic number serves as a validation check of the MBR.

The following command performs a backup of the MBR (in this example, /dev/sda is the first hard disk). The resulting file, mbr.bkp can come in handy should the partition table become corrupt, for example, rendering the system unbootable.

Of course, in order to use it later if the need arises, we will need to save it and store it somewhere else (like a USB drive, for example). That file will help us restore the MBR and will get us going once again if and only if we do not change the hard drive layout in the meanwhile.

Backup MBR
# dd if=/dev/sda of=mbr.bkp bs=512 count=1

Backup MBR in Linux

Restoring MBR
# dd if=mbr.bkp of=/dev/sda bs=512 count=1

Restore MBR in Linux

EFI/UEFI Method

For systems using the EFI/UEFI method, the UEFI firmware reads its settings to determine which UEFI application is to be launched and from where (i.e., in which disk and partition the EFI partition is located).

Next, the 2nd stage boot loader (aka boot manager) is loaded and run. GRUB (the GRand Unified Bootloader) is the most frequently used boot manager in Linux. One of two distinct versions can be found on most systems used today.

  1. GRUB legacy configuration file: /boot/grub/menu.lst (older distributions, not supported by EFI/UEFI firmware).
  2. GRUB2 configuration file: most likely, /etc/default/grub.

Although the objectives of the LFCS exam do not explicitly request knowledge about GRUB internals, if you’re brave and can afford to mess up your system (you may want to try it first on a virtual machine, just in case), you need to run the following command as root after modifying GRUB’s configuration, in order to apply the changes.

# update-grub


Basically, GRUB loads the default kernel and the initrd or initramfs image. In a few words, initrd or initramfs help to perform the hardware detection, the kernel module loading and the device discovery necessary to get the real root filesystem mounted.

Once the real root filesystem is up, the kernel executes the system and service manager (init or systemd, whose process identification or PID is always 1) to begin the normal user-space boot process in order to present a user interface.

Both init and systemd are daemons (background processes) that manage other daemons, as the first service to start (during boot) and the last service to terminate (during shutdown).

Systemd and Init

Starting Services (SysVinit)

The concept of runlevels in Linux specifies different ways to use a system by controlling which services are running. In other words, a runlevel controls what tasks can be accomplished in the current execution state (and which ones cannot).

Traditionally, this startup process was performed based on conventions that originated with System V UNIX, with the system executing collections of scripts that start and stop services as the machine entered a specific runlevel (which, in other words, is a different mode of running the system).

Within each runlevel, individual services can be set to run, or to be shut down if running. Latest versions of some major distributions are moving away from the System V standard in favour of a rather new service and system manager called systemd (which stands for system daemon), but usually support sysv commands for compatibility purposes. This means that you can run most of the well-known sysv init tools in a systemd-based distribution.

Read Also: Why ‘systemd’ replaces ‘init’ in Linux

Besides starting the system process, init looks at the /etc/inittab file to decide what runlevel must be entered.

Runlevel Description
0  Halt the system. Runlevel 0 is a special transitional state used to shut down the system quickly.
1  Also aliased to s, or S, this runlevel is sometimes called maintenance mode. What services, if any, are started at this runlevel varies by distribution. It’s typically used for low-level system maintenance that may be impaired by normal system operation.
2  Multiuser. On Debian systems and derivatives, this is the default runlevel, and includes (if available) a graphical login. On Red-Hat based systems, this is multiuser mode without networking.
3  On Red-Hat based systems, this is the default multiuser mode, which runs everything except the graphical environment. This runlevel and levels 4 and 5 usually are not used on Debian-based systems.
4  Typically unused by default and therefore available for customization.
5  On Red-Hat based systems, full multiuser mode with GUI login. This runlevel is like level 3, but with a GUI login available.
6  Reboot the system.

To switch between runlevels, we can simply issue a runlevel change using the init command: init N (where N is one of the runlevels listed above). Please note that this is not the recommended way of taking a running system to a different runlevel because it gives no warning to existing logged-in users (thus causing them to lose work and processes to terminate abnormally).

Instead, the shutdown command should be used to restart the system (which first sends a warning message to all logged-in users and blocks any further logins; it then signals init to switch runlevels); however, the default runlevel (the one the system will boot to) must be edited in the /etc/inittab file first.

For that reason, follow these steps to properly switch between runlevels. As root, look for the following line in /etc/inittab.

id:2:initdefault:

and change the number 2 to the desired runlevel with your preferred text editor, such as vim (described in How to use vi/vim editor in Linux – Part 2 of this series).

Next, run as root.

# shutdown -r now

That last command will restart the system, causing it to start in the specified runlevel during next boot, and will run the scripts located in the /etc/rc[runlevel].d directory in order to decide which services should be started and which ones should not. For example, for runlevel 2 in the following system.

Change Runlevels in Linux

Manage Services using chkconfig

To enable or disable system services on boot, we will use the chkconfig command in CentOS / openSUSE and sysv-rc-conf in Debian and derivatives. This tool can also show us the preconfigured state of a service for a particular runlevel.

Read Also: How to Stop and Disable Unwanted Services in Linux

Listing the runlevel configuration for a service.

# chkconfig --list [service name]
# chkconfig --list postfix
# chkconfig --list mysqld

Listing Runlevel Configuration

In the above image we can see that postfix is set to start when the system enters runlevels 2 through 5, whereas mysqld will be running by default for runlevels 2 through 4. Now suppose that this is not the expected behaviour.

For example, we need to turn on mysqld for runlevel 5 as well, and turn off postfix for runlevels 4 and 5. Here’s what we would do in each case (run the following commands as root).

Enabling a service for a particular runlevel
# chkconfig --level [level(s)] service on
# chkconfig --level 5 mysqld on
Disabling a service for particular runlevels
# chkconfig --level [level(s)] service off
# chkconfig --level 45 postfix off

Enable Disable Services in Linux

We will now perform similar tasks in a Debian-based system using sysv-rc-conf.

Manage Services using sysv-rc-conf

Configuring a service to start automatically on a specific runlevel and preventing it from starting on all others.

1. Let’s use the following command to see in which runlevels mdadm is configured to start.

# ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'

Check Runlevel of Service Running

2. We will use sysv-rc-conf to prevent mdadm from starting on all runlevels except 2. Just check or uncheck (with the space bar) as desired (you can move up, down, left, and right with the arrow keys).

# sysv-rc-conf

SysV Runlevel Config

Then press q to quit.

3. We will restart the system and run the command from STEP 1 again.

# ls -l /etc/rc[0-6].d | grep -E 'rc[0-6]|mdadm'

Verify Service Runlevel

In the above image we can see that mdadm is configured to start only on runlevel 2.

What About systemd?

systemd is another service and system manager that is being adopted by several major Linux distributions. It aims to allow more processing to be done in parallel during system startup (unlike sysvinit, which always tends to be slower because it starts processes one at a time, checks whether one depends on another, and waits for daemons to launch so more services can start), and to serve as dynamic resource management for a running system.

Thus, services are started when needed (to avoid consuming system resources) instead of being launched without a solid reason during boot.

To view the status of all the processes running on your system, both systemd native and SysV services, run the following command.

# systemctl

Check All Running Processes in Linux

The LOAD column shows whether the unit definition (refer to the UNIT column, which shows the service or anything maintained by systemd) was properly loaded, while the ACTIVE and SUB columns show the current status of such unit.

Displaying information about the current status of a service

When the ACTIVE column indicates that a unit’s status is other than active, we can check what happened using.

# systemctl status [unit]

For example, in the image above, media-samba.mount is in a failed state. Let’s run.

# systemctl status media-samba.mount

Check Linux Service Status

We can see that media-samba.mount failed because the mount process on host dev1 was unable to find the network share at //192.168.0.10/gacanepa.

Starting or Stopping Services

Once the network share //192.168.0.10/gacanepa becomes available, let’s try to start, then stop, and finally restart the unit media-samba.mount. After performing each action, let’s run systemctl status media-samba.mount to check on its status.

# systemctl start media-samba.mount
# systemctl status media-samba.mount
# systemctl stop media-samba.mount
# systemctl restart media-samba.mount
# systemctl status media-samba.mount

Starting and Stopping Services

Enabling or disabling a service to start during boot

Under systemd you can enable or disable a service so that it starts (or doesn’t start) at boot.

# systemctl enable [service] 		# enable a service 
# systemctl disable [service] 		# prevent a service from starting at boot

The process of enabling or disabling a service to start automatically on boot consists of adding or removing symbolic links in the /etc/systemd/system/multi-user.target.wants directory.
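For example, you can list the contents of that directory to see which services are currently enabled for the multi-user target (the output will vary from system to system).

# ls -l /etc/systemd/system/multi-user.target.wants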

Enabling Disabling Services

Alternatively, you can find out a service’s current status (enabled or disabled) with the command.

# systemctl is-enabled [service]

For example,

# systemctl is-enabled postfix.service

In addition, you can reboot or shutdown the system with.

# systemctl reboot
# systemctl poweroff

Upstart

Upstart is an event-based replacement for the /sbin/init daemon and was born out of the need for starting services only when they are needed (also supervising them while they are running), and handling events as they occur, thus surpassing the classic, dependency-based sysvinit system.

It was originally developed for the Ubuntu distribution, but it is also used in Red Hat Enterprise Linux 6.0. Though it was intended to be suitable for deployment in all Linux distributions as a replacement for sysvinit, in time it was overshadowed by systemd. On February 14, 2014, Mark Shuttleworth (founder of Canonical Ltd.) announced that future releases of Ubuntu would use systemd as the default init daemon.

Because SysV startup scripts have been so common for so long, a large number of software packages include SysV startup scripts. To accommodate such packages, Upstart provides a compatibility mode: it runs SysV startup scripts in the usual locations (/etc/rc.d/rc?.d, /etc/init.d/rc?.d, /etc/rc?.d, or a similar location). Thus, if we install a package that doesn’t yet include an Upstart configuration script, it should still launch in the usual way.

Furthermore, if we have installed utilities such as chkconfig, we should be able to use them to manage our SysV-based services just as we would on sysvinit-based systems.

Upstart scripts also support starting or stopping services based on a wider variety of actions than do SysV startup scripts; for example, Upstart can launch a service whenever a particular hardware device is attached.

A system that uses Upstart and its native scripts exclusively replaces the /etc/inittab file and the runlevel-specific SysV startup script directories with .conf scripts in the /etc/init directory.

These *.conf scripts (also known as job definitions) generally consist of the following:

    1. Description of the process.
    2. Runlevels where the process should run or events that should trigger it.
    3. Runlevels where process should be stopped or events that should stop it.
    4. Options.
    5. Command to launch the process.

For example,

# My test service - Upstart script demo
description "Here goes the description of 'My test service'"
author "Dave Null <dave.null@example.com>"
# Stanzas

#
# Stanzas define when and how a process is started and stopped
# See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on runlevel [016]
# Automatically restart process in case of crash
respawn
# Specify working directory
chdir /home/dave/myfiles
# Specify the process/command (add arguments if needed) to run
exec bash backup.sh arg1 arg2

To apply changes, you will need to tell Upstart to reload its configuration.

# initctl reload-configuration

Then start your job by typing the following command.

$ sudo start yourjobname

Where yourjobname is the name of the job that was added earlier with the yourjobname.conf script.
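You can also verify whether the job is running with Upstart’s initctl tool.

$ sudo initctl status yourjobname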

A more complete and detailed reference guide for Upstart is available in the project’s web site under the menu “Cookbook”.

Summary

Knowledge of the Linux boot process is necessary to help you with troubleshooting tasks, as well as with adapting the computer’s performance and running services to your needs.

In this article we have analyzed what happens from the moment when you press the Power switch to turn on the machine until you get a fully operational user interface. I hope you have learned as much reading it as I did while putting it together. Feel free to leave your comments or questions below. We always look forward to hearing from our readers!

Managing Users & Groups, File Permissions & Attributes and Enabling sudo Access on Accounts – Part 8

Last August, the Linux Foundation started the LFCS certification (Linux Foundation Certified Sysadmin), a brand new program whose purpose is to allow individuals everywhere and anywhere to take an exam in order to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus intelligent decision-making to be able to decide when it’s necessary to escalate issues to higher level support teams.

Linux Foundation Certified Sysadmin – Part 8

Please have a quick look at the following video that describes an introduction to the Linux Foundation Certification Program.

This article is Part 8 of a 10-tutorial long series. In this section, we will guide you on how to manage users and group permissions on a Linux system, as required for the LFCS certification exam.

Since Linux is a multi-user operating system (in that it allows multiple users on different computers or terminals to access a single system), you will need to know how to perform effective user management: how to add, edit, suspend, or delete user accounts, along with granting them the necessary permissions to do their assigned tasks.

Adding User Accounts

To add a new user account, you can run either of the following two commands as root.

# adduser [new_account]
# useradd [new_account]

When a new user account is added to the system, the following operations are performed.

1. His/her home directory is created (/home/username by default).

2. The following hidden files are copied into the user’s home directory, and will be used to provide environment variables for his/her user session.

.bash_logout
.bash_profile
.bashrc

3. A mail spool is created for the user at /var/spool/mail/username.

4. A group is created and given the same name as the new user account.

Understanding /etc/passwd

The full account information is stored in the /etc/passwd file. This file contains a record per system user account and has the following format (fields are delimited by a colon).

[username]:[x]:[UID]:[GID]:[Comment]:[Home directory]:[Default shell]
  1. Fields [username] and [Comment] are self explanatory.
  2. The x in the second field indicates that the account is protected by a shadowed password (in /etc/shadow), which is needed to log on as [username].
  3. The [UID] and [GID] fields are integers that represent the User IDentification and the primary Group IDentification to which [username] belongs, respectively.
  4. The [Home directory] indicates the absolute path to [username]’s home directory, and
  5. The [Default shell] is the shell that will be made available to this user when he or she logs in to the system.
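For reference, this is what a record looks like for a hypothetical user tecmint (the UID, GID, and other fields below are only an illustration).

tecmint:x:1000:1000:Tecmint User:/home/tecmint:/bin/bash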
Understanding /etc/group

Group information is stored in the /etc/group file. Each record has the following format.

[Group name]:[Group password]:[GID]:[Group members]
  1. [Group name] is the name of the group.
  2. An x in [Group password] indicates group passwords are not being used.
  3. [GID]: same as in /etc/passwd.
  4. [Group members]: a comma separated list of users who are members of [Group name].
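For reference, this is what a record looks like for a hypothetical group common_group whose members are user1, user2, and user3.

common_group:x:1001:user1,user2,user3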

Add User Accounts in Linux

After adding an account, you can edit the following information (to name a few fields) using the usermod command, whose basic syntax is as follows.

# usermod [options] [username]
Setting the expiry date for an account

Use the --expiredate flag followed by a date in YYYY-MM-DD format.

# usermod --expiredate 2014-10-30 tecmint
Adding the user to supplementary groups

Use the combined -aG, or --append --groups options, followed by a comma separated list of groups.

# usermod --append --groups root,users tecmint
Changing the default location of the user’s home directory

Use the -d, or --home options, followed by the absolute path to the new home directory.

# usermod --home /tmp tecmint
Changing the shell the user will use by default

Use --shell, followed by the path to the new shell.

# usermod --shell /bin/sh tecmint
Displaying the groups a user is a member of
# groups tecmint
# id tecmint

Now let’s execute all the above commands in one go.

# usermod --expiredate 2014-10-30 --append --groups root,users --home /tmp --shell /bin/sh tecmint

usermod Command Examples

In the example above, we set the expiry date of the tecmint user account to October 30th, 2014. We also add the account to the root and users groups. Finally, we set sh as its default shell and change the location of the home directory to /tmp:

Read Also:

  1. 15 useradd Command Examples in Linux
  2. 15 usermod Command Examples in Linux

For existing accounts, we can also do the following.

Disabling account by locking password

Use the -L (uppercase L) or the --lock option to lock a user’s password.

# usermod --lock tecmint
Unlocking user password

Use the -u or the --unlock option to unlock a user’s password that was previously blocked.

# usermod --unlock tecmint

Lock User in Linux


Deleting user accounts

You can delete an account (along with its home directory, if it’s owned by the user, and all the files residing therein, and also the mail spool) using the userdel command with the --remove option.

# userdel --remove [username]

Group Management

Every time a new user account is added to the system, a group with the same name is created with the username as its only member. Other users can be added to the group later. One of the purposes of groups is to implement a simple access control to files and other system resources by setting the right permissions on those resources.

For example, suppose you have the following users.

  1. user1 (primary group: user1)
  2. user2 (primary group: user2)
  3. user3 (primary group: user3)

All of them need read and write access to a file called common.txt located somewhere on your local system, or maybe on a network share that user1 has created. You may be tempted to do something like,

# chmod 660 common.txt
OR
# chmod u=rw,g=rw,o= common.txt [notice the space between the last equal sign and the file name]

However, this will only provide read and write access to the owner of the file and to those users who are members of the group owner of the file (user1 in this case). Again, you may be tempted to add user2 and user3 to group user1, but that will also give them access to the rest of the files owned by user user1 and group user1.

This is where groups come in handy, and here’s what you should do in a case like this.

Creating a new group for read and write access to files that need to be accessed by several users

Run the following series of commands to achieve the goal.

# groupadd common_group # Add a new group
# chown :common_group common.txt # Change the group owner of common.txt to common_group
# usermod -aG common_group user1 # Add user1 to common_group
# usermod -aG common_group user2 # Add user2 to common_group
# usermod -aG common_group user3 # Add user3 to common_group
Deleting a group

You can delete a group with the following command.

# groupdel [group_name]

If there are files owned by group_name, they will not be deleted, but the group owner will be set to the GID of the group that was deleted.

Linux File Permissions

Besides the basic read, write, and execute permissions that we discussed in Archiving Tools and Setting File Attributes – Part 3 of this series, there are other less used (but not less important) permission settings, sometimes referred to as “special permissions”.

Like the basic permissions discussed earlier, they are set using an octal file mode or through a letter (symbolic notation) that indicates the type of permission.

Understanding Setuid

When the setuid permission is applied to an executable file, a user running the program inherits the effective privileges of the program’s owner. Since this approach can reasonably raise security concerns, the number of files with setuid permission must be kept to a minimum. You will likely find programs with this permission set when a system user needs to access a file owned by root.

Summing up, it isn’t just that the user can execute the binary file, but also that he can do so with root’s privileges. For example, let’s check the permissions of /bin/passwd. This binary is used to change the password of an account, and modifies the /etc/shadow file. The superuser can change anyone’s password, but all other users should only be able to change their own.
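We can confirm this by listing the binary and looking for the s where the owner’s execute permission would normally appear.

# ls -l /bin/passwd

To set the setuid bit yourself, either use symbolic notation or prepend the number 4 to the current (or desired) basic permissions.

# chmod u+s [filename]
# chmod 4755 [filename]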

passwd Command Examples

Thus, any user should have permission to run /bin/passwd, but only root will be able to specify an account. Other users can only change their corresponding passwords.

Change User Password in Linux

Understanding Setgid

When the setgid bit is set, the effective GID of the real user becomes that of the group owner. Thus, any user can access a file under the privileges granted to the group owner of such file. In addition, when the setgid bit is set on a directory, newly created files inherit the same group as the directory, and newly created subdirectories will also inherit the setgid bit of the parent directory. You will most likely use this approach whenever members of a certain group need access to all the files in a directory, regardless of the file owner’s primary group.

# chmod g+s [filename]

To set the setgid in octal form, prepend the number 2 to the current (or desired) basic permissions.

# chmod 2755 [directory]
Setting the SETGID in a directory

Add Setgid to Directory

Understanding Sticky Bit

When the “sticky bit” is set on files, Linux just ignores it, whereas for directories it has the effect of preventing users from deleting or even renaming the files it contains unless the user owns the directory, the file, or is root.

# chmod o+t [directory]

To set the sticky bit in octal form, prepend the number 1 to the current (or desired) basic permissions.

# chmod 1755 [directory]

Without the sticky bit, anyone able to write to the directory can delete or rename files. For that reason, the sticky bit is commonly found on directories, such as /tmp, that are world-writable.

Add Sticky Bit to Directory

Special Linux File Attributes

There are other attributes that enable further limits on the operations that are allowed on files: for example, preventing the file from being renamed, moved, deleted, or even modified. They are set with the chattr command and can be viewed using the lsattr tool, as follows.

# chattr +i file1
# chattr +a file2

After executing those two commands, file1 will be immutable (which means it cannot be moved, renamed, modified or deleted) whereas file2 will enter append-only mode (it can only be opened in append mode for writing).
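To confirm that the attributes were applied, list them with lsattr and look for the i and a flags in the output.

# lsattr file1 file2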

Chattr Command to Protect Files

Accessing the root Account and Using sudo

One of the ways users can gain access to the root account is by typing.

$ su

and then entering root’s password.

If authentication succeeds, you will be logged on as root with the same current working directory as before. If you want to be placed in root’s home directory instead, run.

$ su -

and then enter root’s password.

Enable sudo Access on Linux

The above procedure requires that a normal user knows root’s password, which poses a serious security risk. For that reason, the sysadmin can configure the sudo command to allow an ordinary user to execute commands as a different user (usually the superuser) in a very controlled and limited way. Thus, restrictions can be set on a user so as to enable him to run one or more specific privileged commands and no others.

Read Also: Difference Between su and sudo User

To authenticate using sudo, users enter their own password. After entering a command, we will be prompted for our password (not the superuser’s) and if the authentication succeeds (and if the user has been granted privileges to run the command), the specified command is carried out.

To grant access to sudo, the system administrator must edit the /etc/sudoers file. It is recommended that this file be edited using the visudo command instead of opening it directly with a text editor.

# visudo

This opens the /etc/sudoers file using vim (you can follow the instructions given in Install and Use vim as Editor – Part 2 of this series to edit the file).

These are the most relevant lines.

Defaults    secure_path="/usr/sbin:/usr/bin:/sbin"
root        ALL=(ALL) ALL
tecmint     ALL=/bin/yum update
gacanepa    ALL=NOPASSWD:/bin/updatedb
%admin      ALL=(ALL) ALL

Let’s take a closer look at them.

Defaults    secure_path="/usr/sbin:/usr/bin:/sbin:/usr/local/bin"

This line lets you specify the directories that will be used for sudo, and is used to prevent using user-specific directories, which can harm the system.

The next lines are used to specify permissions.

root        ALL=(ALL) ALL
  1. The first ALL keyword indicates that this rule applies to all hosts.
  2. The second ALL indicates that the user in the first column can run commands with the privileges of any user.
  3. The third ALL means any command can be run.
tecmint     ALL=/bin/yum update

If no user is specified after the = sign, sudo assumes the root user. In this case, user tecmint will be able to run yum update as root.

gacanepa    ALL=NOPASSWD:/bin/updatedb

The NOPASSWD directive allows user gacanepa to run /bin/updatedb without needing to enter his password.

%admin      ALL=(ALL) ALL

The % sign indicates that this line applies to a group called “admin”. The meaning of the rest of the line is identical to that of a regular user. This means that members of the group “admin” can run all commands as any user on all hosts.

To see what privileges are granted to you by sudo, use the “-l” option to list them.
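For example,

$ sudo -l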

Sudo Access Rules

PAM (Pluggable Authentication Modules)

Pluggable Authentication Modules (PAM) offer the flexibility of setting a specific authentication scheme on a per-application and / or per-service basis using modules. This tool, present on all modern Linux distributions, overcame the problem often faced by developers in the early days of Linux, when each program that required authentication had to be compiled specially to know how to get the necessary information.

For example, with PAM, it doesn’t matter whether your password is stored in /etc/shadow or on a separate server inside your network.

For example, when the login program needs to authenticate a user, PAM dynamically provides the library that contains the functions for the right authentication scheme. Thus, changing the authentication scheme for the login application (or any other program using PAM) is easy since it only involves editing a configuration file (most likely, a file named after the application, located inside /etc/pam.d, and less likely in /etc/pam.conf).

Files inside /etc/pam.d indicate which applications are using PAM natively. In addition, we can tell whether a certain application uses PAM by checking whether the PAM library (libpam) has been linked to it:

# ldd $(which login) | grep libpam # login uses PAM
# ldd $(which top) | grep libpam # top does not use PAM

Check Linux PAM Library

In the above image we can see that libpam has been linked with the login application. This makes sense since this application is involved in the operation of system user authentication, whereas top is not.

Let’s examine the PAM configuration file for passwd – yes, the well-known utility to change users’ passwords. It is located at /etc/pam.d/passwd:

# cat /etc/pam.d/passwd

PAM Configuration File for Linux Password

The first column indicates the type of authentication to be used with the module-path (third column). When a hyphen appears before the type, PAM will not record to the system log if the module cannot be loaded because it could not be found in the system.

The following authentication types are available:

  1. account: this module type checks whether the account itself is valid (for example, whether it has expired, or whether the user is allowed to log in at this particular time).
  2. auth: this module type verifies that the user is who he / she claims to be and grants any needed privileges.
  3. password: this module type allows the user or service to update their password.
  4. session: this module type indicates what should be done before and/or after the authentication succeeds.

The second column (called control) indicates what should happen if the authentication with this module fails:

  1. requisite: if the authentication via this module fails, overall authentication will be denied immediately.
  2. required is similar to requisite, although all other listed modules for this service will be called before denying authentication.
  3. sufficient: if authentication via this module succeeds, PAM immediately grants authentication (as long as no previous module marked as required has failed); if it fails, the failure is ignored.
  4. optional: if the authentication via this module fails or succeeds, nothing happens unless this is the only module of its type defined for this service.
  5. include means that the lines of the given type should be read from another file.
  6. substack is similar to include, but authentication failures or successes do not cause the exit of the complete module, but only of the substack.

The fourth column, if it exists, shows the arguments to be passed to the module.

The first three lines in /etc/pam.d/passwd (shown above) load the system-auth module to check that the user has supplied valid credentials (account). If so, it allows him / her to change the authentication token (password) by giving permission to use passwd (auth).

For example, if you append

remember=2

to the following line

password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok

in /etc/pam.d/system-auth:

password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=2

the last two hashed passwords of each user are saved in /etc/security/opasswd so that they cannot be reused:

Linux Password Fields

Summary

Effective user and file management skills are essential tools for any system administrator. In this article we have covered the basics and hope you can use it as a good starting point to build upon. Feel free to leave your comments or questions below, and we’ll respond quickly.

Linux Package Management with Yum, RPM, Apt, Dpkg, Aptitude and Zypper – Part 9

Last August, the Linux Foundation announced the LFCS certification (Linux Foundation Certified Sysadmin), a shiny chance for system administrators everywhere to demonstrate, through a performance-based exam, that they are capable of succeeding at overall operational support for Linux systems. A Linux Foundation Certified Sysadmin has the expertise to ensure effective system support, first-level troubleshooting and monitoring, including, when needed, issue escalation to engineering support teams.

Linux Foundation Certified Sysadmin – Part 9

Watch the following video that explains the Linux Foundation Certification Program.

This article is Part 9 of a 10-tutorial long series. Today, we will guide you through Linux package management, as required for the LFCS certification exam.

Package Management

In a few words, package management is a method of installing and maintaining (which includes updating and probably removing as well) software on the system.

In the early days of Linux, programs were only distributed as source code, along with the required man pages, the necessary configuration files, and more. Nowadays, most Linux distributions use by default pre-built programs or sets of programs called packages, which are presented to users ready for installation on that distribution. However, one of the wonders of Linux is still the possibility to obtain the source code of a program to be studied, improved, and compiled.

How package management systems work

If a certain package requires a certain resource such as a shared library, or another package, it is said to have a dependency. All modern package management systems provide some method of dependency resolution to ensure that when a package is installed, all of its dependencies are installed as well.

Packaging Systems

Almost all the software that is installed on a modern Linux system will be found on the Internet. It can either be provided by the distribution vendor through central repositories (which can contain several thousands of packages, each of which has been specifically built, tested, and maintained for the distribution) or be available in source code that can be downloaded and installed manually.

Because different distribution families use different packaging systems (Debian: *.deb / CentOS: *.rpm / openSUSE: *.rpm built specially for openSUSE), a package intended for one distribution will not be compatible with another distribution. However, most distributions are likely to fall into one of the three distribution families covered by the LFCS certification.

High and low-level package tools

In order to perform the task of package management effectively, you need to be aware that you will have two types of available utilities: low-level tools (which handle in the backend the actual installation, upgrade, and removal of package files), and high-level tools (which are in charge of ensuring that the tasks of dependency resolution and metadata searching – “data about the data” – are performed).

DISTRIBUTION LOW-LEVEL TOOL HIGH-LEVEL TOOL
 Debian and derivatives  dpkg  apt-get / aptitude
 CentOS  rpm  yum
 openSUSE  rpm  zypper

Let us see the description of the low-level and high-level tools.

dpkg is a low-level package manager for Debian-based systems. It can install, remove, provide information about and build *.deb packages but it can’t automatically download and install their corresponding dependencies.

Read More: 15 dpkg Command Examples

apt-get is a high-level package manager for Debian and derivatives, and provides a simple way to retrieve and install packages, including dependency resolution, from multiple sources using the command line. Unlike dpkg, apt-get does not work directly with *.deb files, but with the package’s proper name.

Read More: 25 apt-get Command Examples

aptitude is another high-level package manager for Debian-based systems, and can be used to perform management tasks (installing, upgrading, and removing packages, also handling dependency resolution automatically) in a fast and easy way. It provides the same functionality as apt-get and additional ones, such as offering access to several versions of a package.

rpm is the package management system used by Linux Standard Base (LSB)-compliant distributions for low-level handling of packages. Just like dpkg, it can query, install, verify, upgrade, and remove packages, and is more frequently used by Fedora-based distributions, such as RHEL and CentOS.

Read More: 20 rpm Command Examples

yum adds the functionality of automatic updates and package management with dependency management to RPM-based systems. As a high-level tool, like apt-get or aptitude, yum works with repositories.

Read More: 20 yum Command Examples

Common Usage of Low-Level Tools

The most frequent tasks that you will do with low level tools are as follows:

1. Installing a package from a compiled (*.deb or *.rpm) file

The downside of this installation method is that no dependency resolution is provided. You will most likely choose to install a package from a compiled file when such package is not available in the distribution’s repositories and therefore cannot be downloaded and installed through a high-level tool. Since low-level tools do not perform dependency resolution, they will exit with an error if we try to install a package with unmet dependencies.

# dpkg -i file.deb 		[Debian and derivative]
# rpm -i file.rpm 		[CentOS / openSUSE]

Note: Do not attempt to install on CentOS a *.rpm file that was built for openSUSE, or vice-versa!

2. Upgrading a package from a compiled file

Again, you will only upgrade an installed package manually when it is not available in the central repositories.

# dpkg -i file.deb 		[Debian and derivative]
# rpm -U file.rpm 		[CentOS / openSUSE]
3. Listing installed packages

When you first get your hands on an already working system, chances are you’ll want to know what packages are installed.

# dpkg -l 		[Debian and derivative]
# rpm -qa 		[CentOS / openSUSE]

If you want to know whether a specific package is installed, you can pipe the output of the above commands to grep, as explained in manipulate files in Linux – Part 1 of this series. Suppose we need to verify if package mysql-common is installed on an Ubuntu system.

# dpkg -l | grep mysql-common

Check Installed Packages in Linux

Another way to determine if a package is installed.

# dpkg --status package_name 		[Debian and derivative]
# rpm -q package_name 			[CentOS / openSUSE]

For example, let’s find out whether package sysdig is installed on our system.

# rpm -qa | grep sysdig

Check sysdig Package

4. Finding out which package installed a file
# dpkg --search file_name
# rpm -qf file_name

For example, which package installed pw_dict.hwm?

# rpm -qf /usr/share/cracklib/pw_dict.hwm

Query File in Linux

Common Usage of High-Level Tools

The most frequent tasks that you will do with high level tools are as follows.

1. Searching for a package

aptitude update will update the list of available packages, and aptitude search will perform the actual search for package_name.

# aptitude update && aptitude search package_name 

With the search all option, yum will search for package_name not only in package names, but also in package descriptions.

# yum search package_name
# yum search all package_name
# yum whatprovides "*/package_name"

Let’s suppose we need a file whose name is sysdig. To find out which package we would have to install, let’s run.

# yum whatprovides "*/sysdig"

Check Package Description in Linux

whatprovides tells yum to search for the package that will provide a file whose path matches the above expression.

# zypper refresh && zypper search package_name		[On openSUSE]
2. Installing a package from a repository

While installing a package, you may be prompted to confirm the installation after the package manager has resolved all dependencies. Note that running update or refresh (according to the package manager being used) is not strictly necessary, but keeping installed packages up to date is a good sysadmin practice for security and dependency reasons.

# aptitude update && aptitude install package_name 		[Debian and derivatives]
# yum update && yum install package_name 			[CentOS]
# zypper refresh && zypper install package_name 		[openSUSE]
3. Removing a package

The option remove will uninstall the package but leave configuration files intact, whereas purge will erase every trace of the program from your system.

# aptitude remove / purge package_name
# yum erase package_name

--------------------- Notice the minus sign in front of the package that will be uninstalled, openSUSE ---------------------

# zypper remove -package_name

Most (if not all) package managers will prompt you, by default, if you’re sure about proceeding with the uninstallation before actually performing it. So read the onscreen messages carefully to avoid running into unnecessary trouble!

4. Displaying information about a package

The following command will display information about the birthday package.

# aptitude show birthday 
# yum info birthday
# zypper info birthday

Check Package Information in Linux

Summary

Package management is something you just can’t sweep under the rug as a system administrator. You should be prepared to use the tools described in this article at a moment’s notice. Hope you find it useful in your preparation for the LFCS exam and for your daily tasks. Feel free to leave your comments or questions below. We will be more than glad to get back to you as soon as possible.

Understanding & Learning Basic Shell Scripting and Linux Filesystem Troubleshooting – Part 10

The Linux Foundation launched the LFCS certification (Linux Foundation Certified Sysadmin), a brand new initiative whose purpose is to allow individuals everywhere (and anywhere) to get certified in basic to intermediate operational support for Linux systems, which includes supporting running systems and services, along with overall monitoring and analysis, plus smart decision-making when it comes to raising issues to upper support teams.

Linux Foundation Certified Sysadmin – Part 10

Check out the following video that gives you an introduction to the Linux Foundation Certification Program.

This is the last article (Part 10) of the present 10-tutorial long series. In this article we will focus on basic shell scripting and troubleshooting Linux file systems. Both topics are required for the LFCS certification exam.

Understanding Terminals and Shells

Let’s clarify a few concepts first.

  1. A shell is a program that takes commands and gives them to the operating system to be executed.
  2. A terminal is a program that allows us as end users to interact with the shell. One example of a terminal is GNOME terminal, as shown in the below image.

Gnome Terminal

When we first start a shell, it presents a command prompt (also known as the command line), which tells us that the shell is ready to start accepting commands from its standard input device, which is usually the keyboard.

You may want to refer to another article in this series (Use Command to Create, Edit, and Manipulate files – Part 1) to review some useful commands.

Linux provides a range of options for shells, the following being the most common:

bash Shell

Bash stands for Bourne Again SHell and is the GNU Project’s default shell. It incorporates useful features from the Korn shell (ksh) and C shell (csh), offering several improvements at the same time. This is the default shell used by the distributions covered in the LFCS certification, and it is the shell that we will use in this tutorial.

sh Shell

The Bourne SHell is the oldest shell and therefore has been the default shell of many UNIX-like operating systems for many years.

ksh Shell

The Korn SHell is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s. It is backward-compatible with the Bourne shell and includes many features of the C shell.

A shell script is nothing more and nothing less than a text file turned into an executable program that combines commands that are executed by the shell one after another.

Basic Shell Scripting

As mentioned earlier, a shell script is born as a plain text file. Thus, it can be created and edited using our preferred text editor. You may want to consider using vi/m (refer to Usage of vi Editor – Part 2 of this series), which features syntax highlighting for your convenience.

Type the following command to create a file named myscript.sh and press Enter.

# vim myscript.sh

The very first line of a shell script must be as follows (also known as a shebang).

#!/bin/bash

It “tells” the operating system the name of the interpreter that should be used to run the text that follows.

Now it’s time to add our commands. We can clarify the purpose of each command, or the entire script, by adding comments as well. Note that the shell ignores those lines beginning with a pound sign # (explanatory comments).

#!/bin/bash
echo This is Part 10 of the 10-article series about the LFCS certification
echo Today is $(date +%Y-%m-%d)

Once the script has been written and saved, we need to make it executable.

# chmod 755 myscript.sh

Before running our script, we need to say a few words about the $PATH environment variable. If we run,

echo $PATH

from the command line, we will see the contents of $PATH: a colon-separated list of directories that are searched when we enter the name of an executable program. It is called an environment variable because it is part of the shell environment – a set of information that becomes available for the shell and its child processes when the shell is first started.

When we type a command and press Enter, the shell searches in all the directories listed in the $PATH variable and executes the first instance that is found. Let’s see an example,

Linux Environment Variables

If there are two executable files with the same name, one in /usr/local/bin and another in /usr/bin, the one in the first directory will be executed first, whereas the other will be disregarded.

If we haven’t saved our script inside one of the directories listed in the $PATH variable, we need to append ./ to the file name in order to execute it. Otherwise, we can run it just as we would do with a regular command.

# pwd
# ./myscript.sh
# cp myscript.sh ../bin
# cd ../bin
# pwd
# myscript.sh

Execute Script in Linux

Conditionals

Whenever you need to specify different courses of action to be taken in a shell script, as a result of the success or failure of a command, you will use the if construct to define such conditions. Its basic syntax is:

if CONDITION; then 
	COMMANDS;
else
	OTHER-COMMANDS 
fi

Where CONDITION can be one of the following (only the most frequent conditions are cited here) and evaluates to true when:

  1. [ -a file ] → file exists.
  2. [ -d file ] → file exists and is a directory.
  3. [ -f file ] → file exists and is a regular file.
  4. [ -u file ] → file exists and its SUID (set user ID) bit is set.
  5. [ -g file ] → file exists and its SGID bit is set.
  6. [ -k file ] → file exists and its sticky bit is set.
  7. [ -r file ] → file exists and is readable.
  8. [ -s file ] → file exists and is not empty.
  9. [ -w file ] → file exists and is writable.
  10. [ -x file ] → file exists and is executable.
  11. [ string1 = string2 ] → the strings are equal.
  12. [ string1 != string2 ] → the strings are not equal.

In addition, [ int1 op int2 ] performs an integer comparison, where op is one of the following comparison operators.

  1. -eq → true if int1 is equal to int2.
  2. -ne → true if int1 is not equal to int2.
  3. -lt → true if int1 is less than int2.
  4. -le → true if int1 is less than or equal to int2.
  5. -gt → true if int1 is greater than int2.
  6. -ge → true if int1 is greater than or equal to int2.
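For instance, a minimal sketch that combines the if construct with one of the file tests above (the path /var/log/messages is only an example) might look like this.

#!/bin/bash
# Warn if the given file is missing or empty
if [ -s /var/log/messages ]; then
	echo "/var/log/messages exists and is not empty"
else
	echo "/var/log/messages is missing or empty"
fi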

For Loops

This loop allows you to execute one or more commands for each value in a list of values. Its basic syntax is:

for item in SEQUENCE; do 
		COMMANDS; 
done

Where item is a generic variable that represents each value in SEQUENCE during each iteration.
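For example, the following short sketch prints a greeting for each value in a hard-coded list (the names are arbitrary):

#!/bin/bash
for distro in Ubuntu CentOS openSUSE; do
    echo "Hello, $distro!"
done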

While Loops

This loop allows us to execute a series of repetitive commands as long as the control command exits with a status equal to zero (successfully). Its basic syntax is:

while EVALUATION_COMMAND; do 
		EXECUTE_COMMANDS; 
done

Where EVALUATION_COMMAND can be any command(s) that can exit with a success (0) or failure (other than 0) status, and EXECUTE_COMMANDS can be any program, script or shell construct, including other nested loops.
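For instance, this brief sketch counts from 1 to 3; the loop ends as soon as the test command returns a non-zero exit status:

#!/bin/bash
counter=1
while [ $counter -le 3 ]; do
    echo "Iteration number $counter"
    counter=$((counter + 1))
done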

Putting It All Together

We will demonstrate the use of the if construct and the for loop with the following example.

Determining if a service is running in a systemd-based distro

Let’s create a file with a list of services that we want to monitor at a glance.

# cat myservices.txt

sshd
mariadb
httpd
crond
firewalld

Script to Monitor Linux Services

Our shell script should look like this:

#!/bin/bash

# This script iterates over a list of services and
# is used to determine whether they are running or not.

for service in $(cat myservices.txt); do
    systemctl status $service | grep --quiet "running"
    if [ $? -eq 0 ]; then
        echo $service "is [ACTIVE]"
    else
        echo $service "is [INACTIVE or NOT INSTALLED]"
    fi
done

Linux Service Monitoring Script

Let’s explain how the script works.

1). The for loop reads the myservices.txt file one element of LIST at a time. That single element is denoted by the generic variable named service. The LIST is populated with the output of,

# cat myservices.txt

2). The above command is enclosed in parentheses and preceded by a dollar sign to indicate that it should be evaluated to populate the LIST that we will iterate over.

3). For each element of LIST (meaning every instance of the service variable), the following command will be executed.

# systemctl status $service | grep --quiet "running"

This time we need to precede our generic variable (which represents each element in LIST) with a dollar sign to indicate it’s a variable and thus its value in each iteration should be used. The output is then piped to grep.

The --quiet flag prevents grep from displaying the matching lines on the screen. When the word running is found, the above command returns an exit status of 0 (represented by $? in the if construct), thus verifying that the service is running.

An exit status different than 0 (meaning the word running was not found in the output of systemctl status $service) indicates that the service is not running.
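You can verify this behavior manually at the prompt (sshd is used here as an example; substitute any service name):

# systemctl status sshd | grep --quiet "running"
# echo $?

Here echo $? prints 0 if the service is running and a non-zero value otherwise.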

Services Monitoring Script

We could go one step further and check for the existence of myservices.txt before even attempting to enter the for loop.

#!/bin/bash

# This script iterates over a list of services and
# is used to determine whether they are running or not.

if [ -f myservices.txt ]; then
    for service in $(cat myservices.txt); do
        systemctl status $service | grep --quiet "running"
        if [ $? -eq 0 ]; then
            echo $service "is [ACTIVE]"
        else
            echo $service "is [INACTIVE or NOT INSTALLED]"
        fi
    done
else
    echo "myservices.txt is missing"
fi

Pinging a series of network or internet hosts for reply statistics

You may want to maintain a list of hosts in a text file and use a script to determine every now and then whether they’re pingable or not (feel free to replace the contents of myhosts and try for yourself).

The read shell built-in command tells the while loop to read myhosts line by line, assigning the content of each line to the variable host, which is then passed to the ping command.

#!/bin/bash

# This script is used to demonstrate the use of a while loop

while read host; do
    ping -c 2 $host
done < myhosts

Script to Ping Servers

Read Also:

  1. Learn Shell Scripting: A Guide from Newbies to System Administrator
  2. 5 Shell Scripts to Learn Shell Programming

Filesystem Troubleshooting

Although Linux is a very stable operating system, if it crashes for some reason (for example, due to a power outage), one (or more) of your file systems will not be unmounted properly and thus will be automatically checked for errors when Linux is restarted.

In addition, during each normal boot, the system checks the integrity of the filesystems before mounting them. In both cases this is performed using a tool named fsck (“file system check”).

fsck will not only check the integrity of file systems, but also attempt to repair corrupt file systems if instructed to do so. Depending on the severity of damage, fsck may succeed or not; when it does, recovered portions of files are placed in the lost+found directory, located in the root of each file system.

Last but not least, we must note that inconsistencies may also occur if we try to remove a USB drive while the operating system is still writing to it, and may even result in hardware damage.

The basic syntax of fsck is as follows:

# fsck [options] filesystem

Checking a filesystem for errors and attempting to repair automatically

In order to check a filesystem with fsck, we must first unmount it.

# mount | grep sdg1
# umount /mnt
# fsck -y /dev/sdg1

Scan Linux Filesystem for Errors

Besides the -y flag, we can use the -a option to automatically repair the file system without asking any questions, and the -f option to force the check even when the filesystem looks clean:

# fsck -af /dev/sdg1

If we’re only interested in finding out what’s wrong (without trying to fix anything for the time being) we can run fsck with the -n option, which will output the filesystem issues to standard output.

# fsck -n /dev/sdg1

Depending on the error messages in the output of fsck, we will know whether we can try to solve the issue ourselves or escalate it to engineering teams to perform further checks on the hardware.

Summary

We have arrived at the end of this 10-article series, where we have tried to cover the basic domain competencies required to pass the LFCS exam.

For obvious reasons, it is not possible to cover every single aspect of these topics in any single tutorial, and that’s why we hope that these articles have put you on the right track to try new stuff yourself and continue learning.

If you have any questions or comments, they are always welcome – so don’t hesitate to drop us a line via the form below!

LFCS: How to Manage and Create LVM Using vgcreate, lvcreate and lvextend Commands – Part 11

Because of the changes in the LFCS exam requirements effective Feb. 2, 2016, we are adding the necessary topics to the LFCS series published here. To prepare for this exam, you are highly encouraged to use the LFCE series as well.

LFCS: Manage LVM and Create LVM Partition – Part 11

One of the most important decisions while installing a Linux system is the amount of storage space to be allocated for system files, home directories, and others. If you make a mistake at that point, growing a partition that has run out of space can be burdensome and somewhat risky.

Logical Volume Management (also known as LVM), which has become a default for the installation of most (if not all) Linux distributions, has numerous advantages over traditional partition management. Perhaps the most distinguishing feature of LVM is that it allows logical divisions to be resized (reduced or increased) at will without much hassle.

The structure of the LVM consists of:

  1. One or more entire hard disks or partitions are configured as physical volumes (PVs).
  2. A volume group (VG) is created using one or more physical volumes. You can think of a volume group as a single storage unit.
  3. Multiple logical volumes can then be created in a volume group. Each logical volume is somewhat equivalent to a traditional partition – with the advantage that it can be resized at will as we mentioned earlier.

In this article we will use three disks of 8 GB each (/dev/sdb, /dev/sdc, and /dev/sdd) to create three physical volumes. You can either create the PVs directly on top of each device, or partition it first.

Although we have chosen to go with the first method, if you decide to go with the second (as explained in Part 4 – Create Partitions and File Systems in Linux of this series) make sure to configure each partition as type 8e.

Creating Physical Volumes, Volume Groups, and Logical Volumes

To create physical volumes on top of /dev/sdb, /dev/sdc, and /dev/sdd, do:

# pvcreate /dev/sdb /dev/sdc /dev/sdd

You can list the newly created PVs with:

# pvs

and get detailed information about each PV with:

# pvdisplay /dev/sdX

(where X is b, c, or d)

If you omit /dev/sdX as parameter, you will get information about all the PVs.

To create a volume group named vg00 using /dev/sdb and /dev/sdc (we will save /dev/sdd for later to illustrate the possibility of adding other devices to expand storage capacity when needed):

# vgcreate vg00 /dev/sdb /dev/sdc

As was the case with physical volumes, you can also view information about this volume group by issuing:

# vgdisplay vg00

Since vg00 is formed with two 8 GB disks, it will appear as a single 16 GB drive:

List LVM Volume Groups

When it comes to creating logical volumes, the distribution of space must take into consideration both current and future needs. It is considered good practice to name each logical volume according to its intended use.

For example, let’s create two LVs named vol_projects (10 GB) and vol_backups (remaining space), which we can use later to store project documentation and system backups, respectively.

The -n option is used to indicate a name for the LV, whereas -L sets a fixed size and -l (lowercase L) is used to indicate a percentage of the remaining space in the container VG.

# lvcreate -n vol_projects -L 10G vg00
# lvcreate -n vol_backups -l 100%FREE vg00

As before, you can view the list of LVs and basic information with:

# lvs

and detailed information with

# lvdisplay

To view information about a single LV, use lvdisplay with the VG and LV as parameters, as follows:

# lvdisplay vg00/vol_projects

List Logical Volume

In the image above we can see that the LVs were created as storage devices (refer to the LV Path line). Before each logical volume can be used, we need to create a filesystem on top of it.

We’ll use ext4 as an example here since it allows us both to increase and to reduce the size of each LV (as opposed to xfs, which only allows the size to be increased):

# mkfs.ext4 /dev/vg00/vol_projects
# mkfs.ext4 /dev/vg00/vol_backups

In the next section we will explain how to resize logical volumes and add extra physical storage space when the need arises to do so.

Resizing Logical Volumes and Extending Volume Groups

Now picture the following scenario. You are starting to run out of space in vol_backups, while you have plenty of space available in vol_projects. Due to the nature of LVM, we can easily reduce the size of the latter (by, say, 2.5 GB) and allocate it to the former, resizing each filesystem at the same time.

Fortunately, this is as easy as doing:

# lvreduce -L -2.5G -r /dev/vg00/vol_projects
# lvextend -l +100%FREE -r /dev/vg00/vol_backups

Resize Reduce Logical Volume and Volume Group

It is important to include the minus (-) or plus (+) signs while resizing a logical volume. Otherwise, you’re setting a fixed size for the LV instead of resizing it.
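To illustrate the difference (the sizes are arbitrary examples):

# lvextend -L +2G -r /dev/vg00/vol_projects    # grow vol_projects BY 2 GB
# lvextend -L 12G -r /dev/vg00/vol_projects    # grow vol_projects TO exactly 12 GB (fails if it is already larger)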

You may arrive at a point when resizing logical volumes can no longer solve your storage needs, and you have to buy an extra storage device. Put simply, you will need another disk. We are going to simulate this situation by adding the remaining PV from our initial setup (/dev/sdd).

To add /dev/sdd to vg00, do

# vgextend vg00 /dev/sdd

If you run vgdisplay vg00 before and after the previous command, you will see the increase in the size of the VG:

# vgdisplay vg00

Check Volume Group Disk Size

Now you can use the newly added space to resize the existing LVs according to your needs, or to create additional ones as needed.

Mounting Logical Volumes on Boot and on Demand

Of course there would be no point in creating logical volumes if we are not going to actually use them! To better identify a logical volume we will need to find out what its UUID (a non-changing attribute that uniquely identifies a formatted storage device) is.

To do that, use blkid followed by the path to each device:

# blkid /dev/vg00/vol_projects
# blkid /dev/vg00/vol_backups

Find Logical Volume UUID

Create mount points for each LV:

# mkdir /home/projects
# mkdir /home/backups

and insert the corresponding entries in /etc/fstab (make sure to use the UUIDs obtained before):

UUID=b85df913-580f-461c-844f-546d8cde4646 /home/projects ext4 defaults 0 0
UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups  ext4 defaults 0 0

Then save the changes and mount the LVs:

# mount -a
# mount | grep home

Mount Logical Volumes on Linux

When it comes to actually using the LVs, you will need to assign proper ugo+rwx permissions as explained in Part 8 – Manage Users and Groups in Linux of this series.

Summary

In this article we have introduced Logical Volume Management, a versatile tool to manage storage devices that provides scalability. When combined with RAID (which we explained in Part 6 – Create and Manage RAID in Linux of this series), you can enjoy not only scalability (provided by LVM) but also redundancy (offered by RAID).

In this type of setup, you will typically find LVM on top of RAID, that is, configure RAID first and then configure LVM on top of it.

If you have questions about this article, or suggestions to improve it, feel free to reach us using the comment form below.

LFCS: How to Explore Linux with Installed Help Documentations and Tools – Part 12

Because of the changes in the LFCS exam objectives effective February 2nd, 2016, we are adding the needed topics to the LFCS series published here. To prepare for this exam, you are highly encouraged to use the LFCE series as well.

LFCS: Explore Linux with Installed Documentations and Tools – Part 12

Once you get used to working with the command line and feel comfortable doing so, you realize that a regular Linux installation includes all the documentation you need to use and configure the system.

Another good reason to become familiar with command line help tools is that in the LFCS and LFCE exams, those are the only sources of information you can use – no internet browsing and no googling. It’s just you and the command line.

For that reason, in this article we will give you some tips to effectively use the installed docs and tools in order to prepare to pass the Linux Foundation Certification exams.

Linux Man Pages

A man page, short for manual page, is nothing less and nothing more than what the word suggests: a manual for a given tool. It contains the list of options (with explanation) that the command supports, and some man pages even include usage examples as well.

To open a man page, use the man command followed by the name of the tool you want to learn more about. For example:

# man diff

will open the manual page for diff, a tool used to compare text files line by line (to exit, simply hit the q key).

Let’s say we want to compare two text files named file1 and file2 in Linux. These files contain the list of packages that are installed in two Linux boxes with the same distribution and version.

Doing a diff between file1 and file2 will tell us if there is a difference between those lists:

# diff file1 file2

Compare Two Text Files in Linux

where the < sign indicates lines missing in file2. If there were lines missing in file1, they would be indicated by the > sign instead.

On the other hand, 7d6 means line #7 in file1 should be deleted in order to match file2 (same with 24d22 and 41d38), and 65,67d61 tells us we need to remove lines 65 through 67 in file1. If we make these corrections, both files will then be identical.
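As a hedged illustration (the package name is made up), a deletion hunk such as 7d6 looks like this in the default output format:

7d6
< some-package-1.0.x86_64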

Alternatively, you can display both files side by side using the -y option, according to the man page. You may find this helpful to more easily identify missing lines in files:

# diff -y file1 file2

Compare and List Difference of Two Files

Also, you can use diff to compare two binary files. If they are identical, diff will exit silently without output. Otherwise, it will return the following message: “Binary files X and Y differ”.

The --help Option

The --help option, available in many (if not all) commands, can be considered a short manual page for that specific command. Although it does not provide a comprehensive description of the tool, it is an easy way to obtain information on the usage of a program and a list of its available options at a quick glance.

For example,

# sed --help

shows the usage of each option available in sed (the stream editor).

One of the classic examples of using sed consists of replacing characters in files. Using the -i option (described as “edit files in place”), you can edit a file without opening it. If you want to make a backup of the original contents as well, use the -i option followed by a SUFFIX to create a separate file with the original contents.

For example, to replace each occurrence of the word Lorem with Tecmint (case insensitive) in lorem.txt and create a new file with the original contents of the file, do:

# less lorem.txt | grep -i lorem
# sed -i.orig 's/Lorem/Tecmint/gI' lorem.txt
# less lorem.txt | grep -i lorem
# less lorem.txt.orig | grep -i lorem

Please note that every occurrence of Lorem has been replaced with Tecmint in lorem.txt, and the original contents of lorem.txt have been saved to lorem.txt.orig.

Replace A String in Files

Installed Documentation in /usr/share/doc

This is probably my favorite pick. If you go to /usr/share/doc and do a directory listing, you will see lots of directories with the names of the installed tools in your Linux system.

According to the Filesystem Hierarchy Standard, these directories contain useful information that might not be in the man pages, along with templates and configuration files to make configuration easier.

For example, let’s consider squid-3.3.8 (the version may vary from distribution to distribution) for Squid, the popular HTTP proxy and cache server.

Let’s cd into that directory:

# cd /usr/share/doc/squid-3.3.8

and do a directory listing:

# ls

Linux Directory Listing with ls Command

You may want to pay special attention to QUICKSTART and squid.conf.documented. These files contain an extensive documentation about Squid and a heavily commented configuration file, respectively. For other packages, the exact names may differ (as QuickRef or 00QUICKSTART, for example), but the principle is the same.

Other packages, such as the Apache web server, provide configuration file templates inside /usr/share/doc, which will be helpful when you have to configure a standalone server or a virtual host, to name a few cases.

GNU info Documentation

You can think of info documents as man pages on steroids. Not only do they provide help for a specific tool, but they also do so with hyperlinks (yes, hyperlinks on the command line!) that allow you to navigate from one section to another using the arrow keys, pressing Enter to confirm.

Perhaps the most illustrative example is:

# info coreutils

Since coreutils contains the basic file, shell and text manipulation utilities which are expected to exist on every operating system, you can reasonably expect a detailed description for each one of those categories in info coreutils.

Info Coreutils

As is the case with man pages, you can exit an info document by pressing the q key.

Additionally, GNU info can be used to display regular man pages as well when followed by the tool name. For example:

# info tune2fs

will return the man page of tune2fs, the ext2/3/4 filesystems management tool.

And while we’re at it, let’s review some of the uses of tune2fs:

Display information about the filesystem on top of /dev/mapper/vg00-vol_backups:

# tune2fs -l /dev/mapper/vg00-vol_backups

Set a filesystem volume name (Backups in this case):

# tune2fs -L Backups /dev/mapper/vg00-vol_backups

Change the check interval and/or mount count (use the -c option to set a number of mounts between checks and/or the -i option to set a check interval, where d=days, w=weeks, and m=months):

# tune2fs -c 150 /dev/mapper/vg00-vol_backups # Check every 150 mounts
# tune2fs -i 6w /dev/mapper/vg00-vol_backups # Check every 6 weeks

All of the above options can be listed with the --help option, or viewed in the man page.

Summary

Regardless of the method that you choose to invoke help for a given tool, knowing that they exist and how to use them will certainly come in handy in the exam. Do you know of any other tools that can be used to look up documentation?  Questions and other comments are more than welcome as well.

LFCS: How to Configure and Troubleshoot Grand Unified Bootloader (GRUB) – Part 13

Because of the recent changes in the LFCS certification exam objectives effective from February 2nd, 2016, we are adding the needed topics to the LFCS series published here. To prepare for this exam, you are highly encouraged to follow the LFCE series as well.

LFCS: Configure and Troubleshoot Grub Boot Loader – Part 13

In this article we will introduce you to GRUB and explain why a boot loader is necessary, and how it adds versatility to the system.

The Linux boot process from the time you press the power button of your computer until you get a fully-functional system follows this high-level sequence:

  1. A process known as POST (Power-On Self Test) performs an overall check on the hardware components of your computer.
  2. When POST completes, it passes the control over to the boot loader, which in turn loads the Linux kernel in memory (along with initramfs) and executes it. The most used boot loader in Linux is the GRand Unified Boot loader, or GRUB for short.
  3. The kernel checks and accesses the hardware, and then runs the initial process (mostly known by its generic name “init”) which in turn completes the system boot by starting services.

In Part 7 of this series (“SysVinit, Upstart, and Systemd”) we introduced the service management systems and tools used by modern Linux distributions. You may want to review that article before proceeding further.

Introducing GRUB Boot Loader

Two major GRUB versions (v1 sometimes called GRUB Legacy and v2) can be found in modern systems, although most distributions use v2 by default in their latest versions. Only Red Hat Enterprise Linux 6 and its derivatives still use v1 today.

Thus, we will focus primarily on the features of v2 in this guide.

Regardless of the GRUB version, a boot loader allows the user to:

  1. modify the way the system behaves by specifying different kernels to use,
  2. choose between alternate operating systems to boot, and
  3. add or edit configuration stanzas to change boot options, among other things.

Today, GRUB is maintained by the GNU project and is well documented on their website. You are encouraged to use the GNU official documentation while going through this guide.

When the system boots you are presented with the following GRUB screen in the main console. Initially, you are prompted to choose between alternate kernels (by default, the system will boot using the latest kernel) and are allowed to enter a GRUB command line (with c) or edit the boot options (by pressing the e key).

GRUB Boot Screen

One of the reasons why you would consider booting with an older kernel is a hardware device that used to work properly and has started “acting up” after an upgrade (refer to this link in the AskUbuntu forums for an example).

The GRUB v2 configuration is read on boot from /boot/grub/grub.cfg or /boot/grub2/grub.cfg, whereas /boot/grub/grub.conf or /boot/grub/menu.lst are used in v1. These files are NOT to be edited by hand, but are modified based on the contents of /etc/default/grub and the files found inside /etc/grub.d.

In CentOS 7, here’s the configuration file that is created when the system is first installed:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"

In addition to the online documentation, you can also find the GNU GRUB manual using info as follows:

# info grub

If you’re interested specifically in the options available for /etc/default/grub, you can invoke the configuration section directly:

# info -f grub -n 'Simple configuration'

Using the command above you will find out that GRUB_TIMEOUT sets the time between the moment the initial screen appears and the moment automatic booting begins, unless interrupted by the user. When this variable is set to -1, boot will not be started until the user makes a selection.
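For example, either of the following (the values are illustrative):

GRUB_TIMEOUT=10     # boot the default entry after waiting 10 seconds
GRUB_TIMEOUT=-1     # wait indefinitely until the user makes a selection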

When multiple operating systems or kernels are installed in the same machine, GRUB_DEFAULT requires an integer value that indicates which OS or kernel entry in the GRUB initial screen should be selected to boot by default. The list of entries can be viewed not only in the splash screen shown above, but also using the following command:

In CentOS and openSUSE:

# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg

In Ubuntu:

# awk -F\' '$1=="menuentry " {print $2}' /boot/grub/grub.cfg

In the example shown in the below image, if we wish to boot with the kernel version 3.10.0-123.el7.x86_64 (4th entry), we need to set GRUB_DEFAULT to 3 (entries are internally numbered beginning with zero) as follows:

GRUB_DEFAULT=3

Boot System with Old Kernel Version

One final GRUB configuration variable that is of special interest is GRUB_CMDLINE_LINUX, which is used to pass options to the kernel. The options that can be passed through GRUB to the kernel are well documented in the Kernel Parameters file and in man 7 bootparam.

Current options in my CentOS 7 server are:

GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet"

Why would you want to modify the default kernel parameters or pass extra options? In simple terms, there may be times when you need to tell the kernel certain hardware parameters that it may not be able to determine on its own, or to override the values that it would detect.

This happened to me not too long ago when I tried Vector Linux, a derivative of Slackware, on my 10-year-old laptop. After installation it did not detect the right settings for my video card, so I had to modify the kernel options passed through GRUB in order to make it work.

Another example is when you need to bring the system to single-user mode to perform maintenance tasks. You can do this by appending the word single to GRUB_CMDLINE_LINUX and rebooting:

GRUB_CMDLINE_LINUX="vconsole.keymap=la-latin1 rd.lvm.lv=centos_centos7-2/swap crashkernel=auto  vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos_centos7-2/root rhgb quiet single"

After editing /etc/default/grub, you will need to run update-grub (on Ubuntu) or grub2-mkconfig -o /boot/grub2/grub.cfg (on CentOS and openSUSE) to update grub.cfg (otherwise, changes will be lost upon boot).

This command will process the boot configuration files mentioned earlier to update grub.cfg. This method ensures changes are permanent, while options passed through GRUB at boot time will only last during the current session.
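In summary (following the distro conventions used earlier in this series):

# update-grub                                   [On Ubuntu based systems]
# grub2-mkconfig -o /boot/grub2/grub.cfg        [On CentOS and openSUSE systems]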

Fixing Linux GRUB Issues

If you install a second operating system or if your GRUB configuration file gets corrupted due to human error, there are ways you can get your system back on its feet and be able to boot again.

In the initial screen, press c to get a GRUB command line (remember that you can also press e to edit the default boot options), and use help to bring up the available commands in the GRUB prompt:

Fix Grub Configuration Issues in Linux

We will focus on ls, which will list the installed devices and filesystems, and we will examine what it finds. In the image below we can see that there are 4 hard drives (hd0 through hd3).

Only hd0 seems to have been partitioned (as evidenced by msdos1 and msdos2, where 1 and 2 are the partition numbers and msdos is the partitioning scheme).

Let’s now examine the first partition on hd0 (msdos1) to see if we can find GRUB there. This approach will allow us to boot Linux and then use other higher-level tools to repair the configuration file, or reinstall GRUB altogether if needed:

# ls (hd0,msdos1)/

As we can see in the highlighted area, we found the grub2 directory in this partition:

Find Grub Configuration

Once we are sure that GRUB resides in (hd0,msdos1), let’s tell GRUB where to find its configuration file and then instruct it to attempt to launch its menu:

set prefix=(hd0,msdos1)/grub2
set root=(hd0,msdos1)
insmod normal
normal

Find and Launch Grub Menu

Then in the GRUB menu, choose an entry and press Enter to boot using it. Once the system has booted you can issue the grub2-install /dev/sdX command (replacing sdX with the device you want to install GRUB on). The boot information will then be updated and all related files restored.

# grub2-install /dev/sdX

Other more complex scenarios are documented, along with their suggested fixes, in the Ubuntu GRUB2 Troubleshooting guide. The concepts explained there are valid for other distributions as well.

Summary

In this article we have introduced you to GRUB, indicated where you can find documentation both online and offline, and explained how to approach a scenario where a system has stopped booting properly due to a bootloader-related issue.

Fortunately, GRUB is one of the best documented tools out there, and you can easily find help either in the installed docs or online using the resources we have shared in this article.

Do you have questions or comments? Don’t hesitate to let us know using the comment form below. We look forward to hearing from you!

LFCS: Monitor Linux Processes Resource Usage and Set Process Limits on a Per-User Basis – Part 14

Due to recent modifications in the LFCS certification exam objectives effective from February 2nd, 2016, we are adding the needed articles to the LFCS series published here. To prepare for this exam, you are strongly encouraged to go through the LFCE series as well.

Monitor Linux Processes and Set Process Limits Per User – Part 14

Every Linux system administrator needs to know how to verify the integrity and availability of hardware, resources, and key processes. In addition, setting resource limits on a per-user basis must also be a part of their skill set.

In this article we will explore a few ways to ensure that the system, both hardware and software, is behaving correctly, in order to avoid potential issues that may cause unexpected production downtime and financial loss.

Reporting Linux Processor Statistics

With mpstat you can view the activities for each processor individually, or for the system as a whole, either as a one-time snapshot or dynamically.

In order to use this tool, you will need to install sysstat:

# yum update && yum install sysstat              [On CentOS based systems]
# aptitude update && aptitude install sysstat    [On Ubuntu based systems]
# zypper update && zypper install sysstat        [On openSUSE systems]

Read more about sysstat and its utilities in Learn Sysstat and Its Utilities mpstat, pidstat, iostat and sar in Linux.

Once you have installed mpstat, use it to generate reports of processor statistics.

To display 3 global reports of CPU utilization (-u) for all CPUs (as indicated by -P ALL) at a 2-second interval, do:

# mpstat -P ALL -u 2 3
Sample Output
Linux 3.19.0-32-generic (tecmint.com) 	Wednesday 30 March 2016 	_x86_64_	(4 CPU)

11:41:07  IST  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
11:41:09  IST  all    5.85    0.00    1.12    0.12    0.00    0.00    0.00    0.00    0.00   92.91
11:41:09  IST    0    4.48    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   94.53
11:41:09  IST    1    2.50    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   97.00
11:41:09  IST    2    6.44    0.00    0.99    0.00    0.00    0.00    0.00    0.00    0.00   92.57
11:41:09  IST    3   10.45    0.00    1.99    0.00    0.00    0.00    0.00    0.00    0.00   87.56

11:41:09  IST  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
11:41:11  IST  all   11.60    0.12    1.12    0.50    0.00    0.00    0.00    0.00    0.00   86.66
11:41:11  IST    0   10.50    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   88.50
11:41:11  IST    1   14.36    0.00    1.49    2.48    0.00    0.00    0.00    0.00    0.00   81.68
11:41:11  IST    2    2.00    0.50    1.00    0.00    0.00    0.00    0.00    0.00    0.00   96.50
11:41:11  IST    3   19.40    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   79.60

11:41:11  IST  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
11:41:13  IST  all    5.69    0.00    1.24    0.00    0.00    0.00    0.00    0.00    0.00   93.07
11:41:13  IST    0    2.97    0.00    1.49    0.00    0.00    0.00    0.00    0.00    0.00   95.54
11:41:13  IST    1   10.78    0.00    1.47    0.00    0.00    0.00    0.00    0.00    0.00   87.75
11:41:13  IST    2    2.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   97.00
11:41:13  IST    3    6.93    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   92.57

Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    7.71    0.04    1.16    0.21    0.00    0.00    0.00    0.00    0.00   90.89
Average:       0    5.97    0.00    1.16    0.00    0.00    0.00    0.00    0.00    0.00   92.87
Average:       1    9.24    0.00    1.16    0.83    0.00    0.00    0.00    0.00    0.00   88.78
Average:       2    3.49    0.17    1.00    0.00    0.00    0.00    0.00    0.00    0.00   95.35
Average:       3   12.25    0.00    1.16    0.00    0.00    0.00    0.00    0.00    0.00   86.59

To view the same statistics for a specific CPU (CPU 0 in the following example), use:

# mpstat -P 0 -u 2 3
Sample Output
Linux 3.19.0-32-generic (tecmint.com) 	Wednesday 30 March 2016 	_x86_64_	(4 CPU)

11:42:08  IST  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
11:42:10  IST    0    3.00    0.00    0.50    0.00    0.00    0.00    0.00    0.00    0.00   96.50
11:42:12  IST    0    4.08    0.00    0.00    2.55    0.00    0.00    0.00    0.00    0.00   93.37
11:42:14  IST    0    9.74    0.00    0.51    0.00    0.00    0.00    0.00    0.00    0.00   89.74
Average:       0    5.58    0.00    0.34    0.85    0.00    0.00    0.00    0.00    0.00   93.23

The output of the above commands shows these columns:

  1. CPU: Processor number as an integer, or the word all as an average for all processors.
  2. %usr: Percentage of CPU utilization while running user level applications.
  3. %nice: Same as %usr, but with nice priority.
  4. %sys: Percentage of CPU utilization that occurred while executing kernel applications. This does not include time spent dealing with interrupts or handling hardware.
  5. %iowait: Percentage of time during which the given CPU (or all CPUs) was idle while there was an outstanding disk I/O request scheduled on that CPU. A more detailed explanation (with examples) can be found here.
  6. %irq: Percentage of time spent servicing hardware interrupts.
  7. %soft: Same as %irq, but with software interrupts.
  8. %steal: Percentage of time spent in involuntary wait (steal or stolen time) while another virtual machine, as guest, was “winning” the hypervisor’s attention while competing for the CPU(s). This value should be kept as small as possible; a high value in this field means the virtual machine is stalling – or soon will be.
  9. %guest: Percentage of time spent running a virtual processor.
  10. %idle: Percentage of time when the CPU(s) were not executing any tasks. If you observe a low value in this column, that is an indication that the system is under heavy load. In that case, you will need to take a closer look at the process list, as we will discuss in a minute, to determine what is causing it.

To place the processor under a somewhat high load, run the following commands and then execute mpstat (as indicated) in a separate terminal:

# dd if=/dev/zero of=test.iso bs=1G count=1
# mpstat -u -P 0 2 3
# ping -f localhost # Interrupt with Ctrl + C after mpstat below completes
# mpstat -u -P 0 2 3

Finally, compare to the output of mpstat under “normal” circumstances:

Report Linux Processors Related Statistics

As you can see in the image above, CPU 0 was under a heavy load during the first two examples, as indicated by the %idle column.

In the next section we will discuss how to identify these resource-hungry processes, how to obtain more information about them, and how to take appropriate action.

Reporting Linux Processes

To list processes sorted by CPU usage, we will use the well known ps command with the -eo option (to select all processes with a user-defined format) and the --sort option (to specify a custom sorting order), like so:

# ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%cpu

The above command will only show the PID, PPID, the command associated with the process, and the percentages of CPU and RAM usage, sorted by the percentage of CPU usage in descending order. When executed during the creation of the .iso file, here are the first few lines of the output:

Find Linux Processes By CPU Usage
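Similarly, you can sort by memory usage instead, simply by changing the sort key:

# ps -eo pid,ppid,cmd,%cpu,%mem --sort=-%mem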

Once we have identified a process of interest (such as the one with PID=2822), we can navigate to /proc/PID (/proc/2822 in this case) and do a directory listing.

This directory is where several files and subdirectories with detailed information about this particular process are kept while it is running.

For example:
  1. /proc/2822/io contains IO statistics for the process (number of characters and bytes read and written, among others, during IO operations).
  2. /proc/2822/attr/current shows the current SELinux security attributes of the process.
  3. /proc/2822/cgroup describes the control groups (cgroups for short) to which the process belongs if the CONFIG_CGROUPS kernel configuration option is enabled, which you can verify with:

# cat /boot/config-$(uname -r) | grep -i cgroups

If the option is enabled, you should see:

CONFIG_CGROUPS=y

Using cgroups you can manage the amount of allowed resource usage on a per-process basis as explained in Chapters 1 through 4 of the Red Hat Enterprise Linux 7 Resource Management guide, in Chapter 9 of the openSUSE System Analysis and Tuning guide, and in the Control Groups section of the Ubuntu 14.04 Server documentation.

/proc/2822/fd is a directory that contains one symbolic link for each file descriptor the process has opened. The following image shows this information for the process that was started in tty1 (the first terminal) to create the .iso image:

Find Linux Process Information

The above image shows that stdin (file descriptor 0), stdout (file descriptor 1), and stderr (file descriptor 2) are mapped to /dev/zero, /root/test.iso, and /dev/tty1, respectively.

More information about /proc can be found in “The /proc filesystem” document kept and maintained by Kernel.org, and in the Linux Programmer’s Manual.

Setting Resource Limits on a Per-User Basis in Linux

If you are not careful and allow any user to run an unlimited number of processes, you may eventually experience an unexpected system shutdown or get locked out as the system enters an unusable state. To prevent this from happening, you should place a limit on the number of processes users can start.

To do this, edit /etc/security/limits.conf and add the following line at the bottom of the file to set the limit:

*   	hard	nproc   10

The first field can be used to indicate either a user, a group, or all of them (*), whereas the second field enforces a hard limit on the number of processes (nproc), set here to 10. To apply the changes, logging out and back in is enough.
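A sketch of more targeted variants (the user and group names are just examples; the @ prefix denotes a group):

gacanepa        hard    nproc   20
@developers     hard    nproc   50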

Thus, let’s see what happens if a certain user other than root (either a legitimate one or not) attempts to start a shell fork bomb. Had we not implemented limits, this would initially launch two instances of a function, and then duplicate each of them in a never-ending loop, eventually bringing your system to a crawl.
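For reference, the classic bash fork bomb looks like this (do NOT run it outside a disposable test machine):

:(){ :|:& };:

Here : is simply a function name; the function pipes a call to itself into another backgrounded call to itself, and the trailing : invokes it.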

However, with the above restriction in place, the fork bomb does not succeed but the user will still get locked out until the system administrator kills the process associated with it:

Run Shell Fork Bomb

TIP: Other restrictions made possible by ulimit are documented in the limits.conf file.

Other Linux Process Management Tools

In addition to the tools discussed previously, a system administrator may also need to:

a) Modify the execution priority (use of system resources) of a process using renice. This means that the kernel will allocate more or less system resources to the process based on the assigned priority (a number commonly known as “niceness” in a range from -20 to 19).

The lower the value, the greater the execution priority. Regular users (other than root) can only modify the niceness of processes they own to a higher value (meaning a lower execution priority), whereas root can modify this value for any process, and may increase or decrease it.

The basic syntax of renice is as follows:

# renice [-n] <new priority> <UID, GID, PGID, or empty> identifier

If the argument after the new priority value is not present (empty), it is set to PID by default. In that case, the niceness of process with PID=identifier is set to <new priority>.
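For example (the PID is illustrative):

# renice -n 10 -p 2822    # lower the priority of PID 2822 (higher niceness)
# renice -n -5 -p 2822    # raise its priority (negative values require root)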

b) Interrupt the normal execution of a process when needed. This is commonly known as “killing” the process. Under the hood, this means sending the process a signal to finish its execution properly and release any used resources in an orderly manner.

To kill a process, use the kill command as follows:

# kill PID
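By default, kill sends SIGTERM (signal 15), which asks the process to terminate gracefully; SIGKILL (signal 9) should be reserved as a last resort (the PID is illustrative):

# kill -15 2822    # graceful termination (SIGTERM, the default)
# kill -9 2822     # forceful termination (SIGKILL), last resort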

Alternatively, you can use pkill to terminate all processes of a given owner (-u), or a group owner (-G), or even those processes which have a PPID in common (-P). These options may be followed by the numeric representation or the actual name as identifier:

# pkill [options] identifier

For example,

# pkill -G 1000

will kill all processes owned by group with GID=1000.

And,

# pkill -P 4993 

will kill all processes whose PPID is 4993.

Before running pkill, it is a good idea to test the results with pgrep first, perhaps also using the -l option to list the processes’ names. pgrep takes the same options, but only returns the PIDs of the processes that would be killed if pkill were used, without taking any further action.

# pgrep -l -u gacanepa

This is illustrated in the next image:

Find User Running Processes in Linux

Summary

In this article we have explored a few ways to monitor resource usage in order to verify the integrity and availability of critical hardware and software components in a Linux system.

We have also learned how to take appropriate action (either by adjusting the execution priority of a given process or by terminating it) under unusual circumstances.

We hope the concepts explained in this tutorial have been helpful. If you have any questions or comments, feel free to reach us using the contact form below.

How to Change Kernel Runtime Parameters in a Persistent and Non-Persistent Way

In Part 13 of this LFCS (Linux Foundation Certified Sysadmin) series we explained how to use GRUB to modify the behavior of the system by passing options to the kernel for the ongoing boot process.

Similarly, you can use the command line in a running Linux system to alter certain runtime kernel parameters as a one-time modification, or permanently by editing a configuration file.

Thus, you can enable or disable kernel parameters on the fly, without much difficulty, when a change in the way the system is expected to operate makes it necessary.

Introducing the /proc Filesystem

The latest specification of the Filesystem Hierarchy Standard indicates that /proc represents the default method for handling process and system information as well as other kernel and memory information. Particularly, /proc/sys is where you can find all the information about devices, drivers, and some kernel features.

The actual internal structure of /proc/sys depends heavily on the kernel being used, but you are likely to find the following directories inside. In turn, each of them will contain other subdirectories where the values for each parameter category are maintained:

  1. dev: parameters for specific devices connected to the machine.
  2. fs: filesystem configuration (quotas and inodes, for example).
  3. kernel: kernel-specific configuration.
  4. net: network configuration.
  5. vm: use of the kernel’s virtual memory.

To modify the kernel runtime parameters we will use the sysctl command. The exact number of parameters that can be modified can be viewed with:

# sysctl -a | wc -l

If you want to view the complete list of Kernel parameters, just do:

# sysctl -a 

As the output of the above command consists of a lot of lines, we can pipe it to less to inspect it more carefully:

# sysctl -a | less

Let’s take a look at the first few lines. Please note that the first characters in each line match the names of the directories inside /proc/sys:

Understand Linux /proc Filesystem

For example, the highlighted line:

dev.cdrom.info = drive name:        	sr0

indicates that sr0 is an alias for the optical drive. In other words, that is how the kernel “sees” that drive and uses that name to refer to it.

In the following section we will explain how to change other “more important” kernel runtime parameters in Linux.

How to Change or Modify Linux Kernel Runtime Parameters

Based on what we have explained so far, it is easy to see that the name of a parameter matches the directory structure inside /proc/sys where it can be found.

For example:

dev.cdrom.autoclose → /proc/sys/dev/cdrom/autoclose
net.ipv4.ip_forward → /proc/sys/net/ipv4/ip_forward

That said, we can view the value of a particular Linux kernel parameter using either sysctl followed by the name of the parameter or reading the associated file:

# sysctl dev.cdrom.autoclose
# cat /proc/sys/dev/cdrom/autoclose
# sysctl net.ipv4.ip_forward
# cat /proc/sys/net/ipv4/ip_forward

Check Linux Kernel Parameters

Set or Modify Linux Kernel Parameters

To set the value of a kernel parameter we can also use sysctl, this time with the -w option, followed by the parameter’s name, an equal sign, and the desired value.

Another method consists of using echo to overwrite the file associated with the parameter. In other words, the following methods are equivalent ways to disable the packet forwarding functionality in our system (which, by the way, should be the default value when a box is not supposed to pass traffic between networks):

# echo 0 > /proc/sys/net/ipv4/ip_forward
# sysctl -w net.ipv4.ip_forward=0

It is important to note that kernel parameters that are set using sysctl will only be enforced during the current session and will disappear when the system is rebooted.

To set these values permanently, edit /etc/sysctl.conf with the desired values. For example, to disable packet forwarding in /etc/sysctl.conf make sure this line appears in the file:

net.ipv4.ip_forward=0

Then run the following command to apply the changes to the running configuration:

# sysctl -p

Other examples of important kernel runtime parameters are:

fs.file-max specifies the maximum number of file handles the kernel can allocate for the system. Depending on the intended use of your system (web / database / file server, to name a few examples), you may want to change this value to meet the system’s needs.

Otherwise, at best you will receive a “Too many open files” error message, and at worst the operating system may fail to boot.

If due to an innocent mistake you find yourself in this last situation, boot in single user mode (as explained in Part 13 – Configure and Troubleshoot Linux Grub Boot Loader) and edit /etc/sysctl.conf as instructed earlier. To set the same restriction on a per-user basis, refer to Part 14 – Monitor and Set Linux Process Limit Usage of this series.

kernel.sysrq is used to enable the SysRq key in your keyboard (also known as the print screen key) so as to allow certain key combinations to invoke emergency actions when the system has become unresponsive.

The default value (16) indicates that the system will honor the Alt+SysRq+key combination and perform the actions listed in the sysrq.c documentation found in kernel.org (where key is one letter in the b-z range). For example, Alt+SysRq+b will reboot the system forcefully (use this as a last resort if your server is unresponsive).

Warning! Do not attempt to press this key combination on a virtual machine because it may force your host system to reboot!

When set to 1, net.ipv4.icmp_echo_ignore_all will ignore ping requests and drop them at the kernel level. This is shown in the image below – note how ping requests are lost after setting this kernel parameter:

Block Ping Requests in Linux

A better and easier way to set individual runtime parameters is using .conf files inside /etc/sysctl.d, grouping them by categories.

For example, instead of setting net.ipv4.ip_forward=0 and net.ipv4.icmp_echo_ignore_all=1 in /etc/sysctl.conf, we can create a new file named net.conf inside /etc/sysctl.d:

# echo "net.ipv4.ip_forward=0" > /etc/sysctl.d/net.conf
# echo "net.ipv4.icmp_echo_ignore_all=1" >> /etc/sysctl.d/net.conf

If you choose to use this approach, do not forget to remove those same lines from /etc/sysctl.conf.

Summary

In this article we have explained how to modify kernel runtime parameters, both persistently and non-persistently, using sysctl, /etc/sysctl.conf, and files inside /etc/sysctl.d.

In the sysctl docs you can find more information on the meaning of each variable. Those files represent the most complete source of documentation about the parameters that can be set via sysctl.

Did you find this article useful? We surely hope you did. Don’t hesitate to let us know if you have any questions or suggestions to improve.

How to Set Access Control Lists (ACLs) and Disk Quotas for Users and Groups

Access Control Lists (also known as ACLs) are a feature of the Linux kernel that allows us to define more fine-grained access rights for files and directories than those specified by regular ugo/rwx permissions.

For example, the standard ugo/rwx permissions do not allow us to set different permissions for different individual users or groups. With ACLs this is relatively easy to do, as we will see in this article.

Checking File System Compatibility with ACLs

To ensure that your file systems currently support ACLs, you should check that they have been mounted using the acl option. To do that, we will use tune2fs for ext2/3/4 file systems as indicated below. Replace /dev/sda1 with the device or file system you want to check:

# tune2fs -l /dev/sda1 | grep "Default mount options:"

Note: With XFS, Access Control Lists are supported out of the box.

In the following ext4 file system, we can see that ACLs have been enabled for /dev/xvda2:

# tune2fs -l /dev/xvda2 | grep "Default mount options:"

Check ACL Enabled on Linux Filesystem

If the above command does not indicate that the file system has been mounted with support for ACLs, it is most likely due to the noacl option being present in /etc/fstab.

In that case, remove it, unmount the file system, and then mount it again, or simply reboot your system after saving the changes to /etc/fstab.

Introducing ACLs in Linux

To illustrate how ACLs work, we will use a group named developers and add users walterwhite and saulgoodman (yes, I am a Breaking Bad fan!) to it:

# groupadd developers
# useradd walterwhite
# useradd saulgoodman
# usermod -a -G developers walterwhite
# usermod -a -G developers saulgoodman

Before we proceed, let’s verify that both users have been added to the developers group:

# id walterwhite
# id saulgoodman

Find User ID in Linux

Let’s now create a directory called test in /mnt, and a file named acl.txt inside (/mnt/test/acl.txt).

Then we will set the group owner to developers and change its default ugo/rwx permissions recursively to 770 (thus granting read, write, and execute permissions to both the owner and the group owner of the file):

# mkdir /mnt/test
# touch /mnt/test/acl.txt
# chgrp -R developers /mnt/test
# chmod -R 770 /mnt/test

As expected, you can write to /mnt/test/acl.txt as walterwhite or saulgoodman:

# su - walterwhite
# echo "My name is Walter White" > /mnt/test/acl.txt
# exit
# su - saulgoodman
# echo "My name is Saul Goodman" >> /mnt/test/acl.txt
# exit

Verify ACL Rules on Users

So far so good. However, we will soon see a problem when we need to grant write access to /mnt/test/acl.txt for another user who is not in the developers group.

Standard ugo/rwx permissions would require that the new user be added to the developers group, but that would give him/her the same permissions over all the objects owned by the group. That is precisely where ACLs come in handy.

Setting ACLs in Linux

There are two types of ACLs: access ACLs, which are applied to a file or directory, and default ACLs, which are optional and can only be applied to a directory.

If files inside a directory where a default ACL has been set do not have an ACL of their own, they inherit the default ACL of their parent directory.

Let’s give user gacanepa read and write access to /mnt/test/acl.txt. Before doing that, let’s take a look at the current ACL settings in that directory with:

# getfacl /mnt/test/acl.txt

Then, to change the ACL on the file, use u: followed by the username, and :rw to indicate read/write permissions:

# setfacl -m u:gacanepa:rw /mnt/test/acl.txt

And run getfacl on the file again to compare. The following image shows the “Before” and “After”:

# getfacl /mnt/test/acl.txt

Set ACL on Linux Users

Next, we will need to give others execute permissions on the /mnt/test directory:

# chmod +x /mnt/test

Keep in mind that in order to access the contents of a directory, a regular user needs execute permissions on that directory.

User gacanepa should now be able to write to the file. Switch to that user account and execute the following command to confirm:

# echo "My name is Gabriel Cánepa" >> /mnt/test/acl.txt

To set a default ACL on a directory (whose contents will inherit it unless explicitly overridden), add d: before the rule and specify a directory instead of a file name:

# setfacl -m d:o:r /mnt/test
# getfacl /mnt/test/

The ACL above will allow users not in the owner group to have read access to the future contents of the /mnt/test directory. Note the difference in the output of getfacl /mnt/test before and after the change:

Set Default ACL to Linux Directory

To remove a specific ACL, replace -m in the commands above with -x. For example,

# setfacl -x d:o /mnt/test

Alternatively, you can also use the -b option to remove ALL ACLs in one step:

# setfacl -b /mnt/test

For more information and examples on the use of ACLs, please refer to Chapter 10, Section 2 of the openSUSE Security Guide (also available for download at no cost in PDF format).

Set Linux Disk Quotas on Users and Filesystems

Storage space is another resource that must be carefully used and monitored. To do that, quotas can be set on a file system basis, either for individual users or for groups.

Thus, a limit is placed on the disk usage allowed for a given user or a specific group, and you can rest assured that your disks will not be filled to capacity by a careless (or malicious) user.

The first thing you must do in order to enable quotas on a file system is to mount it with the usrquota or grpquota (for user and group quotas, respectively) options in /etc/fstab.

For example, let’s enable user-based quotas on /dev/vg00/vol_backups and group-based quotas on /dev/vg00/vol_projects.

Note that the UUID is used to identify each file system.

UUID=f6d1eba2-9aed-40ea-99ac-75f4be05c05a /home/projects ext4 defaults,grpquota 0 0
UUID=e1929239-5087-44b1-9396-53e09db6eb9e /home/backups ext4 defaults,usrquota 0 0

Unmount and mount both file systems again so the new options take effect:

# umount /home/projects
# umount /home/backups
# mount /home/projects
# mount /home/backups

Then check that the usrquota and grpquota options are present in the output of mount (see highlighted below):

# mount | grep vg00

Check Linux User Quota and Group Quota

Finally, run the following commands to initialize and enable quotas:

# quotacheck -avugc
# quotaon -vu /home/backups
# quotaon -vg /home/projects

That said, let’s now assign quotas to the username and group we mentioned earlier. You can later disable quotas with quotaoff.

Setting Linux Disk Quotas

Let’s begin by setting an ACL on /home/backups for user gacanepa, which will give him read, write, and execute permissions on that directory:

# setfacl -m u:gacanepa:rwx /home/backups/

Then, run:

# edquota -u gacanepa

The above command will launch the text editor defined in the $EDITOR environment variable with a temporary file where we can set the limits.

We will make the soft limit 900 and the hard limit 1000 blocks (1024 bytes/block * 1000 blocks = 1024000 bytes ≈ 1 MB) of disk space usage, and place soft and hard limits of 20 and 25, respectively, on the number of files this user can create:

Linux Disk Quota For User

These settings will cause a warning to be shown to user gacanepa when he reaches either the 900-block or the 20-inode soft limit, with a default grace period of 7 days.

If the over-quota situation has not been eliminated by then (for example, by removing files), the soft limit will be enforced as a hard limit and this user will be prevented from using more storage space or creating more files.
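
If you prefer a non-interactive alternative to edquota (handy for scripts), the same limits can be set with the setquota command from the quota package. A minimal sketch using the values above (block soft, block hard, inode soft, inode hard):

# setquota -u gacanepa 900 1000 20 25 /home/backups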

To test, let’s have user gacanepa try to create a 2 MB file named test1 inside /home/backups:

# dd if=/dev/zero of=/home/backups/test1 bs=2M count=1
# ls -lh /home/backups/test1

Verify Linux User Quota on Disk

As you can see, the write operation fails due to the disk quota having been exceeded. Since only the first 1000 KB are written to disk, the result in this case will most likely be a corrupt file.

Similarly, you can create an ACL for the developers group in order to give members of that group rwx access to /home/projects:

# setfacl -m g:developers:rwx /home/projects/

And set the quota limits with:

# edquota -g developers

and edit the limits in the resulting temporary file, just like we did with user gacanepa earlier.

The grace period can be specified for any number of seconds, minutes, hours, days, weeks, or months by executing:

# edquota -t

and updating the values under Block grace period and Inode grace period.

As opposed to block or inode usage limits (which are set on a per-user or per-group basis), the grace period is set system-wide.

To report quotas, you can use quota -u [user] or quota -g [group] for a quick list or repquota -v [/path/to/filesystem] for a more detailed (verbose) and nicely formatted report.

Of course, you will want to replace [user], [group], and [/path/to/filesystem] with the specific user / group names and file system you want to check.
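
For example:

# quota -u gacanepa
# repquota -v /home/backups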

Summary

In this article we have explained how to set Access Control Lists and disk quotas for users and groups. Using both, you will be able to manage permissions and disk usage more effectively.

If you want to learn more about quotas, you can refer to the Quota Mini-HowTo in The Linux Documentation Project.

Needless to say, you can also count on us to answer questions. Just submit them using the comment form below and we will be more than glad to take a look.

How to Install Cygwin, a Linux-like Commandline Environment for Windows

During the last Microsoft Build Developer Conference held from March 30th to April 1st, Microsoft released an announcement and gave a presentation that surprised the industry: beginning with Windows 10 Insider build #14316, it would be possible to run Bash on Ubuntu on top of Windows.

Although this update has already been released by now, it is still in beta and is only available for insiders / developers and not for the public in general.

Without a doubt, when this feature reaches stable status and is available for everyone to use, it will be welcomed with open arms – especially by FOSS professionals who work with technologies (Python, Ruby, etc.) that are native to the Linux command line environment. Unfortunately, it will only be available in Windows 10 and not in previous versions.

However, Cygwin, a well-known and widely-used Linux-like environment for Windows, has been around for quite some time and has been extensively utilized by Linux pros whenever they’ve had the need to work on a Windows computer.

While foundationally different from “Bash on Ubuntu on Windows”, Cygwin is free software and provides a large set of GNU and Open Source tools that you can use as if you were on Linux, along with a DLL that provides substantial POSIX API functionality. On top of that, you can use Cygwin on all 32 and 64-bit Windows versions starting with XP SP3.

Downloading and Installing Cygwin

In this article we will guide you through setting up Cygwin with the most frequently used tools in the Linux command line. Depending on the available storage space and on your specific needs, you can later choose to install others very easily.

To install Cygwin (note that the same instructions apply to updating the software), we will need to download the Cygwin setup program for your version of Microsoft Windows. Once downloaded, double click on the .exe file to begin the installation and follow the steps outlined below to complete it.

Step 1 – Launch the installation process and choose “Install from Internet”:

Installing Cygwin

Step 2 – Select an existing directory where you want to install Cygwin and its installation files (Warning: don’t choose folders with spaces in their names):

Select Cygwin Installation Directory

Step 3 – Choose your Internet connection type and select an FTP or HTTP mirror (go to https://cygwin.com/mirrors.html to select a mirror near your geographical location and then click Add to insert the desired mirror in the site list) to proceed with the download:

Select Cygwin Connection Type

After you click Next in the last screen, some preliminary packages (which will guide the actual installation process) will be retrieved first. If the chosen mirror is not operational or does not contain all the necessary files, you will be prompted to use another one. You can also choose an FTP mirror if the HTTP counterpart does not work.

If everything goes as expected, within a matter of minutes you will be presented with the package selection screen. In my case, I ended up choosing ftp://mirrors.kernel.org after others failed.

Step 4 – Select the packages you want to install by clicking on each desired category. Note that you can also choose to install the source code. You can also search for packages using the input textbox. When you’re done selecting the packages you need, click Next.

Select Packages to Install under Cygwin

If you selected a package that has dependencies, you will be prompted to confirm the installation of dependencies as well.

Cygwin Setup

As is to be expected, the download time will depend on the number of packages you selected previously and their required dependencies. In any event, you should see the following screen after 15-20 minutes.

Select the desired options (Create icon on Desktop / Add icon to Start Menu) and click Finish to complete the installation:

Cygwin Installation Setup

After you have successfully completed steps 1 through 4, you can open Cygwin by double clicking its icon on the Windows desktop, as we will see in the next section.

Launching and using Cygwin

Once you have launched Cygwin you can start typing commands just as you would in a Linux terminal. However, you should note that, just as in Linux, the initial directory is a virtual folder called /home/username.

The image below shows the result of running the following commands in my recently finished Cygwin installation.

Print current date:

$ echo "Today is $(date +%F)" 

The initial directory is found inside a folder named home that is located in the directory where Cygwin was installed (C:\Cygwin\home in my case; the /cygdrive/c virtual folder maps to C:\):

$ pwd

Change directory to the root of the C: drive:

$ cd C:

Create a directory:

$ mkdir 'C:\Users\Gabriel\test'

Redirect the output of the command to a file:

$ ls -l > 'C:\Users\Gabriel\test\files.txt'

View contents of the root directory of the C: drive.

$ cat 'C:\Users\Gabriel\test\files.txt' 

Running Linux Commands on Cygwin
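
Incidentally, Cygwin ships with a small helper called cygpath that converts between Windows and POSIX path styles, which can save some quoting headaches. A quick sketch (the exact output depends on where Cygwin is installed):

$ cygpath -u 'C:\Users\Gabriel'
$ cygpath -w /home/Gabriel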

If you installed vim or another text editor, you can also invoke it as usual to create a bash shell script. The following example will, beginning at the directory given as a parameter, 1) search for files with permissions set to 777, print their names, and change their permissions to 644, and 2) print and delete empty files.

$ vim fixperms.sh

Add the following content to the file:

#!/bin/bash
DIR=$1
echo "The permissions of the following files are being changed to 644: "
find "$DIR" -type f -perm 777 -print -exec chmod 644 {} +
echo "The following empty files are being removed: "
find "$DIR" -type f -empty -print -delete

Feel free to add other commands to the above script if you wish, then give it execute permissions and run it:

$ chmod +x fixperms.sh
$ ./fixperms.sh .

Let’s see the script in action:

Run Shell Scripts in Cygwin

As you can see, we were able to run a bash shell script in Windows (using the GNU version of the find command) with the help of Cygwin – and that was just an example.

Think for a minute. What other examples of classic Linux commands would you like to see? Feel free to give it a try, let us know how it goes, and don’t hesitate to ask us for help.

Summary

In this article we have explained how to install Cygwin, a Linux-like command line environment for Windows. As such, keep in mind that it is NOT a method to run native Linux applications in Windows, although you can compile applications from source code if you want to do so.

If you find out that some of the commands you use most frequently are not available, restart the installation and search for the specific packages when you reach Step 4 (feel free to repeat this process as many times as needed). The number of packages available is amazing and the chances of not being able to find what you need are next to zero.

If you have had the chance to use Cygwin already, we would appreciate it if you can leave a comment using the form below to tell us about your experience. If not, we certainly hope we ignited a spark of interest with this article, and your feedback is highly appreciated as well.

An Ultimate Guide to Setting Up FTP Server to Allow Anonymous Logins

In an age when massive remote storage is rather common, it may seem strange to talk about sharing files using FTP (File Transfer Protocol).

However, it is still used for file exchange where security does not represent an important consideration and for public downloads of documents, for example.

It’s for that reason that learning how to configure an FTP server and enable anonymous downloads (not requiring authentication) is still a relevant topic.

In this article we will explain how to set up an FTP server to allow connections in passive mode, where the client initiates both channels of communication to the server (one for commands and the other for the actual transmission of files, also known as the control and data channels, respectively).

You can read more about passive and active modes (which we will not cover here) in Active FTP vs. Passive FTP, a Definitive Explanation.

That said, let’s begin!

Setting up an FTP Server in Linux

To set up FTP in our server we will install the following packages:

# yum install vsftpd ftp         [CentOS]
# aptitude install vsftpd ftp    [Ubuntu]
# zypper install vsftpd ftp      [openSUSE]

The vsftpd package is an implementation of an FTP server. The name of the package stands for Very Secure FTP Daemon. On the other hand, ftp is the client program that will be used to access the server.

Keep in mind that during the exam, you will be given only one VPS where you will need to install both client and server, so that is precisely the same approach that we will follow in this article.

In CentOS and openSUSE, you will be required to start and enable the vsftpd service:

# systemctl start vsftpd && systemctl enable vsftpd

In Ubuntu, vsftpd should be started and set to start on subsequent boots automatically after the installation. If not, you can start it manually with:

$ sudo service vsftpd start

Once vsftpd is installed and running, we can proceed to configure our FTP server.

Configuring the FTP Server in Linux

At any point, you can refer to man vsftpd.conf for further configuration options. We will set the most common options and mention their purpose in this guide.

As with any other configuration file, it is important to make a backup copy of the original before making changes:

# cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd.conf.orig

Then open /etc/vsftpd/vsftpd.conf (the main configuration file) and edit the following options as indicated:

1. Make sure you allow anonymous access to the server without a password (we will use the /storage/ftp directory for this example – that’s where we will store documents for anonymous users to access):

anonymous_enable=YES
no_anon_password=YES
anon_root=/storage/ftp/

If you omit the last setting, the ftp directory will default to /var/ftp (the home directory of the dedicated ftp user that was created during installation).
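
If you go with a custom location such as /storage/ftp, keep in mind that the directory must exist and be readable by the anonymous user; a minimal sketch:

# mkdir -p /storage/ftp
# chmod 755 /storage/ftp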

2. To enable read-only access (thus disabling file uploads to the server), set the following variable to NO:

write_enable=NO

Important: Only use steps #3 and #4 if you choose to disable the anonymous logins.

3. Likewise, you may want to also allow local users to log in with their system credentials to the FTP server. Later in this article we will show you how to restrict them to their respective home directories to store and retrieve files using FTP:

local_enable=YES

If SELinux is in enforcing mode, you will also need to set the ftp_home_dir flag to on so that FTP is allowed to read and write files in users’ home directories. First, check its current value:

# getsebool ftp_home_dir

If it is off, you can enable it permanently with:

# setsebool -P ftp_home_dir 1

The expected output is shown below:

SELinux – Enable FTP on Home Directories

4. In order to restrict authenticated system users to their home directories, we will use:

chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list

With the above chroot settings and an empty /etc/vsftpd/chroot_list file (which YOU need to create), you will restrict ALL system users to their home directories.

Important: Please note this still requires that you ensure that none of them has write permissions to the top-level directory of their chroot.

If you want to allow one or more specific users outside their home directories, insert the usernames in /etc/vsftpd/chroot_list, one per line.
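
For example, to exempt the gacanepa account used earlier in this series:

# echo "gacanepa" >> /etc/vsftpd/chroot_list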

5. In addition, the following settings will allow you to limit the available bandwidth for anonymous logins (10 KB) and authenticated users (20 KB) in bytes per second, and restrict the number of simultaneous connections per IP address to 5:

anon_max_rate=10240
local_max_rate=20480
max_per_ip=5

6. We will restrict the data channel to TCP ports 15000 through 15500 in the server. Note this is an arbitrary choice and you can use a different range if you wish.

Add the following lines to /etc/vsftpd/vsftpd.conf if they are not already present:

pasv_enable=YES
pasv_max_port=15500
pasv_min_port=15000

7. Finally, you can set a welcome message to be shown each time a user accesses the server. A little information without further details will do:

ftpd_banner=This is a test FTP server brought to you by Tecmint.com
8. Now don’t forget to restart the service in order to apply the new configuration:

# systemctl restart vsftpd      [CentOS]
$ sudo service vsftpd restart   [Ubuntu]

9. Allow FTP traffic through the firewall.

On FirewallD:

# firewall-cmd --add-service=ftp
# firewall-cmd --add-service=ftp --permanent
# firewall-cmd --add-port=15000-15500/tcp
# firewall-cmd --add-port=15000-15500/tcp --permanent

On IPTables:

# iptables --append INPUT --protocol tcp --destination-port 21 -m state --state NEW,ESTABLISHED --jump ACCEPT
# iptables --append INPUT --protocol tcp --destination-port 15000:15500  -m state --state ESTABLISHED,RELATED --jump ACCEPT

Regardless of the distribution, we will need to load the ip_conntrack_ftp module:

# modprobe ip_conntrack_ftp 

And make it persistent across boots. On CentOS and openSUSE this means adding the module name to the IPTABLES_MODULES variable in /etc/sysconfig/iptables-config like so:

IPTABLES_MODULES="ip_conntrack_ftp"

whereas in Ubuntu you’ll want to add the module name at the bottom of /etc/modules (note that a plain sudo echo with a redirect would fail, since the redirection is performed by your unprivileged shell):

$ echo "ip_conntrack_ftp" | sudo tee -a /etc/modules

10. Last but not least, make sure the server is listening on IPv4 or IPv6 sockets (but not both!). We will use IPv4 here:

listen=YES

We will now test the newly installed and configured FTP server.

Testing the FTP Server in Linux

We will create a regular PDF file (in this case, the PDF version of the vsftpd.conf manpage) in /storage/ftp.

Note that you may need to install the ghostscript package (which provides ps2pdf) separately, or use another file of your choice:

# man -t vsftpd.conf | ps2pdf - /storage/ftp/vsftpd.conf.pdf

To test, we will use both a web browser (by going to ftp://Your_IP_here) and the command line client (ftp). Let’s see what happens when we enter that FTP address in our browser:

FTP Web Directory Browsing

As you can see, the PDF file we saved earlier in /storage/ftp is available for you to download.

On the command line, type:

# ftp localhost

And enter anonymous as the user name. You should not be prompted for your password:

Verify FTP Connection

To retrieve files using the command line, use the get command followed by the filename, like so:

# get vsftpd.conf.pdf

and you’re good to go.

Summary

In this guide we have explained how to properly set up an FTP server and use it to allow anonymous logins. You can also follow the instructions given to disable such logins and only allow local users to authenticate using their system credentials (not illustrated in this article since it is not required on the exam).

If you run into any issues, please share with us the output of the following command, which will strip the commented and empty lines from the configuration file, and we will be more than glad to take a look:

# grep -Eiv '(^$|^#)' /etc/vsftpd/vsftpd.conf

Mine is as below (note that there are other configuration directives that we did not cover in this article as they are set by default, so no change was required on our side):

local_enable=NO
write_enable=NO
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
ftpd_banner=This is a test FTP server brought to you by Tecmint.com
listen=YES
listen_ipv6=NO
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
anon_max_rate=10240
local_max_rate=20480
max_per_ip=5
anon_root=/storage/ftp
no_anon_password=YES
allow_writeable_chroot=YES
pasv_enable=YES
pasv_min_port=15000
pasv_max_port=15500

Particularly, this directive

xferlog_enable=YES

will enable the transfer log in /var/log/xferlog. Make sure you look in that file while troubleshooting.

Additionally, feel free to drop us a note using the comment form below if you have questions or any comments about this article.

Setup a Basic Recursive Caching DNS Server and Configure Zones for Domain

Imagine what it would be like if we had to remember the IP addresses of all the websites that we use on a daily basis. Even if we had a prodigious memory, the process to browse to a website would be ridiculously slow and time-consuming.

And what about if we needed to visit multiple websites or use several applications that reside in the same machine or virtual host? That would be one of the worst headaches I can think of – not to mention the possibility that the IP address associated with a website or application can be changed without prior notice.

Just the very thought of it would be enough reason to stop using the Internet or internal networks after a while.

That’s precisely what a world without Domain Name System (also known as DNS) would be. Fortunately, this service solves all of the issues mentioned above – even if the relationship between an IP address and a name changes.

For that reason, in this article we will learn how to configure and use a simple DNS server, a service that will allow you to translate domain names into IP addresses and vice versa.

Introducing DNS Name Resolution

For small networks that are not subject to frequent changes, the /etc/hosts file can be used as a rudimentary method of domain name to IP address resolution.

With a very simple syntax, this file allows us to associate a name (and / or an alias) with an IP address as follows:

[IP address] [name] [alias(es)]

For example,

192.168.0.1 gateway gateway.mydomain.com
192.168.0.2 web web.mydomain.com

Thus, you can reach the web machine either by its name, the web.mydomain.com alias, or its IP address.
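
You can verify how a given name is resolved through the configured sources (files first, then DNS, depending on the order set in /etc/nsswitch.conf) with getent:

# getent hosts web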

For larger networks, or those that are subject to frequent changes, using the /etc/hosts file to resolve domain names into IP addresses would not be an acceptable solution. That’s where the need for a dedicated service comes in.

Under the hood, a DNS server queries a large database in the form of a tree, which starts at the root (“.”) zone.

The following image will help us to illustrate:

DNS Name Resolution Diagram

In the image above, the root (.) zone contains the com, edu, and net domains. Each of these domains is (or can be) managed by a different organization to avoid depending on a big, central one. This allows requests to be properly distributed in a hierarchical way.

Let’s see what happens under the hood:

1. When a client makes a query to a DNS server for web1.sales.me.com, the server sends the query to the top (root) DNS server, which points the query to the name server in the .com zone.

This, in turn, sends the query to the next level name server (in the me.com zone), and then to sales.me.com. This process is repeated as many times as needed until the IP address of the FQDN (Fully Qualified Domain Name, web1.sales.me.com in this example) is returned by the name server of the zone where it belongs.

2. In this example, the name server of sales.me.com answers for the address web1.sales.me.com and returns the desired domain name-IP association and other information as well (if configured to do so).

All this information is sent to the original DNS server, which then passes it back to the client that requested it in the first place. To avoid repeating the same steps for future identical queries, the results of the query are stored in the DNS server.

These are the reasons why this kind of setup is commonly known as a recursive, caching DNS server.
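
If you have the bind-utils package (dnsutils in Ubuntu) installed, you can watch this delegation chain yourself with the +trace option of dig, which walks the tree from the root servers down to the final answer. For example, against a public name:

$ dig +trace tecmint.com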

Installing and Configuring a DNS Server

In Linux, the most used DNS server is bind (short for Berkeley Internet Name Daemon), which can be installed as follows:

# yum install bind bind-utils        [CentOS]
# zypper install bind bind-utils     [openSUSE]
# aptitude install bind9 bind9utils  [Ubuntu]

Once we have installed bind and related utilities, let’s make a copy of the configuration file before making any changes:

# cp /etc/named.conf /etc/named.conf.orig            [CentOS and openSUSE]
# cp /etc/bind/named.conf /etc/bind/named.conf.orig  [Ubuntu]

Then let’s open named.conf and head over to the options block, where we need to make sure the following settings are present to configure a recursive, caching server with IP 192.168.0.18/24 that can be accessed only by hosts in the same network (as a security measure).

The forwarders settings are used to indicate which name servers should be queried first (in the following example we use Google’s name servers) for hosts outside our domain:

options {
...
listen-on port 53 { 127.0.0.1; 192.168.0.18; };
allow-query 	{ localhost; 192.168.0.0/24; };
recursion yes;
forwarders {
    	8.8.8.8;
    	8.8.4.4;
};
...
}

Outside the options block we will define our sales.me.com zone (in Ubuntu this is usually done in a separate file called named.conf.local) that maps a domain with a given IP address and a reverse zone to map the IP address to the corresponding domain.

However, the actual configuration of each zone will go in separate files as indicated by the file directive (“master” indicates we will only use one DNS server).

Add the following blocks to named.conf file:

zone "sales.me.com." IN {
    type master;
    file "/var/named/sales.me.com.zone";
};
zone "0.168.192.in-addr.arpa" IN {
    type master;
    file "/var/named/0.162.198.in-addr.arpa.zone";
};

Note that in-addr.arpa (for IPv4 addresses) and ip6.arpa (for IPv6) are conventions for reverse zone configurations.

After saving the above changes to named.conf, we can check for errors as follows:

# named-checkconf /etc/named.conf

If any errors are found, the above command will output an informative message with the cause and the line where they are located. Otherwise, it will not return anything.

Configuring DNS Zones

In the files /var/named/sales.me.com.zone and /var/named/0.168.192.in-addr.arpa.zone we will configure the forward (domain → IP address) and reverse (IP address → domain) zones.

Let’s tackle the forward configuration first:

1. At the top of the file you will find a line beginning with the $TTL directive (short for Time To Live), which specifies how long the cached response should “live” before being replaced by the results of a new query.

In the line immediately below, we will reference our domain and set the email address where notifications should be sent (note that root.sales.me.com means root@sales.me.com).

2. A SOA (Start Of Authority) record indicates that this system is the authoritative nameserver for machines inside the sales.me.com domain.

The following settings are required when there are two nameservers (one master and one slave) per domain (although such is not our case since it is not required in the exam, they are presented here for your reference):

The Serial is used to distinguish one version of the zone definition file from a previous one (where settings could have changed). If the cached response points to a definition with a different serial, the query is performed again instead of feeding it back to the client.

In a setup with a slave (secondary) nameserver, Refresh indicates the amount of time until the secondary should check for a new serial from the master server.

In addition, Retry tells the server how often the secondary should attempt to contact the primary if no response from the primary has been received, whereas Expire indicates when the zone definition in the secondary is no longer valid after the master server could not be reached, and Negative TTL is the time that a Non-existent domain (NXdomain) should be cached.

3. An NS record indicates the authoritative DNS server for our domain (referenced by the @ sign at the beginning of the line).

4. An A record (for IPv4 addresses) or an AAAA (for IPv6 addresses) translates names into IP addresses.

In the example below:

dns: 192.168.0.18 (the DNS server itself)
web1: 192.168.0.29 (a web server inside the sales.me.com zone)
mail1: 192.168.0.28 (a mail server inside the sales.me.com zone)
mail2: 192.168.0.30 (another mail server)

5. An MX record indicates the names of the authorized mail transfer agents (MTAs) for this domain. The hostname should be prefaced by a number indicating the priority that the current mail server should have when there are two or more MTAs for the domain (the lower the value, the higher the priority – in the following example, mail1 is the primary whereas mail2 is the secondary MTA).

6. A CNAME record sets an alias (www.web1) for a host (web1).

IMPORTANT: The dot (.) at the end of the names is required.

$TTL	604800
@   	IN  	SOA 	sales.me.com. root.sales.me.com. (
                    	2016051101 ; Serial
                    	10800 ; Refresh
                    	3600  ; Retry
                    	604800 ; Expire
                    	604800) ; Negative TTL
;
@   	IN  	NS  	dns.sales.me.com.
dns 	IN  	A   	192.168.0.18
web1	IN  	A   	192.168.0.29
mail1   IN  	A   	192.168.0.28
mail2   IN  	A   	192.168.0.30
@   	IN  	MX  	10 mail1.sales.me.com.
@   	IN  	MX  	20 mail2.sales.me.com.
www.web1    	IN  	CNAME   web1

Let’s now take a look at the reverse zone configuration (/var/named/0.168.192.in-addr.arpa.zone). The SOA record is the same as in the previous file, whereas the last three lines with a PTR (pointer) record indicate the last octet in the IPv4 address of the mail1, web1, and mail2 hosts (192.168.0.28, 192.168.0.29, and 192.168.0.30, respectively).

$TTL	604800
@   	IN  	SOA 	sales.me.com. root.sales.me.com. (
                    	2016051101 ; Serial
                    	10800 ; Refresh
                    	3600  ; Retry
                    	604800 ; Expire
                    	604800) ; Minimum TTL
@   	IN  	NS  	dns.sales.me.com.
28  	IN  	PTR 	mail1.sales.me.com.
29  	IN  	PTR 	web1.sales.me.com.
30  	IN  	PTR 	mail2.sales.me.com.

You can check the zone files for errors with:

# named-checkzone sales.me.com /var/named/sales.me.com.zone
# named-checkzone 0.168.192.in-addr.arpa /var/named/0.168.192.in-addr.arpa.zone

The following image illustrates the expected output on success:

Check DNS Zone File Configuration Errors

Otherwise, you will get an error message stating the cause and how to fix it:

Fix DNS Zone Configuration Error

Once you have verified the main configuration file and the zone files, restart the named service to apply changes.

In CentOS and openSUSE, do:

# systemctl restart named

And don’t forget to enable it as well:

# systemctl enable named

In Ubuntu:

$ sudo service bind9 restart

Finally, you will have to edit the configuration of your main network interfaces so that they point to the new DNS server:

---- In /etc/sysconfig/network-scripts/ifcfg-enp0s3 for CentOS and openSUSE ----
DNS1=192.168.0.18 

---- In /etc/network/interfaces for Ubuntu ----
dns-nameservers 192.168.0.18 

and restart the network service to apply changes.

Testing the DNS Server

At this point we are ready to query our DNS server for local and outside names and addresses. The following commands will return the IP address associated with the host web1:

# host web1.sales.me.com
# host web1
# host www.web1

Query DNS on Domain Host

How can we find out who is handling emails for sales.me.com? It’s easy to find out – just query the MX records for the domain:

# host -t mx sales.me.com

Query MX Record Of Domain

Likewise, let’s perform a reverse query. This will help us find out the name behind an IP address:

# host 192.168.0.28
# host 192.168.0.29

DNS Reverse Query on IP Address

You can try the same operations for outside hosts:

# host -t mx linux.com
# host 8.8.8.8

Check Domain DNS Information

To verify that queries are indeed going through our DNS server, let’s enable logging:

# rndc querylog

And check the /var/log/messages file (in CentOS and openSUSE):

# host -t mx linux.com
# host 8.8.8.8

Verify DNS Queries in Log

To disable DNS logging, run the same command again (rndc querylog toggles query logging on and off):

# rndc querylog

In Ubuntu, enabling logging will require adding the following independent block (same level as the options block) to /etc/bind/named.conf:

logging {
	channel query_log {
    	file "/var/log/bind9/query.log";
    	severity dynamic;
    	print-category yes;
    	print-severity yes;
    	print-time yes;
	};
	category queries { query_log; };  
};

Note that the log file must exist and be writable by named.
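
A minimal sketch for creating it with the right ownership (on Ubuntu the daemon runs as the bind user):

$ sudo mkdir -p /var/log/bind9
$ sudo touch /var/log/bind9/query.log
$ sudo chown -R bind:bind /var/log/bind9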

Summary

In this article we have explained how to set up a basic recursive, caching DNS server and how to configure zones for a domain.

The mystery of name to IP resolution (and vice versa) is not such anymore! To ensure the proper operation of your DNS server, don’t forget to allow the service in your firewall (port 53, both TCP and UDP) as explained in Part 8 of the LFCE series (“Setup an Iptables Firewall to Enable Remote Access to Services“) and other articles on this same site such as Firewall Essentials and Network Traffic Control Using FirewallD and Iptables.

We hope you have found this article helpful – don’t hesitate to let us know if you have questions or comments. We always enjoy hearing from our readers!

Implementing Mandatory Access Control with SELinux or AppArmor in Linux

To overcome the limitations of, and to increase the security mechanisms provided by, standard ugo/rwx permissions and access control lists, the United States National Security Agency (NSA) devised a flexible Mandatory Access Control (MAC) method known as SELinux (short for Security Enhanced Linux). Its purpose is to restrict, among other things, the ability of processes to access or perform other operations on system objects (such as files, directories, network ports, etc.) to the least permission possible, while still allowing for later modifications to this model.

SELinux and AppArmor Security Hardening Linux

Another popular and widely-used MAC is AppArmor, which in addition to the features provided by SELinux, includes a learning mode that allows the system to “learn” how a specific application behaves, and to set limits by configuring profiles for safe application usage.

In CentOS 7, SELinux is incorporated into the kernel itself and is enabled in Enforcing mode by default (more on this in the next section), as opposed to openSUSE and Ubuntu, which use AppArmor.

In this article we will explain the essentials of SELinux and AppArmor and how to use one of these tools for your benefit depending on your chosen distribution.

Introduction to SELinux and How to Use it on CentOS 7

Security Enhanced Linux can operate in two different ways:

  1. Enforcing: SELinux denies access based on SELinux policy rules, a set of guidelines that control the security engine.
  2. Permissive: SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode.

SELinux can also be disabled. Although it is not an operation mode itself, it is still an option. However, learning how to use this tool is better than just ignoring it. Keep it in mind!

To display the current mode of SELinux, use getenforce. If you want to toggle the operation mode, use setenforce 0 (to set it to Permissive) or setenforce 1 (Enforcing).

Since this change will not survive a reboot, you will need to edit the /etc/selinux/config file and set the SELINUX variable to either enforcing, permissive, or disabled in order to achieve persistence across reboots:

How to Enable and Disable SELinux Mode
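
For example, to switch to permissive mode for the current session and also make the change persistent (the sed invocation is just one non-interactive way of editing the file):

# setenforce 0
# getenforce
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config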

On a side note, if getenforce returns Disabled, you will have to edit /etc/selinux/config with the desired operation mode and reboot. Otherwise, you will not be able to set (or toggle) the operation mode with setenforce.

One of the typical uses of setenforce consists of toggling between SELinux modes (from enforcing to permissive or the other way around) to troubleshoot an application that is misbehaving or not working as expected. If it works after you set SELinux to Permissive mode, you can be confident you’re looking at a SELinux permissions issue.

Two classic cases where we will most likely have to deal with SELinux are:

  1. Changing the default port a daemon listens on.
  2. Setting the DocumentRoot directive for a virtual host outside of /var/www/html.

Let’s take a look at these two cases using the following examples.

EXAMPLE 1: Changing the default port for the sshd daemon

One of the first things most system administrators do in order to secure their servers is change the port the SSH daemon listens on, mostly to discourage port scanners and external attackers. To do this, we use the Port directive in /etc/ssh/sshd_config followed by the new port number as follows (we will use port 9999 in this case):

Port 9999

After attempting to restart the service and checking its status we will see that it failed to start:

# systemctl restart sshd
# systemctl status sshd

Check SSH Service Status

If we take a look at /var/log/audit/audit.log, we will see that sshd was prevented from starting on port 9999 by SELinux because that port is reserved for the JBoss Management service (SELinux log messages include the word “AVC” so that they can be easily distinguished from other messages):

# cat /var/log/audit/audit.log | grep AVC | tail -1

Check Linux Audit Logs

At this point most people would probably disable SELinux, but we won’t. We will see that there’s a way for SELinux, and sshd listening on a different port, to live in harmony together. First, make sure you have the policycoreutils-python package installed:

# yum install policycoreutils-python

Then run the following command to view a list of the ports where SELinux allows sshd to listen. In the following image we can also see that port 9999 is reserved for another service and thus we can’t use it to run another service for the time being:

# semanage port -l | grep ssh

Of course we could choose another port for SSH, but if we are certain that we will not need to use this specific machine for any JBoss-related services, we can then modify the existing SELinux rule and assign that port to SSH instead:

# semanage port -m -t ssh_port_t -p tcp 9999

After that, we can use the first semanage command to check if the port was correctly assigned, or the -lC option (short for list custom):

# semanage port -lC
# semanage port -l | grep ssh

Assign Port to SSH

We can now restart SSH and connect to the service using port 9999. Note that this change WILL survive a reboot.

EXAMPLE 2: Choosing a DocumentRoot outside /var/www/html for a virtual host

If you need to set up an Apache virtual host using a directory other than /var/www/html as DocumentRoot (say, for example, /websrv/sites/gabriel/public_html):

DocumentRoot "/websrv/sites/gabriel/public_html"

Apache will refuse to serve the content because the index.html file has been labeled with the default_t SELinux type, which Apache can’t access:

# wget http://localhost/index.html
# ls -lZ /websrv/sites/gabriel/public_html/index.html

Labeled as default_t SELinux Type

As with the previous example, you can use the following command to verify that this is indeed a SELinux-related issue:

# cat /var/log/audit/audit.log | grep AVC | tail -1

Check Logs for SELinux Issues

To change the label of /websrv/sites/gabriel/public_html recursively to httpd_sys_content_t, do:

# semanage fcontext -a -t httpd_sys_content_t "/websrv/sites/gabriel/public_html(/.*)?"

The above command will grant Apache read-only access to that directory and its contents.

Finally, to apply the policy (and make the label change effective immediately), do:

# restorecon -R -v /websrv/sites/gabriel/public_html

Now you should be able to access the directory:

# wget http://localhost/index.html

Access Apache Directory

For more information on SELinux, refer to the Fedora 22 SELinux User’s and Administrator’s Guide.

Introduction to AppArmor and How to Use it on OpenSUSE and Ubuntu

The operation of AppArmor is based on profiles defined in plain text files where the allowed permissions and access control rules are set. Profiles are then used to place limits on how applications interact with processes and files in the system.

A set of profiles is provided out-of-the-box with the operating system, whereas others can be put in place either automatically by applications when they are installed or manually by the system administrator.

Like SELinux, AppArmor runs profiles in two modes. In enforce mode, applications are given the minimum permissions that are necessary for them to run, whereas in complain mode AppArmor allows an application to take restricted actions and saves the “complaints” resulting from that operation to a log (/var/log/kern.log, /var/log/audit/audit.log, and other logs inside /var/log/apparmor).

These logs contain lines with the word audit in them showing errors that would occur should the profile be run in enforce mode. Thus, you can try out an application in complain mode and adjust its behavior before running it under AppArmor in enforce mode.

The current status of AppArmor can be shown using:

$ sudo apparmor_status

Check AppArmor Status

The image above indicates that the profiles /sbin/dhclient, /usr/sbin/, and /usr/sbin/tcpdump are in enforce mode (that is true by default in Ubuntu).

Since not all applications ship with the associated AppArmor profiles, you can install the apparmor-profiles package, which provides profiles for programs that do not ship confinement rules of their own. By default, they are configured to run in complain mode so that system administrators can test them and choose which ones are desired.
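
If the package is not already present, you can install it with your distribution’s package manager:

$ sudo apt-get install apparmor-profiles    [Ubuntu]
# zypper install apparmor-profiles          [openSUSE]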

We will make use of apparmor-profiles since writing our own profiles is out of the scope of the LFCS certification. However, since profiles are plain text files, you can view them and study them in preparation to create your own profiles in the future.

AppArmor profiles are stored inside /etc/apparmor.d. Let’s take a look at the contents of that directory before and after installing apparmor-profiles:

$ ls /etc/apparmor.d

View AppArmor Directory Content

If you execute sudo apparmor_status again, you will see a longer list of profiles in complain mode. You can now perform the following operations:

To switch a profile currently in enforce mode to complain mode:

$ sudo aa-complain /path/to/file

and the other way around (complain –> enforce):

$ sudo aa-enforce /path/to/file

Wildcards are allowed in the above cases. For example,

$ sudo aa-complain /etc/apparmor.d/*

will place all profiles inside /etc/apparmor.d into complain mode, whereas

$ sudo aa-enforce /etc/apparmor.d/*

will switch all profiles to enforce mode.

To entirely disable a profile, create a symbolic link to it in the /etc/apparmor.d/disable directory:

$ sudo ln -s /etc/apparmor.d/profile.name /etc/apparmor.d/disable/

For more information on AppArmor, please refer to the official AppArmor wiki and to the documentation provided by Ubuntu.

Summary

In this article we have gone through the basics of SELinux and AppArmor, two well-known MACs. When to use one or the other? To avoid difficulties, you may want to consider sticking with the one that comes with your chosen distribution. In any event, they will help you place restrictions on processes and access to system resources to increase the security in your servers.

Do you have any questions, comments, or suggestions about this article? Feel free to let us know using the form below.


How to Install and Use Chrony in Linux

Chrony is a flexible implementation of the Network Time Protocol (NTP). It is used to synchronize the system clock from different NTP servers, reference clocks or via manual input.

It can also be used as an NTPv4 server to provide time service to other servers in the same network. It is meant to operate flawlessly under a variety of conditions such as intermittent network connectivity, heavily loaded networks, and changing temperatures, which may affect the clock of ordinary computers.

Chrony comes with two programs:

  • chronyc – command line interface for chrony
  • chronyd – daemon that can be started at boot time

In this tutorial we are going to show you how to install and use Chrony on your Linux system.

Install Chrony in Linux

On some systems, chrony may be installed by default. If the package is missing, you can easily install it using the default package manager of your respective Linux distribution with the following command.

# yum -y install chrony    [On CentOS/RHEL]
# apt install chrony       [On Debian/Ubuntu]
# dnf -y install chrony    [On Fedora 22+]

To check the status of chronyd use the following command.

# systemctl status chronyd      [On SystemD]
# /etc/init.d/chronyd status    [On Init]

If you want to enable the chrony daemon upon boot, you can use the following command.

# systemctl enable chronyd      [On SystemD]
# chkconfig --add chronyd       [On Init]

Check Chrony Synchronization in Linux

To check if chrony is actually synchronized, we will use its command line program chronyc, whose tracking option will provide relevant information.

# chronyc tracking

Check Chrony Synchronization in Linux

The listed fields provide the following information:

  • Reference ID – the reference ID and name to which the computer is currently synced.
  • Stratum – number of hops to a computer with an attached reference clock.
  • Ref time – this is the UTC time at which the last measurement from the reference source was made.
  • System time – delay of system clock from synchronized server.
  • Last offset – estimated offset of the last clock update.
  • RMS offset – long term average of the offset value.
  • Frequency – this is the rate by which the system’s clock would be wrong if chronyd is not correcting it. It is provided in ppm (parts per million).
  • Residual freq – the residual frequency indicates the difference between the measurements from the reference source and the frequency currently being used.
  • Skew – estimated error bound of the frequency.
  • Root delay – total of the network path delays to the stratum computer, from which the computer is being synced.
  • Leap status – this is the leap status which can have one of the following values – normal, insert second, delete second or not synchronized.

To check information about chrony’s sources, you can issue the following command.

# chronyc sources

Check Chrony Sources

Configure Chrony in Linux

The configuration file of chrony is located at /etc/chrony.conf or /etc/chrony/chrony.conf, and a sample configuration file may look something like this:

server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst
server 3.rhel.pool.ntp.org iburst

stratumweight 0
driftfile /var/lib/chrony/drift
makestep 10 3
logdir /var/log/chrony

The above configuration provides the following information:

  • server – this directive is used to specify an NTP server to sync from.
  • stratumweight – how much distance should be added per stratum to the sync source. The default value is 0.0001.
  • driftfile – location and name of the file containing drift data.
  • makestep – normally chrony corrects any time offset gradually by speeding up or slowing down the clock; this directive allows it to step the system clock instead when the adjustment is larger than the given threshold (here, 10 seconds), but only during the first 3 clock updates.
  • logdir – path to chrony’s log file.
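
After editing the configuration, restart the daemon and verify the measurement statistics of each time source with chronyc:

# systemctl restart chronyd
# chronyc sourcestats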

If you want to step the system clock immediately, ignoring any adjustments currently in progress, you can use the following command:

# chronyc makestep

If you decide to stop chrony, you can use the following commands.

# systemctl stop chronyd         [On SystemD]
# /etc/init.d/chronyd stop       [On Init]

Conclusion

This was a short presentation of the chrony utility and how it can be used on your Linux system. If you wish to check more details about chrony, do review the chrony documentation.


Pssh – Execute Commands on Multiple Remote Linux Servers Using Single Terminal

No doubt, OpenSSH is one of the most widely used and powerful tools available for Linux; it allows you to connect securely to remote Linux systems via a shell and to transfer files securely to and from remote systems.

Pssh – Run Commands on Multiple Linux Servers

But the biggest disadvantage of OpenSSH is that you cannot execute the same command on multiple hosts at one go; OpenSSH was not developed to perform such tasks. This is where Parallel SSH, or PSSH, comes in handy: it is a Python-based application which allows you to execute commands on multiple hosts in parallel at the same time.

Don’t Miss: Execute Commands on Multiple Linux Servers Using DSH Tool

PSSH tool includes parallel versions of OpenSSH and related tools such as:

  1. pssh – is a program for running ssh in parallel on multiple remote hosts.
  2. pscp – is a program for copying files in parallel to a number of hosts.
    1. Pscp – Copy/Transfer Files to Two or More Remote Linux Servers
  3. prsync – is a program for efficiently copying files to multiple hosts in parallel.
  4. pnuke – kills processes on multiple remote hosts in parallel.
  5. pslurp – copies files from multiple remote hosts to a central host in parallel.

These tools are good for System Administrators who find themselves working with large collections of nodes on a network.

Install PSSH or Parallel SSH on Linux

In this guide, we shall look at the steps to install the latest version of the PSSH (i.e. version 2.3.1) program on Fedora-based distributions such as CentOS/RedHat and on Debian derivatives such as Ubuntu/Mint, using the pip command.

The pip command is a small program (a replacement for the easy_install script) for installing and managing Python software packages from the Python Package Index.

On Fedora based Distributions

On CentOS/RHEL distributions, you first need to install the pip package (i.e. python-pip) on your system in order to install the PSSH program.

# yum install python-pip

On Fedora 21+, you need to run the dnf command instead of yum (dnf replaced yum).

# dnf install python-pip

Once you’ve installed the pip tool, you can install the pssh package with the help of the pip command as shown.

# pip install pssh  
Sample Output
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
You are using pip version 7.1.0, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting pssh
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Downloading pssh-2.3.1.tar.gz
Installing collected packages: pssh
  Running setup.py install for pssh
Successfully installed pssh-2.3.1

On Debian Derivatives

On Debian based distributions it takes a minute to install pssh using the pip command.

$ sudo apt-get install python-pip
$ sudo pip install pssh
Sample Output
Downloading/unpacking pssh
  Downloading pssh-2.3.1.tar.gz
  Running setup.py (path:/tmp/pip_build_root/pssh/setup.py) egg_info for package pssh
    
Installing collected packages: pssh
  Running setup.py install for pssh
    changing mode of build/scripts-2.7/pssh from 644 to 755
    changing mode of build/scripts-2.7/pnuke from 644 to 755
    changing mode of build/scripts-2.7/prsync from 644 to 755
    changing mode of build/scripts-2.7/pslurp from 644 to 755
    changing mode of build/scripts-2.7/pscp from 644 to 755
    changing mode of build/scripts-2.7/pssh-askpass from 644 to 755
    
    changing mode of /usr/local/bin/pscp to 755
    changing mode of /usr/local/bin/pssh-askpass to 755
    changing mode of /usr/local/bin/pssh to 755
    changing mode of /usr/local/bin/prsync to 755
    changing mode of /usr/local/bin/pnuke to 755
    changing mode of /usr/local/bin/pslurp to 755
Successfully installed pssh
Cleaning up...

As you can see from the output above, the latest version of pssh has now been installed on the system.

How do I Use pssh?

When using pssh you need to create a hosts file listing the IP address and SSH port number of each remote system you need to connect to.

The lines in the host file are in the following form and can also include blank lines and comments.

pssh hosts file
192.168.0.10:22
192.168.0.11:22
Executing a single command on multiple servers using pssh

You can execute any single command on different or multiple Linux hosts on a network by running a pssh command. There are many options to use with pssh as described below:

We shall look at a few ways of executing commands on a number of hosts using pssh with different options.

  1. To read the hosts file, include the -h host_file_name or --hosts host_file_name option.
  2. To include a default username on all hosts that do not define a specific user, use the -l username or --user username option.
  3. You can also display standard output and standard error as each host completes, by using the -i or --inline option.
  4. You may wish to make connections time out after a given number of seconds by including the -t number_of_seconds option.
  5. To save standard output to a given directory, you can use the -o /directory/path option.
  6. To ask for a password and send it to ssh, use the -A option.

Let’s see few examples and usage of pssh commands:

1. To execute echo “Hello TecMint” on the terminals of multiple Linux hosts as the root user, prompting for the root user’s password, run this command below.

Important: Remember all the hosts must be included in the host file.

# pssh -h pssh-hosts -l root -A echo "Hello TecMint"

Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 15:54:55 [SUCCESS] 192.168.0.10:22
[2] 15:54:56 [SUCCESS] 192.168.0.11:22

Note: In the above command, “pssh-hosts” is the file listing the IP addresses and SSH port numbers of the remote Linux servers on which you wish to execute commands.

2. To find out the disk space usage on multiple Linux servers on your network, you can run a single command as follows.

# pssh -h pssh-hosts -l root -A -i "df -hT"

Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 16:04:18 [SUCCESS] 192.168.0.10:22
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sda3      ext4    38G  4.3G   32G  12% /
tmpfs          tmpfs  499M     0  499M   0% /dev/shm
/dev/sda1      ext4   190M   25M  156M  14% /boot

[2] 16:04:18 [SUCCESS] 192.168.0.11:22
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        30G  9.8G   20G  34% /
devtmpfs                devtmpfs  488M     0  488M   0% /dev
tmpfs                   tmpfs     497M  148K  497M   1% /dev/shm
tmpfs                   tmpfs     497M  7.0M  490M   2% /run
tmpfs                   tmpfs     497M     0  497M   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  166M  332M  34% /boot

3. If you wish to know the uptime of multiple Linux servers in one go, you can run the following command.

# pssh -h pssh-hosts -l root -A -i "uptime"
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 16:09:03 [SUCCESS] 192.168.0.10:22
 16:09:01 up  1:00,  2 users,  load average: 0.07, 0.02, 0.00

[2] 16:09:03 [SUCCESS] 192.168.0.11:22
 06:39:03 up  1:00,  2 users,  load average: 0.00, 0.06, 0.09
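
You can also combine several of the options described above. For instance, the following sketch (reusing the same pssh-hosts file; the output directory path is just an illustrative choice) times connections out after 30 seconds and saves each host's standard output into a separate file under /tmp/pssh-output:

# pssh -h pssh-hosts -l root -A -t 30 -o /tmp/pssh-output "uptime"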

You can view the manual page for the pssh command to learn about many other options and further ways of using pssh.

# pssh --help

pssh commands and usages

Summary

Parallel SSH or PSSH is a good tool for executing commands in an environment where a System Administrator has to work with many servers on a network. It makes it easy to execute commands remotely on different hosts on a network.

Hope you find this guide useful. In case you have any additional information about pssh, or run into errors while installing or using it, feel free to post a comment.


How to Optimize and Compress JPEG or PNG Images in Linux Commandline

Do you have a lot of images, and want to optimize and compress them without losing their original quality before uploading them to cloud or local storage? There are plenty of GUI applications available which will help you optimize images. However, here are two simple command line utilities to do just that:

  1. jpegoptim – a utility to optimize/compress JPEG files without losing quality.
  2. OptiPNG – a small program that optimizes PNG images to a smaller size without losing any information.

Compress and Optimize JPEG and PNG Images in Linux

Using these two tools, you can either optimize a single or multiple images at a time.

Compress or Optimize JPEG Images from Command Line

jpegoptim is a command line tool that can be used to optimize and compress JPEG, JPG and JFIF files without losing their actual quality. This tool supports lossless optimization, which is based on optimizing the Huffman tables.

Install jpegoptim in Linux

To install jpegoptim on your Linux systems, run the following command from your terminal.

On Debian and its Derivatives
# apt-get install jpegoptim
or
$ sudo apt-get install jpegoptim
On RedHat based Systems

On RPM based systems like RHEL, CentOS, Fedora, etc., you need to install and enable the EPEL repository, which you can do directly from the command line as shown:

# yum install epel-release
# dnf install epel-release    [On Fedora 22+ versions]

Next install jpegoptim program from the repository as shown:

# yum install jpegoptim
# dnf install jpegoptim    [On Fedora 22+ versions]

How to Use Jpegoptim Image Optimizer

The syntax of jpegoptim is:

$ jpegoptim filename.jpeg
$ jpegoptim [options] filename.jpeg

Let’s now compress the following tecmint.jpeg image, but before optimizing the image, first find out its actual size using the du command as shown.

$ du -sh tecmint.jpeg 

6.2M	tecmint.jpeg

Here the actual file size is 6.2MB; now compress this file by running:

$ jpegoptim tecmint.jpeg 

Optimize JPEG Image in Linux

Open the compressed image in any image viewer application and you will not find any major differences. The source and compressed images will have the same quality.

The above command optimizes the image as far as lossless optimization allows. However, you can also compress the given image to a specific size, but doing so disables the lossless optimization.

For example, let us compress the above image from 5.6MB to around 250k.

$ jpegoptim --size=250k tecmint.jpeg

Optimize Image Fix Size

Batch JPEG Image Compression and Optimization

You might ask how to compress all the images in an entire directory; that’s not difficult either. Go to the directory where you have the images.

tecmint@tecmint ~ $ cd img/
tecmint@tecmint ~/img $ ls -l
total 65184
-rwxr----- 1 tecmint tecmint 6680532 Jan 19 12:21 DSC_0310.JPG
-rwxr----- 1 tecmint tecmint 6846248 Jan 19 12:21 DSC_0311.JPG
-rwxr----- 1 tecmint tecmint 7174430 Jan 19 12:21 DSC_0312.JPG
-rwxr----- 1 tecmint tecmint 6514309 Jan 19 12:21 DSC_0313.JPG
-rwxr----- 1 tecmint tecmint 6755589 Jan 19 12:21 DSC_0314.JPG
-rwxr----- 1 tecmint tecmint 6789763 Jan 19 12:21 DSC_0315.JPG
-rwxr----- 1 tecmint tecmint 6958387 Jan 19 12:21 DSC_0316.JPG
-rwxr----- 1 tecmint tecmint 6463855 Jan 19 12:21 DSC_0317.JPG
-rwxr----- 1 tecmint tecmint 6614855 Jan 19 12:21 DSC_0318.JPG
-rwxr----- 1 tecmint tecmint 5931738 Jan 19 12:21 DSC_0319.JPG

And then run the following command to compress all images at once.

tecmint@tecmint ~/img $ jpegoptim *.JPG
DSC_0310.JPG 6000x4000 24bit N Exif  [OK] 6680532 --> 5987094 bytes (10.38%), optimized.
DSC_0311.JPG 6000x4000 24bit N Exif  [OK] 6846248 --> 6167842 bytes (9.91%), optimized.
DSC_0312.JPG 6000x4000 24bit N Exif  [OK] 7174430 --> 6536500 bytes (8.89%), optimized.
DSC_0313.JPG 6000x4000 24bit N Exif  [OK] 6514309 --> 5909840 bytes (9.28%), optimized.
DSC_0314.JPG 6000x4000 24bit N Exif  [OK] 6755589 --> 6144165 bytes (9.05%), optimized.
DSC_0315.JPG 6000x4000 24bit N Exif  [OK] 6789763 --> 6090645 bytes (10.30%), optimized.
DSC_0316.JPG 6000x4000 24bit N Exif  [OK] 6958387 --> 6354320 bytes (8.68%), optimized.
DSC_0317.JPG 6000x4000 24bit N Exif  [OK] 6463855 --> 5909298 bytes (8.58%), optimized.
DSC_0318.JPG 6000x4000 24bit N Exif  [OK] 6614855 --> 6016006 bytes (9.05%), optimized.
DSC_0319.JPG 6000x4000 24bit N Exif  [OK] 5931738 --> 5337023 bytes (10.03%), optimized.

You can also compress multiple selected images at once:

$ jpegoptim DSC_0310.JPG DSC_0311.JPG DSC_0312.JPG 
DSC_0310.JPG 6000x4000 24bit N Exif  [OK] 6680532 --> 5987094 bytes (10.38%), optimized.
DSC_0311.JPG 6000x4000 24bit N Exif  [OK] 6846248 --> 6167842 bytes (9.91%), optimized.
DSC_0312.JPG 6000x4000 24bit N Exif  [OK] 7174430 --> 6536500 bytes (8.89%), optimized.

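Note that jpegoptim overwrites the original files in place. If you would rather keep the originals untouched, the tool also accepts a destination directory via the -d (or --dest) option; a minimal sketch, assuming a directory named optimized already exists:

$ mkdir -p optimized
$ jpegoptim -d optimized *.JPG

The optimized copies are written to optimized/ while the source files stay as they are.
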
For more details about the jpegoptim tool, check out the man pages.

$ man jpegoptim 

Compress or Optimize PNG Images from Command Line

OptiPNG is a command line tool used to optimize and compress PNG (Portable Network Graphics) files without losing their original quality.

The installation and usage of OptiPNG is very similar to jpegoptim.

Install OptiPNG in Linux

To install OptiPNG on your Linux systems, run the following command from your terminal.

On Debian and its Derivatives
# apt-get install optipng
or
$ sudo apt-get install optipng
On RedHat based Systems
# yum install optipng
# dnf install optipng    [On Fedora 22+ versions]

Note: You must have the EPEL repository enabled on your RHEL/CentOS based systems to install the optipng program.

How to Use OptiPNG Image Optimizer

The general syntax of optipng is:

$ optipng filename.png
$ optipng [options] filename.png

Let us compress the tecmint.png image, but before optimizing, first check the actual size of the image as shown:

tecmint@tecmint ~/img $ ls -lh tecmint.png 
-rw------- 1 tecmint tecmint 350K Jan 19 12:54 tecmint.png

Here the actual file size of the above image is 350K; now compress this file by running:

tecmint@tecmint ~/img $ optipng tecmint.png 
OptiPNG 0.6.4: Advanced PNG optimizer.
Copyright (C) 2001-2010 Cosmin Truta.

** Processing: tecmint.png
1493x914 pixels, 4x8 bits/pixel, RGB+alpha
Reducing image to 3x8 bits/pixel, RGB
Input IDAT size = 357525 bytes
Input file size = 358098 bytes

Trying:
  zc = 9  zm = 8  zs = 0  f = 0		IDAT size = 249211
                               
Selecting parameters:
  zc = 9  zm = 8  zs = 0  f = 0		IDAT size = 249211

Output IDAT size = 249211 bytes (108314 bytes decrease)
Output file size = 249268 bytes (108830 bytes = 30.39% decrease)

As you see in the above output, the size of the tecmint.png file has been reduced by 30.39%. Now verify the file size again using:

tecmint@tecmint ~/img $ ls -lh tecmint.png 
-rw-r--r-- 1 tecmint tecmint 244K Jan 19 12:56 tecmint.png

Open the compressed image in any image viewer application and you will not find any major differences between the original and compressed files. The source and compressed images will have the same quality.

Batch PNG Image Compression and Optimization

To compress a batch of multiple PNG images at once, just go to the directory where all the images reside and run the following command to compress them.

tecmint@tecmint ~ $ cd img/
tecmint@tecmint ~/img $ optipng *.png

OptiPNG 0.6.4: Advanced PNG optimizer.
Copyright (C) 2001-2010 Cosmin Truta.

** Processing: Debian-8.png
720x345 pixels, 3x8 bits/pixel, RGB
Input IDAT size = 95151 bytes
Input file size = 95429 bytes

Trying:
  zc = 9  zm = 8  zs = 0  f = 0		IDAT size = 81388
                               
Selecting parameters:
  zc = 9  zm = 8  zs = 0  f = 0		IDAT size = 81388

Output IDAT size = 81388 bytes (13763 bytes decrease)
Output file size = 81642 bytes (13787 bytes = 14.45% decrease)

** Processing: Fedora-22.png
720x345 pixels, 4x8 bits/pixel, RGB+alpha
Reducing image to 3x8 bits/pixel, RGB
Input IDAT size = 259678 bytes
Input file size = 260053 bytes

Trying:
  zc = 9  zm = 8  zs = 0  f = 5		IDAT size = 222479
  zc = 9  zm = 8  zs = 1  f = 5		IDAT size = 220311
  zc = 1  zm = 8  zs = 2  f = 5		IDAT size = 216744
                               
Selecting parameters:
  zc = 1  zm = 8  zs = 2  f = 5		IDAT size = 216744

Output IDAT size = 216744 bytes (42934 bytes decrease)
Output file size = 217035 bytes (43018 bytes = 16.54% decrease)
....

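By default optipng works at a moderate optimization level. If you are willing to trade extra CPU time for a few more saved bytes, you can raise the level with the -o option (0 to 7, higher being slower but potentially smaller):

$ optipng -o7 tecmint.png
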
For more details about optipng, check the man pages.

$ man optipng

Conclusion

If you’re a webmaster and want to serve optimized images on your website or blog, these tools can be very handy. They not only save disk space, but also reduce the bandwidth used while uploading the images.

If you know any other better way to achieve the same thing, do let us know via comments, and don’t forget to share this article on your social networks to support us.


60 Commands of Linux : A Guide from Newbies to System Administrator

For a person new to Linux, getting fully functional with it is still not very easy, even after the emergence of user friendly Linux distributions like Ubuntu and Mint. The fact remains that there will always be some configuration on the user’s part to be done manually.

60 Linux Commands

Just to start with, the first thing a user should know is the basic commands in the terminal. The Linux GUI runs on the Shell: when the GUI is not running but the Shell is, Linux is still running; if the Shell is not running, nothing is running. Commands in Linux are the means of interaction with the Shell. For beginners, some of the basic computational tasks are to:

  1. View the contents of a directory : A directory may contain visible and invisible files with different file permissions.
  2. Viewing blocks, HDD partition, External HDD
  3. Checking the integrity of Downloaded/Transferred Packages
  4. Converting and copying a file
  5. Know your machine name, OS and Kernel
  6. Viewing history
  7. Being root
  8. Make Directory
  9. Make Files
  10. Changing the file permission
  11. Own a file
  12. Install, Update and maintain Packages
  13. Uncompressing a file
  14. See current date, time and calendar
  15. Print contents of a file
  16. Copy and Move
  17. See the working directory for easy navigation
  18. Change the working directory, etc…

And we have described all of the above basic computational tasks in our First Article.

This was the first article of this series. We tried to provide you with detailed descriptions of these commands with explicit examples, which was highly appreciated by our readers in terms of likes, comments and traffic.

What after these initial commands? Obviously we moved to the next part of this series, where we provided commands for computational tasks like:

  1. Finding a file in a given directory
  2. Searching a file with the given keywords
  3. Finding online documentation
  4. See the current running processes
  5. Kill a running process
  6. See the location of installed Binaries
  7. Starting, Ending, Restarting a service
  8. Making and removing of aliases
  9. View the disk and space usages
  10. Removing a file and/or directory
  11. Print/echo a custom output on standard output
  12. Changing the password of oneself and of others, if you are root.
  13. View Printing queue
  14. Compare two files
  15. Download a file, the Linux way (wget)
  16. Mount a block / partition / external HDD
  17. Compile and Run a code written in ‘C’, ‘C++’ and ‘Java’ Programming Language

This second article was again highly appreciated by the readers of Tecmint.com. It was nicely elaborated with suitable examples and outputs.

After providing users with a glimpse of the commands used by a middle level user, we thought to put our effort into a nice write-up of a list of commands used by a user at the System Administrator level.

In our third and last article of this series, we tried to cover the commands that would be required for computational tasks like:

  1. Configuring Network Interface
  2. Viewing custom Network Related information
  3. Getting information about Internet Server with customisable switches and Results
  4. Digging DNS
  5. Knowing Your System uptime
  6. Sending an occasional Information to all other logged-in users
  7. Send text messages directly to a user
  8. Combination of commands
  9. Renaming a file
  10. Seeing the processes of a CPU
  11. Creating newly formatted ext4 partition
  12. Text File editors like vi, emacs and nano
  13. Copying a large file/folder with progress bar
  14. Keeping track of free and available memory
  15. Backup a mysql database
  16. Make difficult to guess – random password
  17. Merge two text files
  18. List of all the opened files

Writing this article, and choosing the list of commands that needed to go with it, was a little cumbersome. We chose 20 commands for each article and hence gave a lot of thought to which command should be included and which should be excluded from a particular post. I personally selected the commands on the basis of their usability (as I use them and am used to them), from a user’s point of view and an Administrator’s point of view.

This article aims to concatenate all the articles of its series and provide you, in commands, with all the functionality covered in this very series of articles.

There are far longer lists of commands available in Linux. But we provided a list of the 60 most generally and commonly used commands, and a user having knowledge of these 60 commands as a whole can work in the terminal very smoothly.

That’s all for now from me. I will soon be coming up with another tutorial that you people will love to go through. Till then, stay tuned!

Switching From Windows to Nix or a Newbie to Linux – 20 Useful Commands for Linux Newbies

So you are planning to switch from Windows to Linux, or have just switched to Linux? Oops!!! What am I asking! For what other reason would you be here? From my past experience, when I was new to Nux, commands and the terminal really scared me. I was worried about the commands, as to what extent I would have to remember and memorise them to get myself fully functional with Linux. No doubt online documentation, books, man pages and the user community helped me a lot, but I strongly believed that there should be an article with details of commands in easy to learn and understand language. This motivated me to master Linux and to make it easy-to-use. This article of mine is a step towards it.

20 Linux Commands for Newbies

1. Command: ls

The command “ls” stands for List Directory Contents. It lists the contents of the folder from which it runs, be they files or folders.

root@tecmint:~# ls

Android-Games                     Music
Pictures                          Public
Desktop                           Tecmint.com
Documents                         TecMint-Sync
Downloads                         Templates

The command “ls -l” lists the contents of the folder in long listing fashion.

root@tecmint:~# ls -l

total 40588
drwxrwxr-x 2 ravisaive ravisaive     4096 May  8 01:06 Android Games
drwxr-xr-x 2 ravisaive ravisaive     4096 May 15 10:50 Desktop
drwxr-xr-x 2 ravisaive ravisaive     4096 May 16 16:45 Documents
drwxr-xr-x 6 ravisaive ravisaive     4096 May 16 14:34 Downloads
drwxr-xr-x 2 ravisaive ravisaive     4096 Apr 30 20:50 Music
drwxr-xr-x 2 ravisaive ravisaive     4096 May  9 17:54 Pictures
drwxrwxr-x 5 ravisaive ravisaive     4096 May  3 18:44 Tecmint.com
drwxr-xr-x 2 ravisaive ravisaive     4096 Apr 30 20:50 Templates

The command “ls -a” lists the contents of the folder, including hidden files starting with ‘.’.

root@tecmint:~# ls -a

.			.gnupg			.dbus			.goutputstream-PI5VVW		.mission-control
.adobe                  deja-dup                .grsync                 .mozilla                 	.themes
.gstreamer-0.10         .mtpaint                .thumbnails             .gtk-bookmarks          	.thunderbird
.HotShots               .mysql_history          .htaccess		.apport-ignore.xml      	.ICEauthority           
.profile                .bash_history           .icons                  .bash_logout                    .fbmessenger
.jedit                  .pulse                  .bashrc                 .liferea_1.8             	.pulse-cookie            
.Xauthority		.gconf                  .local                  .Xauthority.HGHVWW		.cache
.gftp                   .macromedia             .remmina                .cinnamon                       .gimp-2.8
.ssh                    .xsession-errors 	.compiz                 .gnome                          teamviewer_linux.deb          
.xsession-errors.old	.config                 .gnome2                 .zoncolor

Note: In Linux, a file name starting with ‘.‘ is hidden. In Linux, every file/folder/device/command is a file. The fields in the output of ls -l are:

  1. d (stands for directory).
  2. rwxr-xr-x is the file permission of the file/folder for owner, group and world.
  3. The 1st ravisaive in the above example means that file is owned by user ravisaive.
  4. The 2nd ravisaive in the above example means file belongs to user group ravisaive.
  5. 4096 means file size is 4096 Bytes.
  6. May 8 01:06 is the date and time of last modification.
  7. And at the end is the name of the File/Folder.

For more “ls” command examples read 15 ‘ls’ Command Examples in Linux.

2. Command: lsblk

The “lsblk” command stands for List Block Devices. It prints block devices by their assigned name (but not RAM) on the standard output in a tree-like fashion.

root@tecmint:~# lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 232.9G  0 disk 
├─sda1   8:1    0  46.6G  0 part /
├─sda2   8:2    0     1K  0 part 
├─sda5   8:5    0   190M  0 part /boot
├─sda6   8:6    0   3.7G  0 part [SWAP]
├─sda7   8:7    0  93.1G  0 part /data
└─sda8   8:8    0  89.2G  0 part /personal
sr0     11:0    1  1024M  0 rom

The “lsblk -l” command lists block devices in ‘list‘ structure (not tree-like fashion).

root@tecmint:~# lsblk -l

NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda    8:0    0 232.9G  0 disk 
sda1   8:1    0  46.6G  0 part /
sda2   8:2    0     1K  0 part 
sda5   8:5    0   190M  0 part /boot
sda6   8:6    0   3.7G  0 part [SWAP]
sda7   8:7    0  93.1G  0 part /data
sda8   8:8    0  89.2G  0 part /personal
sr0   11:0    1  1024M  0 rom

Note: lsblk is the most useful and easiest way to learn the name of a new USB device you just plugged in, especially when you have to deal with disks/blocks in the terminal.

3. Command: md5sum

The “md5sum” command stands for Compute and Check MD5 Message Digest. An md5 checksum (commonly called a hash) is used to match or verify the integrity of files that may have changed as a result of a faulty file transfer, a disk error or non-malicious interference.

root@tecmint:~# md5sum teamviewer_linux.deb 

47790ed345a7b7970fc1f2ac50c97002  teamviewer_linux.deb

Note: The user can match the generated md5sum with the one provided officially. md5sum is considered less secure than sha1sum, which we will discuss later.
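
md5sum can also verify a file against a previously saved checksum using the ‘-c‘ option, which automates the comparison instead of matching the two strings by eye (the .md5 file name here is just a convention):

root@tecmint:~# md5sum teamviewer_linux.deb > teamviewer_linux.deb.md5
root@tecmint:~# md5sum -c teamviewer_linux.deb.md5

teamviewer_linux.deb: OK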

4. Command: dd

The command “dd” stands for Convert and Copy a file. It can be used to convert and copy a file, and most of the time it is used to copy an iso file (or any other file) to a usb device (or any other location), and thus can be used to make a ‘Bootable‘ USB stick.

root@tecmint:~# dd if=/home/user/Downloads/debian.iso of=/dev/sdb bs=512M; sync

Note: In the above example the usb device is supposed to be sdb (write the image to the whole device, not to a partition such as sdb1). You should verify the name using the command lsblk, otherwise you will overwrite your disk and OS; use the name of the disk very cautiously!!!

The dd command takes anywhere from a few seconds to several minutes to execute, depending on the size and type of the file and the read and write speed of the USB stick.
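
Since dd prints nothing while it works, newer versions of GNU dd also accept status=progress to report how much data has been copied so far; a sketch, assuming a reasonably recent coreutils:

root@tecmint:~# dd if=/home/user/Downloads/debian.iso of=/dev/sdb bs=4M status=progress; sync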

5. Command: uname

The “uname” command stands for Unix Name. It prints detailed information about the machine name, Operating System and Kernel.

root@tecmint:~# uname -a

Linux tecmint 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:36:13 UTC 2013 i686 i686 i686 GNU/Linux

Note: uname shows the type of kernel. uname -a outputs detailed information. Elaborating the above output of uname -a:

  1. “Linux”: The machine’s kernel name.
  2. “tecmint”: The machine’s node name.
  3. “3.8.0-19-generic”: The kernel release.
  4. “#30-Ubuntu SMP”: The kernel version.
  5. “i686”: The architecture of the processor.
  6. “GNU/Linux”: The operating system name.

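Each of these fields can also be printed individually, which is handy in scripts:

root@tecmint:~# uname -s     (kernel name)
root@tecmint:~# uname -n     (node name)
root@tecmint:~# uname -r     (kernel release)
root@tecmint:~# uname -m     (processor architecture)
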
6. Command: history

The “history” command stands for History (Event) Record. It prints the long list of commands executed in the terminal.

root@tecmint:~# history

 1  sudo add-apt-repository ppa:tualatrix/ppa
 2  sudo apt-get update
 3  sudo apt-get install ubuntu-tweak
 4  sudo add-apt-repository ppa:diesch/testing
 5  sudo apt-get update
 6  sudo apt-get install indicator-privacy
 7  sudo add-apt-repository ppa:atareao/atareao
 8  sudo apt-get update
 9  sudo apt-get install my-weather-indicator
 10 pwd
 11 cd && sudo cp -r unity/6 /usr/share/unity/
 12 cd /usr/share/unity/icons/
 13 cd /usr/share/unity

Note: Press “Ctrl + R” to search through already executed commands, which lets your command be completed with the auto completion feature.

(reverse-i-search)`if': ifconfig

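You can also re-run any entry from the history list by its number using the ‘!‘ operator:

root@tecmint:~# !10

This expands to and executes ‘pwd‘, command number 10 in the listing above.
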
7. Command: sudo

The “sudo” (super user do) command allows a permitted user to execute a command as the superuser or another user, as specified by the security policy in the sudoers list.

root@tecmint:~# sudo add-apt-repository ppa:tualatrix/ppa

Note: sudo allows the user to borrow superuser privileges, while a similar command ‘su‘ allows the user to actually log in as the superuser. sudo is safer than su.
It is not advised to use sudo or su for day-to-day normal use, as it can result in serious errors if you accidentally do something wrong; that’s why a very popular saying in the Linux community is:

“To err is human, but to really foul up everything, you need root password.”

8. Command: mkdir

The “mkdir” (Make directory) command creates a new directory at the given name path. However, if the directory already exists, it will return an error message “cannot create folder, folder already exists”.

root@tecmint:~# mkdir tecmint

Note: A directory can only be created inside a folder in which the user has write permission. mkdir: cannot create directory `tecmint‘: File exists
(Don’t be confused by “file” in the above output; you might remember what I said at the beginning – in Linux every file, folder, drive, command and script is treated as a file).

9. Command: touch

The “touch” command stands for Update the access and modification times of each FILE to the current time. The touch command creates a file only if it doesn’t exist. If the file already exists, it will update the timestamp and not the contents of the file.

root@tecmint:~# touch tecmintfile

Note: touch can be used to create a file under a directory in which the user has write permission, only if the file doesn’t already exist there.

10. Command: chmod

The Linux “chmod” command stands for change file mode bits. chmod changes the file mode (permission) of each given file, folder, script, etc., according to the mode asked for.

There exist 3 types of permissions on a file (or folder or anything, but to keep things simple we will be using file).

Read (r)=4
Write(w)=2
Execute(x)=1

So if you want to give only read permission on a file, it will be assigned a value of ‘4‘; for write permission only, a value of ‘2‘; and for execute permission only, a value of ‘1‘ is to be given. For read and write permission, 4+2 = ‘6‘ is to be given, and so on.

Now permissions need to be set for 3 kinds of users and usergroups. The first is the owner, then the usergroup and finally the world.

rwxr-x--x   abc.sh

Here the owner’s permission is rwx (read, write and execute),
the usergroup to which it belongs is r-x (read and execute only, no write permission), and
for the world it is --x (only execute).

To change its permissions and provide read, write and execute permission to owner, group and world:

root@tecmint:~# chmod 777 abc.sh

Only read and write permission to all three:

root@tecmint:~# chmod 666 abc.sh

Read, write and execute to the owner, and only execute to group and world:

root@tecmint:~# chmod 711 abc.sh

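chmod also understands symbolic modes, which can be easier to read than numeric ones. The rwxr-x--x (751) permission from the example above can be set, and a single bit toggled, like this:

root@tecmint:~# chmod u=rwx,g=rx,o=x abc.sh
root@tecmint:~# chmod u+x abc.sh        (add execute for the owner only)
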
Note: This is one of the most important commands, useful for sysadmins and users both. In a multi-user environment or on a server, this command comes to the rescue; setting the wrong permission will either make a file inaccessible or provide unauthorized access to someone.

11. Command: chown

The Linux “chown” command stands for change file owner and group. Every file belongs to a group of users and an owner. Do ‘ls -l‘ in your directory and you will see something like this.

root@tecmint:~# ls -l 

drwxr-xr-x 3 server root 4096 May 10 11:14 Binary 
drwxr-xr-x 2 server server 4096 May 13 09:42 Desktop

Here the directory Binary is owned by user “server” and belongs to usergroup “root”, whereas the directory “Desktop” is owned by user “server” and belongs to usergroup “server“.

The “chown” command is thus used to change file ownership, and is useful for managing files and granting them to authorised users and usergroups only.

root@tecmint:~# chown server:server Binary

drwxr-xr-x 3 server server 4096 May 10 11:14 Binary 
drwxr-xr-x 2 server server 4096 May 13 09:42 Desktop

Note: “chown” changes the user and group ownership of each given FILE to NEW-OWNER or to the user and group of an existing reference file.
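
To change the ownership of a directory and everything inside it in one go, chown accepts the ‘-R‘ (recursive) switch:

root@tecmint:~# chown -R server:server Binary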

12. Command: apt

The Debian based “apt” command stands for Advanced Package Tool. Apt is an advanced package manager for Debian based systems (Ubuntu, Kubuntu, etc.) that automatically and intelligently searches, installs, updates and resolves dependencies of packages on a Gnu/Linux system from the command line.

root@tecmint:~# apt-get install mplayer

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  java-wrappers
Use 'apt-get autoremove' to remove it.
The following extra packages will be installed:
  esound-common libaudiofile1 libesd0 libopenal-data libopenal1 libsvga1 libvdpau1 libxvidcore4
Suggested packages:
  pulseaudio-esound-compat libroar-compat2 nvidia-vdpau-driver vdpau-driver mplayer-doc netselect fping
The following NEW packages will be installed:
  esound-common libaudiofile1 libesd0 libopenal-data libopenal1 libsvga1 libvdpau1 libxvidcore4 mplayer
0 upgraded, 9 newly installed, 0 to remove and 8 not upgraded.
Need to get 3,567 kB of archives.
After this operation, 7,772 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
root@tecmint:~# apt-get update

Hit http://ppa.launchpad.net raring Release.gpg                                           
Hit http://ppa.launchpad.net raring Release.gpg                                           
Hit http://ppa.launchpad.net raring Release.gpg                      
Hit http://ppa.launchpad.net raring Release.gpg                      
Get:1 http://security.ubuntu.com raring-security Release.gpg [933 B] 
Hit http://in.archive.ubuntu.com raring Release.gpg                                                   
Hit http://ppa.launchpad.net raring Release.gpg                      
Get:2 http://security.ubuntu.com raring-security Release [40.8 kB]   
Ign http://ppa.launchpad.net raring Release.gpg                                                  
Get:3 http://in.archive.ubuntu.com raring-updates Release.gpg [933 B]                            
Hit http://ppa.launchpad.net raring Release.gpg                                                                
Hit http://in.archive.ubuntu.com raring-backports Release.gpg

Note: The above commands result in system-wide changes and hence require the root password (check for ‘#‘ and not ‘$’ as the prompt). Apt is considered more advanced and intelligent as compared to the yum command.

As the names suggest, apt-cache searches the package cache, e.g., for a package containing the sub-package mplayer, while apt-get installs packages and updates the packages that are already installed to the newest versions.

Read more about apt-get and apt-cache commands at 25 APT-GET and APT-CACHE Commands

13. Command: tar

The “tar” command is a Tape Archive utility, useful for creating archives in a number of file formats and for extracting them.

root@tecmint:~# tar -zxvf abc.tar.gz (Remember 'z' for .tar.gz)
root@tecmint:~# tar -jxvf abc.tar.bz2 (Remember 'j' for .tar.bz2)
root@tecmint:~# tar -zcvf archive.tar.gz /path/to/folder/abc (use 'j' and a .tar.bz2 name to create a bzip2 archive)

Note: ‘tar.gz‘ means gzipped. ‘tar.bz2‘ is compressed with bzip, which uses a better but slower compression method.
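
Before extracting an unfamiliar archive, it is often worth listing its contents first with the 't' flag (shown here with 'z' for a gzipped archive, as above):

root@tecmint:~# tar -ztvf abc.tar.gz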

Read more about “tar command” examples at 18 Tar Command Examples

14. Command: cal

The “cal” (Calendar) command is used to display the calendar of the present month, or of any other month of any year, whether upcoming or past.

root@tecmint:~# cal 

May 2013        
Su Mo Tu We Th Fr Sa  
          1  2  3  4  
 5  6  7  8  9 10 11  
12 13 14 15 16 17 18  
19 20 21 22 23 24 25  
26 27 28 29 30 31

Show the calendar of February of the year 1835, which has already passed.

root@tecmint:~# cal 02 1835

   February 1835      
Su Mo Tu We Th Fr Sa  
 1  2  3  4  5  6  7  
 8  9 10 11 12 13 14  
15 16 17 18 19 20 21  
22 23 24 25 26 27 28

Show the calendar of July of the year 2145, which is yet to come.

root@tecmint:~# cal 07 2145

     July 2145        
Su Mo Tu We Th Fr Sa  
             1  2  3  
 4  5  6  7  8  9 10  
11 12 13 14 15 16 17  
18 19 20 21 22 23 24  
25 26 27 28 29 30 31

Note: You need not turn the calendar back 50 years, nor do you need to make complex mathematical calculations to know which day you were born on, or which day your coming birthday will fall on.

15. Command: date

The “date” command prints the current date and time on the standard output, and can further be used to set the system date and time.

root@tecmint:~# date

Fri May 17 14:13:29 IST 2013
root@tecmint:~# date --set='14 may 2013 13:57' 

Mon May 13 13:57:00 IST 2013

Note: This command will be very useful in scripting, especially time and date based scripting. Moreover, changing the date and time using the terminal will make you feel like a GEEK!!! (Obviously you need to be root to perform this operation, as it is a system wide change.)

16. Command: cat

The “cat” command stands for Concatenation. It concatenates (joins) two or more plain files and/or prints the contents of a file on the standard output.

root@tecmint:~# cat a.txt b.txt c.txt d.txt >> abcd.txt
root@tecmint:~# cat abcd.txt
....
contents of file abcd 
...

Note: “>>” is the append symbol and “>” is the redirection symbol. They are used to send output to a file and not to the standard output. The “>” symbol will delete an already existing file and create a new one, hence for safety reasons it is advised to use “>>”, which will append the output without overwriting or deleting the file.

Before proceeding further, I must let you know about wildcards (you would be aware of wildcard entries from most television shows). Wildcards are a shell feature that makes the command line much more powerful than any GUI file manager. You see, if you want to select a big group of files in a graphical file manager, you usually have to select them with your mouse. This may seem simple, but in some cases it can be very frustrating.

For example, suppose you have a directory with a huge number of all kinds of files and subdirectories, and you decide to move all the HTML files that have the word “Linux” somewhere in the middle of their names from that big directory into another directory. What’s a simple way to do this? If the directory contains a huge number of differently named HTML files, your task is anything but simple!

In the Linux CLI that task is just as simple to perform as moving only one HTML file, and it’s so easy because of the shell wildcards. These are special characters that allow you to select file names that match certain patterns of characters. This helps you to select even a big group of files with typing just a few characters, and in most cases it’s easier than selecting the files with a mouse.

Here’s a list of the most commonly used wildcards:

Wildcard			Matches
   *			zero or more characters
   ?			exactly one character
[abcde]			exactly one character listed
 [a-e]			exactly one character in the given range
[!abcde]		any character that is not listed
 [!a-e]			any character that is not in the given range
{debian,linux}		exactly one entire word in the options given

! is called the not symbol; a pattern prefixed with ‘!’ matches the reverse of what is listed.
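
As a concrete illustration, the HTML-moving task described earlier shrinks to a single command with the ‘*’ wildcard (the destination path here is hypothetical):

root@tecmint:~# mv *Linux*.html /path/to/another/directory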

Read more examples of Linux “cat command” at 13 Cat Command Examples in Linux

17. Command: cp

The command “cp” stands for Copy; it copies a file from one location to another.

root@tecmint:~# cp /home/user/Downloads/abc.tar.gz /home/user/Desktop (Returns 0 on success)

Note: cp is one of the most commonly used commands in shell scripting, and it can be used with wildcard characters (described in the above block) for customised and desired file copying.

18. Command: mv

The “mv” command moves a file from one location to another location.

root@tecmint:~# mv /home/user/Downloads/abc.tar.gz /home/user/Desktop (Returns 0 on success)

Note: The mv command can be used with wildcard characters. mv should be used with caution, as moving system/unauthorised files may lead to security issues as well as breakdown of the system.

19. Command: pwd

The command “pwd” (print working directory) prints the current working directory with its full path name in the terminal.

root@tecmint:~# pwd 

/home/user/Desktop

Note: This command won’t be used much in scripting, but it is an absolute life saver for a newbie who gets lost in the terminal in their early days with nux. (Linux is most commonly referred to as nux or nix.)

20. Command: cd

Finally, the frequently used “cd” command stands for change directory. It changes the working directory, so that you can execute, copy, move, write, read, etc. from the terminal itself.

root@tecmint:~# cd /home/user/Desktop
server@localhost:~$ pwd

/home/user/Desktop

Note: cd comes to the rescue when switching between directories from the terminal. “cd ~” will change the working directory to the user’s home directory, and is very useful if a user finds himself lost in the terminal. “cd ..” will change the working directory to the parent directory (of the current working directory).

These commands will surely make you comfortable with Linux. But it’s not the end. Very soon I will be coming out with other commands which will be useful for the ‘Middle Level User‘, i.e., you! No, don’t exclaim: if you get used to these commands, you will notice a promotion in user-level from newbie to middle-level user. In the next article, I will be coming up with commands like ‘kill‘, ‘ps‘, ‘grep‘, etc. Wait for the article; I don’t want to spoil your interest.

20 Advanced Commands for Middle Level Linux Users

You might have found the first article very useful; this article is an extension of the 20 Useful Commands for Linux Newbies. The first article was intended for newbies and this article is for Middle-Level and Advanced Users. Here you will find how to customise your searches, get to know the running processes and how to kill them, how to make your Linux terminal productive (an important aspect), and how to compile C, C++ and Java programs in nix.

20 Linux Advanced & Expert Commands

21. Command: Find

Search for files in the given directory, hierarchically starting at the parent directory and moving to sub-directories.

root@tecmint:~# find -name "*.sh" 

./Desktop/load.sh 
./Desktop/test.sh 
./Desktop/shutdown.sh 
./Binary/firefox/run-mozilla.sh 
./Downloads/kdewebdev-3.5.8/quanta/scripts/externalpreview.sh 
./Downloads/kdewebdev-3.5.8/admin/doxygen.sh 
./Downloads/kdewebdev-3.5.8/admin/cvs.sh 
./Downloads/kdewebdev-3.5.8/admin/ltmain.sh 
./Downloads/wheezy-nv-install.sh

Note: The ‘-name‘ option makes the search case sensitive. You can use the ‘-iname‘ option to find something regardless of case. (* is a wildcard that matches all files having the extension ‘.sh‘; quote the pattern so the shell does not expand it before find sees it. You can use a filename or part of a filename to customise the output.)

root@tecmint:~# find -iname "*.SH" (same as find -iname "*.Sh" or find -iname "*.sH")

./Desktop/load.sh 
./Desktop/test.sh 
./Desktop/shutdown.sh 
./Binary/firefox/run-mozilla.sh 
./Downloads/kdewebdev-3.5.8/quanta/scripts/externalpreview.sh 
./Downloads/kdewebdev-3.5.8/admin/doxygen.sh 
./Downloads/kdewebdev-3.5.8/admin/cvs.sh 
./Downloads/kdewebdev-3.5.8/admin/ltmain.sh 
./Downloads/wheezy-nv-install.sh
root@tecmint:~# find / -name "*.tar.gz" 

/var/www/modules/update/tests/aaa_update_test.tar.gz 
./var/cache/flashplugin-nonfree/install_flash_player_11_linux.i386.tar.gz 
./home/server/Downloads/drupal-7.22.tar.gz 
./home/server/Downloads/smtp-7.x-1.0.tar.gz 
./home/server/Downloads/noreqnewpass-7.x-1.2.tar.gz 
./usr/share/gettext/archive.git.tar.gz 
./usr/share/doc/apg/php.tar.gz 
./usr/share/doc/festival/examples/speech_pm_1.0.tar.gz 
./usr/share/doc/argyll/examples/spyder2.tar.gz 
./usr/share/usb_modeswitch/configPack.tar.gz

Note: The above command searches for all files having the extension ‘tar.gz‘ in the root directory and all sub-directories, including mounted devices.

Read more examples of Linux ‘find‘ command at 35 Find Command Examples in Linux

22. Command: grep

The ‘grep‘ command searches the given file for lines containing a match to the given strings or words. Search ‘/etc/passwd‘ for the user ‘tecmint‘.

root@tecmint:~# grep tecmint /etc/passwd 

tecmint:x:1000:1000:Tecmint,,,:/home/tecmint:/bin/bash

Ignore word case and all other case combinations with the ‘-i‘ option.

root@tecmint:~# grep -i TECMINT /etc/passwd 

tecmint:x:1000:1000:Tecmint,,,:/home/tecmint:/bin/bash

Search recursively (-r), i.e., read all files under each directory, for the string “127.0.0.1“.

root@tecmint:~# grep -r "127.0.0.1" /etc/ 

/etc/vlc/lua/http/.hosts:127.0.0.1
/etc/speech-dispatcher/modules/ivona.conf:#IvonaServerHost "127.0.0.1"
/etc/mysql/my.cnf:bind-address		= 127.0.0.1
/etc/apache2/mods-available/status.conf:    Allow from 127.0.0.1 ::1
/etc/apache2/mods-available/ldap.conf:    Allow from 127.0.0.1 ::1
/etc/apache2/mods-available/info.conf:    Allow from 127.0.0.1 ::1
/etc/apache2/mods-available/proxy_balancer.conf:#    Allow from 127.0.0.1 ::1
/etc/security/access.conf:#+ : root : 127.0.0.1
/etc/dhcp/dhclient.conf:#prepend domain-name-servers 127.0.0.1;
/etc/dhcp/dhclient.conf:#  option domain-name-servers 127.0.0.1;
/etc/init/network-interface.conf:	ifconfig lo 127.0.0.1 up || true
/etc/java-6-openjdk/net.properties:# localhost & 127.0.0.1).
/etc/java-6-openjdk/net.properties:# http.nonProxyHosts=localhost|127.0.0.1
/etc/java-6-openjdk/net.properties:# localhost & 127.0.0.1).
/etc/java-6-openjdk/net.properties:# ftp.nonProxyHosts=localhost|127.0.0.1
/etc/hosts:127.0.0.1	localhost

Note: You can use these following options along with grep.

  1. -w for word (egrep -w 'word1|word2' /path/to/file).
  2. -c for count (i.e., the total number of times the pattern matched) (grep -c 'word' /path/to/file).
  3. --color for coloured output (grep --color server /etc/passwd).

23. Command: man

The ‘man‘ command is the system’s manual pager. man provides online documentation for all the possible options of a command and its usages. Almost all commands come with corresponding manual pages. For example:

root@tecmint:~# man man

MAN(1)                                                               Manual pager utils                                                              MAN(1)

NAME
       man - an interface to the on-line reference manuals

SYNOPSIS
       man  [-C  file]  [-d]  [-D]  [--warnings[=warnings]]  [-R  encoding]  [-L  locale]  [-m  system[,...]]  [-M  path]  [-S list] [-e extension] [-i|-I]
       [--regex|--wildcard] [--names-only] [-a] [-u] [--no-subpages] [-P pager] [-r prompt] [-7] [-E encoding] [--no-hyphenation] [--no-justification]  [-p
       string] [-t] [-T[device]] [-H[browser]] [-X[dpi]] [-Z] [[section] page ...] ...
       man -k [apropos options] regexp ...
       man -K [-w|-W] [-S list] [-i|-I] [--regex] [section] term ...
       man -f [whatis options] page ...
       man -l [-C file] [-d] [-D] [--warnings[=warnings]] [-R encoding] [-L locale] [-P pager] [-r prompt] [-7] [-E encoding] [-p string] [-t] [-T[device]]
       [-H[browser]] [-X[dpi]] [-Z] file ...
       man -w|-W [-C file] [-d] [-D] page ...
       man -c [-C file] [-d] [-D] page ...
       man [-hV]

This is the manual page for the man command itself; similarly, ‘man cat‘ (manual page for the cat command) and ‘man ls‘ (manual page for the ls command).

Note: man page is intended for command reference and learning.

24. Command: ps

ps (Process) gives the status of running processes with a unique Id called PID.

root@tecmint:~# ps

 PID TTY          TIME CMD
 4170 pts/1    00:00:00 bash
 9628 pts/1    00:00:00 ps

To list the status of all processes along with their PIDs, use the option ‘-A‘.

root@tecmint:~# ps -A

 PID TTY          TIME CMD
    1 ?        00:00:01 init
    2 ?        00:00:00 kthreadd
    3 ?        00:00:01 ksoftirqd/0
    5 ?        00:00:00 kworker/0:0H
    7 ?        00:00:00 kworker/u:0H
    8 ?        00:00:00 migration/0
    9 ?        00:00:00 rcu_bh
....

Note: This command is very useful when you want to know which processes are running, or sometimes need a PID for a process to be killed. You can use it with the ‘grep‘ command to find customised output. For example:

root@tecmint:~# ps -A | grep -i ssh

 1500 ?        00:09:58 sshd
 4317 ?        00:00:00 sshd

Here ‘ps‘ is piped to the ‘grep‘ command to find customised and relevant output for our need.

25. Command: kill

OK, you might have understood what this command is for from its name. This command is used to kill a process which is no longer relevant or is not responding. It is a very, very useful command. You might be familiar with frequent Windows restarts due to the fact that, most of the time, a running process can’t be killed, and if killed, Windows needs to restart for the change to take effect; in the world of Linux, there is no such thing. Here you can kill a process and start it again without restarting the whole system.

You need a process’s pid (from ps) to kill it.

Let’s suppose you want to kill the program ‘apache2‘, which might not be responding. Run ‘ps -A‘ along with the grep command.

root@tecmint:~# ps -A | grep -i apache2

1285 ?        00:00:00 apache2

Find the process ‘apache2‘, note its pid, and kill it. For example, in my case the ‘apache2‘ pid is ‘1285‘.

root@tecmint:~# kill 1285 (to kill the process apache2)

Note: Every time you re-run a process or start a system, a new pid is generated for each process; you can find out about the currently running processes and their pids using the command ‘ps‘.

Another way to kill the same process is.

root@tecmint:~# pkill apache2

Note: kill requires a job id / process id for sending signals, whereas with pkill you have the option of using a pattern, specifying the process owner, etc.
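
By default kill sends the polite SIGTERM signal. A process that ignores it can be sent the non-ignorable SIGKILL signal instead, using signal number 9; ‘kill -l‘ lists all available signals.

root@tecmint:~# kill -9 1285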

26. Command: whereis

The ‘whereis‘ command is used to locate the Binary, Sources and Manual Pages of a command. For example, to locate the Binary, Sources and Manual Pages of the commands ‘ls‘ and ‘kill‘:

root@tecmint:~# whereis ls 

ls: /bin/ls /usr/share/man/man1/ls.1.gz
root@tecmint:~# whereis kill

kill: /bin/kill /usr/share/man/man2/kill.2.gz /usr/share/man/man1/kill.1.gz

Note: This is useful for finding out where the binaries are installed, for manual editing sometimes.

27. Command: service

The ‘service‘ command controls the Starting, Stopping or Restarting of a ‘service‘. This command makes it possible to start, restart or stop a service without restarting the system, for the changes to take effect.

Starting an apache2 server on Ubuntu

root@tecmint:~# service apache2 start

 * Starting web server apache2                                                                                                                                 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
httpd (pid 1285) already running						[ OK ]

Restarting an apache2 server on Ubuntu

root@tecmint:~# service apache2 restart

* Restarting web server apache2                                                                                                                               apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
 ... waiting .apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName  [ OK ]

Stopping an apache2 server on Ubuntu

root@tecmint:~# service apache2 stop

 * Stopping web server apache2                                                                                                                                 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
 ... waiting                                                           		[ OK ]

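Most init scripts also accept a 'status' argument, so you can check whether a service is running without changing its state:

root@tecmint:~# service apache2 status
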
Note: All the service scripts lie in ‘/etc/init.d‘, and on certain systems the path might need to be included, i.e., instead of running “service apache2 start” you would be asked to run “/etc/init.d/apache2 start”.

28. Command: alias

alias is a built-in shell command that lets you assign a name to a long command or a frequently used command.

I use the ‘ls -l‘ command frequently, which involves 5 characters including a space. Hence I created an alias ‘l‘ for it.

root@tecmint:~# alias l='ls -l'

Check if it works or not.

root@tecmint:~# l

total 36 
drwxr-xr-x 3 tecmint tecmint 4096 May 10 11:14 Binary 
drwxr-xr-x 3 tecmint tecmint 4096 May 21 11:21 Desktop 
drwxr-xr-x 2 tecmint tecmint 4096 May 21 15:23 Documents 
drwxr-xr-x 8 tecmint tecmint 4096 May 20 14:56 Downloads 
drwxr-xr-x 2 tecmint tecmint 4096 May  7 16:58 Music 
drwxr-xr-x 2 tecmint tecmint 4096 May 20 16:17 Pictures 
drwxr-xr-x 2 tecmint tecmint 4096 May  7 16:58 Public 
drwxr-xr-x 2 tecmint tecmint 4096 May  7 16:58 Templates 
drwxr-xr-x 2 tecmint tecmint 4096 May  7 16:58 Videos

To remove alias ‘l‘, use the following ‘unalias‘ command.

root@tecmint:~# unalias l

Check if ‘l‘ is still an alias or not.

root@tecmint:~# l

bash: l: command not found

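An alias defined at the prompt lasts only for the current shell session. To make it permanent, append it to your ~/.bashrc and reload the file:

root@tecmint:~# echo "alias l='ls -l'" >> ~/.bashrc
root@tecmint:~# source ~/.bashrc
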
Let’s make a little fun out of this command. Make aliases of certain important commands to other important commands.

alias cd='ls -l' (set alias of ls -l to cd)
alias su='pwd' (set alias of pwd to su)
....
(You can create your own)
....

Now when your friend types ‘cd‘, just think how funny it will be when he gets a directory listing and not a directory change. And when he tries to be ‘su‘, all he gets is the location of the working directory. You can remove the alias later using the command ‘unalias‘ as explained above.

29. Command: df

Report file system disk usage. Useful for users as well as System Administrators to keep track of their disk usage. ‘df‘ reports usage statistics for whole file systems rather than for individual files.

root@tecmint:~# df

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       47929224 7811908  37675948  18% /
none                   4       0         4   0% /sys/fs/cgroup
udev             1005916       4   1005912   1% /dev
tmpfs             202824     816    202008   1% /run
none                5120       0      5120   0% /run/lock
none             1014120     628   1013492   1% /run/shm
none              102400      44    102356   1% /run/user
/dev/sda5         184307   79852     94727  46% /boot
/dev/sda7       95989516   61104  91045676   1% /data
/dev/sda8       91953192   57032  87218528   1% /personal

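The 1K-block figures above are hard to read at a glance; the '-h' switch prints the same report in human-readable units (e.g., 46G instead of 47929224):

root@tecmint:~# df -h
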
For more examples of ‘df‘ command, read the article 12 df Command Examples in Linux.

30. Command: du

Estimate file space usage. It outputs a summary of disk usage by every file, hierarchically, i.e., in a recursive manner.

root@tecmint:~# du

8       ./Daily Pics/wp-polls/images/default_gradient
8       ./Daily Pics/wp-polls/images/default
32      ./Daily Pics/wp-polls/images
8       ./Daily Pics/wp-polls/tinymce/plugins/polls/langs
8       ./Daily Pics/wp-polls/tinymce/plugins/polls/img
28      ./Daily Pics/wp-polls/tinymce/plugins/polls
32      ./Daily Pics/wp-polls/tinymce/plugins
36      ./Daily Pics/wp-polls/tinymce
580     ./Daily Pics/wp-polls
1456    ./Daily Pics
36      ./Plugins/wordpress-author-box
16180   ./Plugins
12      ./May Articles 2013/Xtreme Download Manager
4632    ./May Articles 2013/XCache

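When you only want the grand total for a directory rather than the per-file breakdown, combine the '-s' (summarize) and '-h' (human-readable) switches:

root@tecmint:~# du -sh ./Plugins

16M	./Plugins
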
Note: ‘df‘ only reports usage statistics on file systems, while ‘du‘, on the other hand, measures directory contents. For more ‘du‘ command examples and usage, read 10 du (Disk Usage) Commands.

31. Command: rm

The command ‘rm‘ stands for remove. rm is used to remove file(s) and directories.

Removing a directory

root@tecmint:~# rm PassportApplicationForm_Main_English_V1.0

rm: cannot remove `PassportApplicationForm_Main_English_V1.0': Is a directory

A directory can’t be removed simply by the ‘rm‘ command; you have to use the ‘-rf‘ switch along with ‘rm‘.

root@tecmint:~# rm -rf PassportApplicationForm_Main_English_V1.0

Warning: The “rm -rf” command is destructive if you accidentally point it at the wrong directory. Once you ‘rm -rf‘ a directory, all the files and the directory itself are lost forever, all of a sudden. Use it with caution.

32. Command: echo

echo, as the name suggests, echoes text on the standard output. It has nothing to do with the shell, nor does the shell read the output of the echo command. However, in an interactive script, echo passes a message to the user through the terminal. It is one of the commands commonly used in scripting, especially interactive scripting.

root@tecmint:~# echo "Tecmint.com is a very good website" 

Tecmint.com is a very good website
Creating a small interactive script

1. Create a file named ‘interactive_shell.sh‘ on the desktop (remember, the ‘.sh‘ extension is a must).
2. Copy and paste the below script, exactly as below.

#!/bin/bash 
echo "Please enter your name:" 
   read name 
   echo "Welcome to Linux $name"

Next, set execute permission and run the script.

root@tecmint:~# chmod 777 interactive_shell.sh
root@tecmint:~# ./interactive_shell.sh

Please enter your name:
Ravi Saive
Welcome to Linux Ravi Saive

Note: ‘#!/bin/bash‘ tells the shell that this is a script, and it is always a good idea to include it at the top of a script. ‘read‘ reads the given input.

33. Command: passwd

This is an important command that is useful for changing your own password in the terminal. Obviously you need to know your current password for security reasons.

root@tecmint:~# passwd 

Changing password for tecmint. 
(current) UNIX password: ******** 
Enter new UNIX password: ********
Retype new UNIX password: ********
Password unchanged   [Here the password remains unchanged, i.e., new password = old password]
Enter new UNIX password: #####
Retype new UNIX password:#####

34. Command: lpr

This command prints files named on the command line to the named printer.

root@tecmint:~# lpr -P deskjet-4620-series 1-final.pdf

Note: The ‘lpq‘ command lets you view the status of a printer (whether it’s up or not), and the jobs (files) waiting to be printed.

35. Command: cmp

cmp compares two files of any type and writes the results to the standard output. By default, ‘cmp‘ returns 0 if the files are the same; if they differ, the byte and line number at which the first difference occurred is reported.

To provide examples for this command, let’s consider two files:

file1.txt
root@tecmint:~# cat file1.txt

Hi My name is Tecmint
file2.txt
root@tecmint:~# cat file2.txt

Hi My name is tecmint [dot] com

Now, let’s compare two files and see output of the command.

root@tecmint:~# cmp file1.txt file2.txt 

file1.txt file2.txt differ: byte 15, line 1

36. Command: wget

Wget is a free utility for the non-interactive (i.e., it can work in the background) download of files from the Web. It supports the HTTP, HTTPS and FTP protocols, as well as HTTP proxies.

Download ffmpeg using wget

root@tecmint:~# wget http://downloads.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2

--2013-05-22 18:54:52--  http://downloads.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
Resolving downloads.sourceforge.net (downloads.sourceforge.net)... 216.34.181.59
Connecting to downloads.sourceforge.net (downloads.sourceforge.net)|216.34.181.59|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://kaz.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2 [following]
--2013-05-22 18:54:54--  http://kaz.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
Resolving kaz.dl.sourceforge.net (kaz.dl.sourceforge.net)... 92.46.53.163
Connecting to kaz.dl.sourceforge.net (kaz.dl.sourceforge.net)|92.46.53.163|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 275557 (269K) [application/octet-stream]
Saving to: ‘ffmpeg-php-0.6.0.tbz2’

100%[===========================================================================>] 2,75,557    67.8KB/s   in 4.0s   

2013-05-22 18:55:00 (67.8 KB/s) - ‘ffmpeg-php-0.6.0.tbz2’ saved [275557/275557]

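Two wget switches worth remembering: '-c' resumes a partially downloaded file instead of starting over, and '-O' saves the download under a name of your choosing:

root@tecmint:~# wget -c http://downloads.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
root@tecmint:~# wget -O ffmpeg.tbz2 http://downloads.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
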
37. Command: mount

Mount is an important command which is used to mount a filesystem that doesn’t mount by itself. You need root permission to mount a device.

First run ‘lsblk‘ after plugging in your filesystem, identify your device and note down your device’s assigned name.

root@tecmint:~# lsblk 

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT 
sda      8:0    0 931.5G  0 disk 
├─sda1   8:1    0 923.6G  0 part / 
├─sda2   8:2    0     1K  0 part 
└─sda5   8:5    0   7.9G  0 part [SWAP] 
sr0     11:0    1  1024M  0 rom  
sdb      8:16   1   3.7G  0 disk 
└─sdb1   8:17   1   3.7G  0 part

From this screen it is clear that I plugged in a 4 GB pendrive, thus ‘sdb1‘ is the filesystem to be mounted. Become root to perform this operation and change to the /dev directory, where all the device files reside.

root@tecmint:~# su
Password:
root@tecmint:~# cd /dev

Create a directory named anything, but it should be relevant for reference.

root@tecmint:~# mkdir usb

Now mount filesystem ‘sdb1‘ to directory ‘usb‘.

root@tecmint:~# mount /dev/sdb1 /dev/usb

Now you can navigate to /dev/usb from the terminal or an X-window system and access files from the mounted directory.
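
When you are done with the device, unmount it before unplugging, so that any pending writes are flushed to the stick:

root@tecmint:~# umount /dev/sdb1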

Time for code developers to know how rich the Linux environment is

38. Command: gcc

gcc is the in-built compiler for the ‘C‘ language in the Linux environment. Here is a simple C program; save it on your desktop as Hello.c (remember, the ‘.c‘ extension is a must).

#include <stdio.h>
int main()
{
  printf("Hello world\n");
  return 0;
}
Compile it
root@tecmint:~# gcc Hello.c
Run it
root@tecmint:~# ./a.out 

Hello world

Note: On compiling a C program, the output is automatically written to a new file “a.out”, and every time you compile a C program the same file “a.out” gets overwritten. Hence it is good advice to specify an output file while compiling, so there is no risk of overwriting the output file.

Compile it this way
root@tecmint:~# gcc -o Hello Hello.c

Here ‘-o‘ sends the output to ‘Hello‘ file and not ‘a.out‘. Run it again.

root@tecmint:~# ./Hello 

Hello world

39. Command: g++

g++ is the in-built compiler for ‘C++‘, an object oriented programming language. Here is a simple C++ program; save it on your desktop as Add.cpp (remember, the ‘.cpp‘ extension is a must).

#include <iostream>

using namespace std;

int main() 
    {
          int a;
          int b;
          cout<<"Enter first number:\n";
          cin >> a;
          cout <<"Enter the second number:\n";
          cin>> b;
          cin.ignore();
          int result = a + b;
          cout<<"Result is"<<"  "<<result<<endl;
          cin.get();
          return 0;
     }
Compile it
root@tecmint:~# g++ Add.cpp
Run it
root@tecmint:~# ./a.out

Enter first number: 
...
...

Note: On compiling a C++ program, the output is automatically written to a new file “a.out”, and every time you compile a C++ program the same file “a.out” gets overwritten. Hence it is good advice to specify an output file while compiling, so there is no risk of overwriting the output file.

Compile it this way
root@tecmint:~# g++ -o Add Add.cpp
Run it
root@tecmint:~# ./Add 

Enter first number: 
...
...

40. Command: java

Java is one of the world’s most widely used programming languages and is considered fast, secure, and reliable. Many of today’s web-based services run on Java.

Create a simple Java program by pasting the text below into a file named tecmint.java (remember, the ‘.java‘ extension is a must).

class tecmint {
  public static void main(String[] arguments) {
    System.out.println("Tecmint ");
  }
}
compile it using javac
root@tecmint:~# javac tecmint.java
Run it
root@tecmint:~# java tecmint

Note: Almost every distribution comes packed with the gcc compiler, and most distros have the g++ and Java compilers built in, while some may not. You can apt or yum the required package.
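
For example, to get the Java compiler (package names differ between distributions; the ones below are typical, so verify against your distro’s repositories):

root@tecmint:~# apt-get install default-jdk
OR
root@tecmint:~# yum install java-1.7.0-openjdk-devel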

Don’t forget to leave your valuable comments and mention the type of article you want to see here. I will soon be back with an interesting topic about the lesser known facts about Linux.

20 Advanced Commands for Linux Experts

Thanks for all the likes, good words and support you gave us in the first two parts of this article. In the first article we discussed commands for those users who have just switched to Linux and needed the necessary knowledge to start with.

  1. 20 Useful Commands for Linux Newbies

In the second article we discussed the commands which a middle level user requires to manage his own system.

  1. 20 Advanced Commands for Middle Level Linux Users

What next? In this article I will be explaining those commands required for administering a Linux server.

Linux System Admin Commands

Linux Expert Commands

41. Command: ifconfig

ifconfig is used to configure the kernel-resident network interfaces. It is used at boot time to set up interfaces as necessary. After that, it is usually only needed when debugging or when system tuning is needed.

Check Active Network Interfaces
[avishek@tecmint ~]$ ifconfig 

eth0      Link encap:Ethernet  HWaddr 40:2C:F4:EA:CF:0E  
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0 
          inet6 addr: fe80::422c:f4ff:feea:cf0e/64 Scope:Link 
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 
          RX packets:163843 errors:0 dropped:0 overruns:0 frame:0 
          TX packets:124990 errors:0 dropped:0 overruns:0 carrier:0 
          collisions:0 txqueuelen:1000 
          RX bytes:154389832 (147.2 MiB)  TX bytes:65085817 (62.0 MiB) 
          Interrupt:20 Memory:f7100000-f7120000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0 
          inet6 addr: ::1/128 Scope:Host 
          UP LOOPBACK RUNNING  MTU:16436  Metric:1 
          RX packets:78 errors:0 dropped:0 overruns:0 frame:0 
          TX packets:78 errors:0 dropped:0 overruns:0 carrier:0 
          collisions:0 txqueuelen:0 
          RX bytes:4186 (4.0 KiB)  TX bytes:4186 (4.0 KiB)
Check All Network Interfaces

Display details of all interfaces, including disabled interfaces, using the “-a” argument.

[avishek@tecmint ~]$ ifconfig -a

eth0      Link encap:Ethernet  HWaddr 40:2C:F4:EA:CF:0E  
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0 
          inet6 addr: fe80::422c:f4ff:feea:cf0e/64 Scope:Link 
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 
          RX packets:163843 errors:0 dropped:0 overruns:0 frame:0 
          TX packets:124990 errors:0 dropped:0 overruns:0 carrier:0 
          collisions:0 txqueuelen:1000 
          RX bytes:154389832 (147.2 MiB)  TX bytes:65085817 (62.0 MiB) 
          Interrupt:20 Memory:f7100000-f7120000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0 
          inet6 addr: ::1/128 Scope:Host 
          UP LOOPBACK RUNNING  MTU:16436  Metric:1 
          RX packets:78 errors:0 dropped:0 overruns:0 frame:0 
          TX packets:78 errors:0 dropped:0 overruns:0 carrier:0 
          collisions:0 txqueuelen:0 
          RX bytes:4186 (4.0 KiB)  TX bytes:4186 (4.0 KiB) 

virbr0    Link encap:Ethernet  HWaddr 0e:30:a3:3a:bf:03  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Disable an Interface
[avishek@tecmint ~]$ ifconfig eth0 down
Enable an Interface
[avishek@tecmint ~]$ ifconfig eth0 up
Assign IP Address to an Interface

Assign “192.168.1.12” as the IP address for the interface eth0.

[avishek@tecmint ~]$ ifconfig eth0 192.168.1.12
Change Subnet Mask of Interface eth0
[avishek@tecmint ~]$ ifconfig eth0 netmask 255.255.255.0
Change Broadcast Address of Interface eth0
[avishek@tecmint ~]$ ifconfig eth0 broadcast 192.168.1.255
Assign IP Address, Netmask and Broadcast to Interface eth0
[avishek@tecmint ~]$ ifconfig eth0 192.168.1.12 netmask 255.255.255.0 broadcast 192.168.1.255

Note: If using a wireless network you need to use command “iwconfig“. For more “ifconfig” command examples and usage, read 15 Useful “ifconfig” Commands.
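
Note also that on newer distributions ifconfig is gradually being superseded by the ‘ip‘ command from the iproute2 package. A rough sketch of the equivalents (interface eth0 assumed as above):

[avishek@tecmint ~]$ ip addr show                          (like ‘ifconfig -a‘)
[avishek@tecmint ~]$ ip link set eth0 down                 (like ‘ifconfig eth0 down‘)
[avishek@tecmint ~]$ ip addr add 192.168.1.12/24 dev eth0  (like ‘ifconfig eth0 192.168.1.12 netmask 255.255.255.0‘)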

42. Command: netstat

The netstat command displays various network-related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships, etc.

List All Network Ports
[avishek@tecmint ~]$ netstat -a

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     741379   /run/user/user1/keyring-I5cn1c/gpg
unix  2      [ ACC ]     STREAM     LISTENING     8965     /var/run/acpid.socket
unix  2      [ ACC ]     STREAM     LISTENING     18584    /tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     741385   /run/user/user1/keyring-I5cn1c/ssh
unix  2      [ ACC ]     STREAM     LISTENING     741387   /run/user/user1/keyring-I5cn1c/pkcs11
unix  2      [ ACC ]     STREAM     LISTENING     20242    @/tmp/dbus-ghtTjuPN46
unix  2      [ ACC ]     STREAM     LISTENING     13332    /var/run/samba/winbindd_privileged/pipe
unix  2      [ ACC ]     STREAM     LISTENING     13331    /tmp/.winbindd/pipe
unix  2      [ ACC ]     STREAM     LISTENING     11030    /var/run/mysqld/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     19308    /tmp/ssh-qnZadSgJAbqd/agent.3221
unix  2      [ ACC ]     STREAM     LISTENING     436781   /tmp/HotShots
unix  2      [ ACC ]     STREAM     LISTENING     46110    /run/user/ravisaive/pulse/native
unix  2      [ ACC ]     STREAM     LISTENING     19310    /tmp/gpg-zfE9YT/S.gpg-agent
....
List All TCP Ports
[avishek@tecmint ~]$ netstat -at

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 localhost:mysql         *:*                     LISTEN     
tcp        0      0 *:5901                  *:*                     LISTEN     
tcp        0      0 *:5902                  *:*                     LISTEN     
tcp        0      0 *:x11-1                 *:*                     LISTEN     
tcp        0      0 *:x11-2                 *:*                     LISTEN     
tcp        0      0 *:5938                  *:*                     LISTEN     
tcp        0      0 localhost:5940          *:*                     LISTEN     
tcp        0      0 ravisaive-OptiPl:domain *:*                     LISTEN     
tcp        0      0 ravisaive-OptiPl:domain *:*                     LISTEN     
tcp        0      0 localhost:ipp           *:*                     LISTEN     
tcp        0      0 ravisaive-OptiPle:48270 ec2-23-21-236-70.c:http ESTABLISHED
tcp        0      0 ravisaive-OptiPle:48272 ec2-23-21-236-70.c:http TIME_WAIT  
tcp        0      0 ravisaive-OptiPle:48421 bom03s01-in-f22.1:https ESTABLISHED
tcp        0      0 ravisaive-OptiPle:48269 ec2-23-21-236-70.c:http ESTABLISHED
tcp        0      0 ravisaive-OptiPle:39084 channel-ecmp-06-f:https ESTABLISHED
...
Show Statistics for All Ports
[avishek@tecmint ~]$ netstat -s

Ip:
    4994239 total packets received
    0 forwarded
    0 incoming packets discarded
    4165741 incoming packets delivered
    3248924 requests sent out
    8 outgoing packets dropped
Icmp:
    29460 ICMP messages received
    566 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 98
        redirects: 29362
    2918 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 2918
IcmpMsg:
        InType3: 98
        InType5: 29362
        OutType3: 2918
Tcp:
    94533 active connections openings
    23 passive connection openings
    5870 failed connection attempts
    7194 connection resets received
....

OK! If for some reason you don’t want netstat to resolve host names, port names and user names in its output, use the ‘-n‘ flag.

[avishek@tecmint ~]$ netstat -an

Fine, you may also need to get netstat output continuously, until an interrupt instruction is given (ctrl+c); use the ‘-c‘ flag.

[avishek@tecmint ~]$ netstat -c
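
A combination administrators reach for all the time lists only the listening TCP/UDP sockets together with the owning program (seeing other users’ programs requires root):

[avishek@tecmint ~]$ sudo netstat -tulpn

Here ‘-t‘ is TCP, ‘-u‘ is UDP, ‘-l‘ restricts the list to listening sockets, ‘-p‘ shows the PID/program name and ‘-n‘ skips name resolution.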

For more “netstat” command examples and usage, see the article 20 Netstat Command Examples.

43. Command: nslookup

A network utility program used to obtain information about Internet servers. As its name suggests, the utility finds name server information for domains by querying DNS.

[avishek@tecmint ~]$ nslookup tecmint.com 

Server:		192.168.1.1 
Address:	192.168.1.1#53 

Non-authoritative answer: 
Name:	tecmint.com 
Address: 50.16.67.239
Query Mail Exchanger Record
[avishek@tecmint ~]$ nslookup -query=mx tecmint.com 

Server:		192.168.1.1 
Address:	192.168.1.1#53 

Non-authoritative answer: 
tecmint.com	mail exchanger = 0 smtp.secureserver.net. 
tecmint.com	mail exchanger = 10 mailstore1.secureserver.net. 

Authoritative answers can be found from:
Query Name Server
[avishek@tecmint ~]$ nslookup -type=ns tecmint.com 

Server:		192.168.1.1 
Address:	192.168.1.1#53 

Non-authoritative answer: 
tecmint.com	nameserver = ns3404.com. 
tecmint.com	nameserver = ns3403.com. 

Authoritative answers can be found from:
Query DNS Record
[avishek@tecmint ~]$ nslookup -type=any tecmint.com 

Server:		192.168.1.1 
Address:	192.168.1.1#53 

Non-authoritative answer: 
tecmint.com	mail exchanger = 10 mailstore1.secureserver.net. 
tecmint.com	mail exchanger = 0 smtp.secureserver.net. 
tecmint.com	nameserver = ns06.domaincontrol.com. 
tecmint.com	nameserver = ns3404.com. 
tecmint.com	nameserver = ns3403.com. 
tecmint.com	nameserver = ns05.domaincontrol.com. 

Authoritative answers can be found from:
Query Start of Authority
[avishek@tecmint ~]$ nslookup -type=soa tecmint.com 

Server:		192.168.1.1 
Address:	192.168.1.1#53 

Non-authoritative answer: 
tecmint.com 
	origin = ns3403.hostgator.com 
	mail addr = dnsadmin.gator1702.hostgator.com 
	serial = 2012081102 
	refresh = 86400 
	retry = 7200 
	expire = 3600000 
	minimum = 86400 

Authoritative answers can be found from:
Query Port Number

Change the port number used to connect to the name server.

[avishek@tecmint ~]$ nslookup -port=56 tecmint.com

Server:		tecmint.com
Address:	50.16.76.239#53

Name:	56
Address: 14.13.253.12
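
nslookup can also perform a reverse (PTR) lookup, mapping an IP address back to a host name. A minimal sketch, using the address resolved earlier:

[avishek@tecmint ~]$ nslookup 50.16.67.239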

Read Also : 8 Nslookup Commands

44. Command: dig

dig is a tool for querying DNS nameservers for information about host addresses, mail exchanges, nameservers, and related information. This tool can be used from any Linux (Unix) or Macintosh OS X operating system. The most typical use of dig is to simply query a single host.

[avishek@tecmint ~]$ dig tecmint.com

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com 
;; global options: +cmd 
;; Got answer: 
;; ->>HEADER<<- ...
Turn Off Comment Lines
[avishek@tecmint ~]$ dig tecmint.com +nocomments 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com +nocomments 
;; global options: +cmd 
;tecmint.com.			IN	A 
tecmint.com.		14400	IN	A	40.216.66.239 
;; Query time: 418 msec 
;; SERVER: 192.168.1.1#53(192.168.1.1) 
;; WHEN: Sat Jun 29 13:53:22 2013 
;; MSG SIZE  rcvd: 45
Turn Off Authority Section
[avishek@tecmint ~]$ dig tecmint.com +noauthority 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com +noauthority 
;; global options: +cmd 
;; Got answer: 
;; ->>HEADER<<- ...
Turn Off Additional Section
[avishek@tecmint ~]$ dig  tecmint.com +noadditional 

; <<>> DiG 9.9.2-P1 <<>> tecmint.com +noadditional
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- ...
Turn Off Stats Section
[avishek@tecmint ~]$ dig tecmint.com +nostats 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com +nostats 
;; global options: +cmd 
;; Got answer: 
;; ->>HEADER<<- ...
Turn Off Answer Section
[avishek@tecmint ~]$ dig tecmint.com +noanswer 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com +noanswer 
;; global options: +cmd 
;; Got answer: 
;; ->>HEADER<<- ...
Disable All Sections at Once
[avishek@tecmint ~]$ dig tecmint.com +noall 

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> tecmint.com +noall 
;; global options: +cmd
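
In practice ‘+noall‘ is usually combined with ‘+answer‘ to print only the answer section, and ‘+short‘ is an even terser shortcut. A small sketch:

[avishek@tecmint ~]$ dig tecmint.com +noall +answer
[avishek@tecmint ~]$ dig tecmint.com +short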

Read Also : 10 Linux Dig Command Examples

45. Command: uptime

You have just connected to your Linux server machine and found something unusual or malicious. What will you do? Guessing.... NO, definitely not; you could run uptime to verify what actually happened while the server was unattended.

[avishek@tecmint ~]$ uptime

14:37:10 up  4:21,  2 users,  load average: 0.00, 0.00, 0.04

46. Command: wall

One of the most important commands for an administrator: wall sends a message to everybody logged in with their mesg permission set to “yes“. The message can be given as an argument to wall, or it can be sent to wall’s standard input.

[avishek@tecmint ~]$ wall "we will be going down for maintenance for one hour sharply at 03:30 pm"

Broadcast message from root@localhost.localdomain (pts/0) (Sat Jun 29 14:44:02 2013): 

we will be going down for maintenance for one hour sharply at 03:30 pm

47. command: mesg

Lets you control whether other people can use the ‘write‘ command to send text to you over the screen.

mesg [n|y]
n - prevents messages from others popping up on your screen.
y - allows messages to appear on your screen.
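
Run without an argument, mesg simply reports the current setting, which is handy before deciding to change it:

[avishek@tecmint ~]$ mesg
is y
[avishek@tecmint ~]$ mesg n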

48. Command: write

Lets you send text directly to the terminal of another logged-in user, provided their ‘mesg‘ is set to ‘y‘.

[avishek@tecmint ~]$ write ravisaive
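
If the user is logged in on several terminals, you can name the target tty explicitly (pts/1 here is an assumed example); type your message and end it with ctrl+d:

[avishek@tecmint ~]$ write ravisaive pts/1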

49. Command: talk

An enhancement to the write command, talk lets you hold a two-way conversation with logged-in users.

[avishek@tecmint ~]$ talk ravisaive

Note: If the talk command is not installed, you can always apt or yum the required packages.

[avishek@tecmint ~]$ yum install talk
OR
[avishek@tecmint ~]$ apt-get install talk

50. Command: w

Does the command ‘w‘ seem funny to you? Actually it is not. It’s a command, even if it’s just one letter long! The command “w” is a combination of the uptime and who commands, given one immediately after the other, in that order.

[avishek@tecmint ~]$ w

15:05:42 up  4:49,  3 users,  load average: 0.02, 0.01, 0.00 
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT 
server   tty7     :0               14:06    4:43m  1:42   0.08s pam: gdm-passwo 
server   pts/0    :0.0             14:18    0.00s  0.23s  1.65s gnome-terminal 
server   pts/1    :0.0             14:47    4:43   0.01s  0.01s bash

51. Command: rename

As the name suggests, this command renames files. rename (in its util-linux form) renames the specified files by replacing the first occurrence of one string in the file name with another.

Suppose you have files named a1, a2, a3, a4 ..... a1213.

Just type the commands:

 rename a1 a0 a?
 rename a1 a0 a??

Each command replaces the first occurrence of ‘a1‘ with ‘a0‘ in the names of the files matching the given pattern.
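
Note: Debian-based distros usually ship a Perl variant of rename that takes a regular expression instead of a string pair. A small sketch under that assumption:

 rename 's/^a/b/' a*

This would rename a1 to b1, a2 to b2, and so on.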

52. Command: top

Displays running processes along with CPU and memory usage. By default the output refreshes automatically and keeps displaying processes until an interrupt instruction is given (press ‘q‘ or ctrl+c).

[avishek@tecmint ~]$ top

top - 14:06:45 up 10 days, 20:57,  2 users,  load average: 0.10, 0.16, 0.21
Tasks: 240 total,   1 running, 235 sleeping,   0 stopped,   4 zombie
%Cpu(s):  2.0 us,  0.5 sy,  0.0 ni, 97.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   2028240 total,  1777848 used,   250392 free,    81804 buffers
KiB Swap:  3905532 total,   156748 used,  3748784 free,   381456 cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+ COMMAND                                                                                                            
23768 ravisaiv  20   0 1428m 571m  41m S   2.3 28.9  14:27.52 firefox                                                                                                            
24182 ravisaiv  20   0  511m 132m  25m S   1.7  6.7   2:45.94 plugin-containe                                                                                                    
26929 ravisaiv  20   0  5344 1432  972 R   0.7  0.1   0:00.07 top                                                                                                                
24875 ravisaiv  20   0  263m  14m  10m S   0.3  0.7   0:02.76 lxterminal                                                                                                         
    1 root      20   0  3896 1928 1228 S   0.0  0.1   0:01.62 init                                                                                                               
    2 root      20   0     0    0    0 S   0.0  0.0   0:00.06 kthreadd                                                                                                           
    3 root      20   0     0    0    0 S   0.0  0.0   0:17.28 ksoftirqd/0                                                                                                        
    5 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 kworker/0:0H                                                                                                       
    7 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 kworker/u:0H                                                                                                       
    8 root      rt   0     0    0    0 S   0.0  0.0   0:00.12 migration/0                                                                                                        
    9 root      20   0     0    0    0 S   0.0  0.0   0:00.00 rcu_bh                                                                                                             
   10 root      20   0     0    0    0 S   0.0  0.0   0:26.94 rcu_sched                                                                                                          
   11 root      rt   0     0    0    0 S   0.0  0.0   0:01.95 watchdog/0                                                                                                         
   12 root      rt   0     0    0    0 S   0.0  0.0   0:02.00 watchdog/1                                                                                                         
   13 root      20   0     0    0    0 S   0.0  0.0   0:17.80 ksoftirqd/1                                                                                                        
   14 root      rt   0     0    0    0 S   0.0  0.0   0:00.12 migration/1                                                                                                        
   16 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 kworker/1:0H                                                                                                       
   17 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 cpuset                                                                                                             
   18 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 khelper                                                                                                            
   19 root      20   0     0    0    0 S   0.0  0.0   0:00.00 kdevtmpfs                                                                                                          
   20 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 netns                                                                                                              
   21 root      20   0     0    0    0 S   0.0  0.0   0:00.04 bdi-default                                                                                                        
   22 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 kintegrityd                                                                                                        
   23 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 kblockd                                                                                                            
   24 root       0 -20     0    0    0 S   0.0  0.0   0:00.00 ata_sff

Read Also : 12 TOP Command Examples

53. Command: mkfs.ext4

This command creates a new ext4 file system on the specified device. If the wrong device follows this command, the whole block device will be wiped and formatted, hence it is suggested not to run this command unless and until you understand what you are doing.

mkfs.ext4 /dev/sda1 (the sda1 block device will be formatted)
mkfs.ext4 /dev/sdb1 (the sdb1 block device will be formatted)
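
Tip: a volume label makes the new filesystem easier to identify later in ‘lsblk‘ or ‘blkid‘ output. A minimal sketch, with ‘tecmint_data‘ as a hypothetical label, and again only if you are certain sdb1 is the right partition:

mkfs.ext4 -L tecmint_data /dev/sdb1 (formats sdb1 and labels it ‘tecmint_data‘)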

Read Also : What is Ext4 and How to Create and Convert

54. Command: vi/emacs/nano

vi (visual), emacs and nano are some of the most commonly used editors in Linux. They are often used to edit text and configuration files. Below is a quick guide to working with vi and nano; emacs is a topic of its own.

vi-editor
[avishek@tecmint ~]$ touch a.txt (creates a text file a.txt) 
[avishek@tecmint ~]$ vi a.txt (open a.txt with vi editor)

[press ‘i’ to enter insert mode, or you won’t be able to type-in anything]

echo "Hello"  (your text here for the file)
  1. Esc (exits insert mode; press it before typing any editor command).
  2. :wq! (saves the file with the current text and quits; remember ‘!‘ is to override).
nano editor
[avishek@tecmint ~]$ nano a.txt (open a.txt file to be edited with nano)
Edit the file with the required content.

ctrl+x (to close the editor). It will show output as:

Save modified buffer (ANSWERING "No" WILL DESTROY CHANGES) ?                    
 Y Yes 
 N No           ^C Cancel

Press ‘y‘ for yes, then enter the file name, and you are done.

55. Command: rsync

Rsync copies files and has a -P switch for a progress bar. So if you have rsync installed, you could use a simple alias.

alias cp='rsync -aP'

Now try copying a large file in the terminal and watch the output show the remaining items, similar to a progress bar.

Moreover, keeping and maintaining backups is one of the most important and boring tasks a system administrator needs to perform. rsync is a very nice tool (several others exist) for creating and maintaining backups from the terminal.

[avishek@tecmint ~]$ rsync -zvr IMG_5267\ copy\=33\ copy\=ok.jpg ~/Desktop/ 

sending incremental file list 
IMG_5267 copy=33 copy=ok.jpg 

sent 2883830 bytes  received 31 bytes  5767722.00 bytes/sec 
total size is 2882771  speedup is 1.00

Note: ‘-z‘ for compression, ‘-v‘ for verbose and ‘-r‘ for recursive.
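
rsync really shines for backups because on each run it transfers only what has changed since the last one. A sketch, assuming a hypothetical backup host backup.tecmint.com reachable over SSH:

[avishek@tecmint ~]$ rsync -avz ~/Documents/ avishek@backup.tecmint.com:/backups/documents/

Here ‘-a‘ (archive mode) preserves permissions, ownership and timestamps, and the trailing slash on ~/Documents/ copies the directory’s contents rather than the directory itself.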

56. Command: free

Keeping track of memory and resources is as important as any other task performed by an administrator, and the ‘free‘ command comes to the rescue here.

Current Usage Status of Memory
[avishek@tecmint ~]$ free

             total       used       free     shared    buffers     cached
Mem:       2028240    1788272     239968          0      69468     363716
-/+ buffers/cache:    1355088     673152
Swap:      3905532     157076    3748456
Tuned Output in KB, or MB, or GB
[avishek@tecmint ~]$ free -b

             total       used       free     shared    buffers     cached
Mem:    2076917760 1838272512  238645248          0   71348224  372670464
-/+ buffers/cache: 1394253824  682663936
Swap:   3999264768  160845824 3838418944
[avishek@tecmint ~]$ free -k

             total       used       free     shared    buffers     cached
Mem:       2028240    1801484     226756          0      69948     363704
-/+ buffers/cache:    1367832     660408
Swap:      3905532     157076    3748456
[avishek@tecmint ~]$ free -m

             total       used       free     shared    buffers     cached
Mem:          1980       1762        218          0         68        355
-/+ buffers/cache:       1338        641
Swap:         3813        153       3660
[avishek@tecmint ~]$ free -g

             total       used       free     shared    buffers     cached
Mem:             1          1          0          0          0          0
-/+ buffers/cache:          1          0
Swap:            3          0          3
Check Current Usage in Human Readable Format
[avishek@tecmint ~]$ free -h

             total       used       free     shared    buffers     cached
Mem:          1.9G       1.7G       208M         0B        68M       355M
-/+ buffers/cache:       1.3G       632M
Swap:         3.7G       153M       3.6G
Check Status Continuously at Regular Intervals
[avishek@tecmint ~]$ free -s 3

             total       used       free     shared    buffers     cached
Mem:       2028240    1824096     204144          0      70708     364180
-/+ buffers/cache:    1389208     639032
Swap:      3905532     157076    3748456

             total       used       free     shared    buffers     cached
Mem:       2028240    1824192     204048          0      70716     364212
-/+ buffers/cache:    1389264     638976
Swap:      3905532     157076    3748456

Read Also : 10 Examples of Free Command

57. Command: mysqldump

OK, by now you will have understood what this command stands for from its name alone. The mysqldump command dumps (backs up) all databases, or a particular database’s data, into a given file. For example,

[avishek@tecmint ~]$ mysqldump -u root -p --all-databases > /home/server/Desktop/backupfile.sql

Note: mysqldump requires the MySQL server to be running and the correct password for authorization. We have covered some useful “mysqldump” commands at Database Backup with mysqldump Command.
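
Restoring is done with the mysql client itself, by feeding the dump file back in. A minimal sketch, assuming the backup file created above:

[avishek@tecmint ~]$ mysql -u root -p < /home/server/Desktop/backupfile.sql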

58. Command: mkpasswd

Makes a hard-to-guess, random password of the specified length.

[avishek@tecmint ~]$ mkpasswd -l 10

zI4+Ybqfx9
[avishek@tecmint ~]$ mkpasswd -l 20 

w0Pr7aqKk&hmbmqdrlmk

Note: ‘-l 10‘ generates a random password of 10 characters while ‘-l 20‘ generates a password of 20 characters; the length can be set to anything to get the desired result. This command is very useful and is often used in scripts to generate random passwords. You might need to yum or apt the ‘expect‘ package to use this command.

[root@tecmint ~]# yum install expect 
OR
[root@tecmint ~]# apt-get install expect

59. Command: paste

Merges two or more text files line by line. For example, if the content of file1 was:

1 
2 
3 

and file2 was: 

a 
b 
c 
d 
the resulting file3 would be: 

1    a 
2    b 
3    c 
     d
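
The output above is what you would get by running paste on the two files and redirecting the result; a minimal sketch, assuming file1 and file2 as shown:

[avishek@tecmint ~]$ paste file1 file2 > file3

The default delimiter is a tab; use ‘-d‘ to choose another.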

60. Command: lsof

lsof stands for “list open files” and displays all the files that your system currently has open. It’s very useful for figuring out which processes use a certain file, or for displaying all the files opened by a single process. You might also be interested in reading 10 lsof Command Examples.

[avishek@tecmint ~]$ lsof 

COMMAND     PID   TID            USER   FD      TYPE     DEVICE SIZE/OFF       NODE NAME
init          1                  root  cwd       DIR        8,1     4096          2 /
init          1                  root  rtd       DIR        8,1     4096          2 /
init          1                  root  txt       REG        8,1   227432     395571 /sbin/init
init          1                  root  mem       REG        8,1    47080     263023 /lib/i386-linux-gnu/libnss_files-2.17.so
init          1                  root  mem       REG        8,1    42672     270178 /lib/i386-linux-gnu/libnss_nis-2.17.so
init          1                  root  mem       REG        8,1    87940     270187 /lib/i386-linux-gnu/libnsl-2.17.so
init          1                  root  mem       REG        8,1    30560     263021 /lib/i386-linux-gnu/libnss_compat-2.17.so
init          1                  root  mem       REG        8,1   124637     270176 /lib/i386-linux-gnu/libpthread-2.17.so
init          1                  root  mem       REG        8,1  1770984     266166 /lib/i386-linux-gnu/libc-2.17.so
init          1                  root  mem       REG        8,1    30696     262824 /lib/i386-linux-gnu/librt-2.17.so
init          1                  root  mem       REG        8,1    34392     262867 /lib/i386-linux-gnu/libjson.so.0.1.0
init          1                  root  mem       REG        8,1   296792     262889 /lib/i386-linux-gnu/libdbus-1.so.3.7.2
init          1                  root  mem       REG        8,1    34168     262840 /lib/i386-linux-gnu/libnih-dbus.so.1.0.0
init          1                  root  mem       REG        8,1    95616     262848 /lib/i386-linux-gnu/libnih.so.1.0.0
init          1                  root  mem       REG        8,1   134376     270186 /lib/i386-linux-gnu/ld-2.17.so
init          1                  root    0u      CHR        1,3      0t0       1035 /dev/null
init          1                  root    1u      CHR        1,3      0t0       1035 /dev/null
init          1                  root    2u      CHR        1,3      0t0       1035 /dev/null
init          1                  root    3r     FIFO        0,8      0t0       1714 pipe
init          1                  root    4w     FIFO        0,8      0t0       1714 pipe
init          1                  root    5r     0000        0,9        0       6245 anon_inode
init          1                  root    6r     0000        0,9        0       6245 anon_inode
init          1                  root    7u     unix 0xf5e91f80      0t0       8192 @/com/ubuntu/upstart
init          1                  root    8w      REG        8,1     3916        394 /var/log/upstart/teamviewerd.log.1 (deleted)
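
Because the full listing is enormous, lsof is almost always filtered. Two common sketches:

[avishek@tecmint ~]$ lsof -u ravisaive    (files opened by a particular user)
[avishek@tecmint ~]$ lsof -i :80          (processes using TCP or UDP port 80)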

This is not the end; a system administrator does a lot of work to provide you with the nice interface you work on. System administration is truly an art of learning and implementing in as perfect a way as possible. We will try to bring you all the other necessary material a Linux professional must learn; Linux, at its heart, is itself a continuous process of learning. Your good words are always sought, and they encourage us to put in more effort to give you knowledgeable articles. Like and share us, to help us spread.


Download Rapid Photo Downloader Linux 0.9.14

As its name suggests, Rapid Photo Downloader is a photo downloader application. It provides users with a user-friendly interface written with the GTK+ toolkit and compatible mainly with the GNOME desktop environment.

Features at a glance

Its top features are the ability to download recorded videos and images from multiple devices at the same time, download and backup images simultaneously, as well as to generate user-configurable and human readable folder and file names.

Supports a wide range of images

At the moment it supports a wide range of image formats, including CR2, NEF, RAW, ARW, CRW, DNG, DCR, MEF, MRW, MOS, PEF, ORF, RAF, RW2, SRW, and SR2. In addition, a vast amount of video formats are supported, such as 3GP, AVI, MPG, M2T, MP4, MOV, MPEG, MOD, TOD, and MTS.

Translated into over 30 languages

Being translated into over 30 languages, the application is very well documented, easy to configure and use, fast, and supported on a wide range of open source desktop environments, including KDE, LXDE and Xfce.

One of the best photo and video downloader

As the developer describes, the application’s main goal is to become one of the best photo and video downloader software for Linux-based operating systems. It can automatically generate metadata for image and video files, including date, time, shutter speed, aperture and codec; it allows users to rename camera-generated file names, and automagically creates folders for downloaded files.

The perfect tool for professional and amateur photographers alike

Another interesting feature is the ability to synchronize the sequence numbers of both RAW and JPEG files (only available for cameras that support this function). The truth is that there aren’t many open source applications in this category, which makes Rapid Photo Downloader the perfect tool for professional and amateur photographers alike. You should definitely download and install this application if your main hobby happens to be photography.

What’s new in Rapid Photo Downloader 0.9.14:

  • This version contains several bug fixes and some translation updates.

