How to Find Out What Version of Linux You Are Running

There are several ways to find out which version of Linux you are running, along with your distribution name, kernel version, and other details you may want to keep at your fingertips.

Therefore, in this simple yet important guide for new Linux users, I will show you how to do just that. This may seem like a relatively easy task, but knowing your system well is always recommended for a number of reasons, including installing and running the appropriate packages for your Linux version and reporting bugs accurately.

Suggested Read: 5 Ways to Find Out Linux System is 32-bit or 64-bit

With that said, let us proceed to how you can figure out information about your Linux distribution.

Find Out Linux Kernel Version

We will use the uname command, which prints Linux system information such as the kernel version and release name, network hostname, machine hardware name, processor architecture, hardware platform, and operating system.

To find out which version of Linux kernel you are running, type:

$ uname -or

Shows Current Linux Kernel Version Running on System

In the preceding command, the option -o prints the operating system name and -r prints the kernel release version.

You can also use the -a option with the uname command to print all system information, as shown:

$ uname -a

Shows Linux System Information

Next, we will use the /proc filesystem, a virtual filesystem that stores information about processes and other system information; it is mounted at /proc at boot time.

Simply type the command below to display some of your system information including the Linux kernel version:

$ cat /proc/version

Shows Linux System Information

From the image above, you have the following information:

  1. Version of the Linux (kernel) you are running: Linux version 4.5.5-300.fc24.x86_64
  2. Name of the user who compiled your kernel: mockbuild@bkernel01.phx2.fedoraproject.org
  3. Version of the GCC compiler used for building the kernel: gcc version 6.1.1 20160510
  4. Type of the kernel: #1 SMP (Symmetric MultiProcessing kernel) it supports systems with multiple CPUs or multiple CPU cores.
  5. Date and time when the kernel was built: Thu May 19 13:05:32 UTC 2016

Find Out Linux Distribution Name and Release Version

The best way to determine a Linux distribution name and release version is to use the cat /etc/os-release command, which works on almost all Linux systems.
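
For example:

$ cat /etc/os-release

If that file is not present (as may be the case on some older releases), fall back on the distribution-specific release files below.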

---------- On Red Hat Linux ---------- 
$ cat /etc/redhat-release

---------- On CentOS Linux ---------- 
$ cat /etc/centos-release

---------- On Fedora Linux ---------- 
$ cat /etc/fedora-release

---------- On Debian Linux ---------- 
$ cat /etc/debian_version

---------- On Ubuntu and Linux Mint ---------- 
$ cat /etc/lsb-release

---------- On Gentoo Linux ---------- 
$ cat /etc/gentoo-release

---------- On SuSE Linux ---------- 
$ cat /etc/SuSE-release

Find Linux Distribution Name and Release Version

In this article, we walked through a brief and simple guide intended to help new Linux users find out the Linux version they are running, and to get to know their Linux distribution name and version from the shell prompt.

Perhaps it can also be useful to advanced users on one or two occasions. Lastly, to reach us for assistance or to offer suggestions, use the feedback form below.

Source

20 Linux YUM (Yellowdog Updater, Modified) Commands for Package Management

In this article, we will learn how to install, update, remove, and search for packages, as well as manage packages and repositories, on Linux systems using the YUM (Yellowdog Updater, Modified) tool developed by Red Hat. The example commands shown in this article were tested on our CentOS 6.3 server; you can use this material for study, certifications, or simply to explore ways to install new packages and keep your system up to date. The only prerequisite is a basic understanding of shell commands and a working Linux operating system where you can explore and practice all the commands listed below.

20 Linux Yum Commands

What is YUM?

YUM (Yellowdog Updater, Modified) is an open-source command-line and graphical package management tool for RPM (Red Hat Package Manager) based Linux systems. It allows users and system administrators to easily install, update, remove, or search for software packages on a system. It was developed and released by Seth Vidal under the GPL (General Public License) as open source, meaning anyone is allowed to download and access the code to fix bugs and develop customized packages. YUM uses numerous third-party repositories and installs packages automatically by resolving their dependencies.

1. Install a Package with YUM

To install a package called Firefox, just run the command below; it will automatically find and install all required dependencies for Firefox.

# yum install firefox
Loaded plugins: fastestmirror
Dependencies Resolved

================================================================================================
 Package                    Arch        Version                    Repository            Size        
================================================================================================
Updating:
firefox                        i686        10.0.6-1.el6.centos     updates             20 M
Updating for dependencies:
 xulrunner                     i686        10.0.6-1.el6.centos     updates             12 M

Transaction Summary
================================================================================================
Install       0 Package(s)
Upgrade       2 Package(s)

Total download size: 32 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): firefox-10.0.6-1.el6.centos.i686.rpm                                |  20 MB   01:10
(2/2): xulrunner-10.0.6-1.el6.centos.i686.rpm                              |  12 MB   00:52
------------------------------------------------------------------------------------------------
Total                                                           63 kB/s |  32 MB   02:04

Updated:
  firefox.i686 0:10.0.6-1.el6.centos

Dependency Updated:
  xulrunner.i686 0:10.0.6-1.el6.centos

Complete!

The above command will ask for confirmation before installing any package on your system. If you want to install packages automatically without any confirmation prompt, use the -y option as shown in the example below.

# yum -y install firefox

2. Removing a Package with YUM

To remove a package completely along with all of its dependencies, just run the following command.

# yum remove firefox
Loaded plugins: fastestmirror
Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package firefox.i686 0:10.0.6-1.el6.centos set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                    Arch        Version                        Repository            Size        
====================================================================================================
Removing:
 firefox                    i686        10.0.6-1.el6.centos            @updates              23 M

Transaction Summary
====================================================================================================
Remove        1 Package(s)
Reinstall     0 Package(s)
Downgrade     0 Package(s)

Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing        : firefox-10.0.6-1.el6.centos.i686                                                                                                                          1/1

Removed:
  firefox.i686 0:10.0.6-1.el6.centos

Complete!

Likewise, the above command will ask for confirmation before removing a package. To disable the confirmation prompt, just add the -y option as shown below.

# yum -y remove firefox

3. Updating a Package using YUM

Let’s say you have an outdated version of the MySQL package and you want to update it to the latest stable version. Just run the following command; it will automatically resolve all dependency issues and install the update.

# yum update mysql
Loaded plugins: fastestmirror
Dependencies Resolved

============================================================================================================
 Package            Arch                Version                    Repository                    Size
============================================================================================================
Updating:
 vsftpd             i386                2.0.5-24.el5_8.1           updates                       144 k

Transaction Summary
============================================================================================================
Install       0 Package(s)
Upgrade       1 Package(s)

Total size: 144 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating       : vsftpd                                                                     1/2
  Cleanup        : vsftpd                                                                     2/2

Updated:
  vsftpd.i386 0:2.0.5-24.el5_8.1

Complete!

4. List a Package using YUM

Use the list function to search for a specific package by name. For example, to search for a package called openssh, use the command:

# yum list openssh
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.neu.edu.cn
 * epel: mirror.neu.edu.cn
 * extras: mirror.neu.edu.cn
 * rpmforge: mirror.nl.leaseweb.net
 * updates: mirror.nus.edu.sg
Installed Packages
openssh.i386                                       4.3p2-72.el5_6.3                                                                      installed
Available Packages
openssh.i386                                       4.3p2-82.el5                                                                          base

To make your search more accurate, specify the package name along with its version, if you know it. For example, to search for the specific version openssh-4.3p2 of the package, use the command:

# yum list openssh-4.3p2

5. Search for a Package using YUM

If you don’t remember the exact name of the package, use the search function to look through all the available packages that match the name you specify. For example, to search for all the packages that match the word vsftpd, run:

# yum search vsftpd
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.neu.edu.cn
 * epel: mirror.neu.edu.cn
 * extras: mirror.neu.edu.cn
 * rpmforge: mirror.nl.leaseweb.net
 * updates: ftp.iitm.ac.in
============================== Matched: vsftpd ========================
ccze.i386 : A robust log colorizer
pure-ftpd-selinux.i386 : SELinux support for Pure-FTPD
vsftpd.i386 : vsftpd - Very Secure Ftp Daemon

6. Get Information of a Package using YUM

Say you would like to know more about a package before installing it. To get information about a package, just issue the command below.

# yum info firefox
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.neu.edu.cn
 * epel: mirror.neu.edu.cn
 * extras: mirror.neu.edu.cn
 * rpmforge: mirror.nl.leaseweb.net
 * updates: ftp.iitm.ac.in
Available Packages
Name       : firefox
Arch       : i386
Version    : 10.0.6
Release    : 1.el5.centos
Size       : 20 M
Repo       : updates
Summary    : Mozilla Firefox Web browser
URL        : http://www.mozilla.org/projects/firefox/
License    : MPLv1.1 or GPLv2+ or LGPLv2+
Description: Mozilla Firefox is an open-source web browser, designed for standards
           : compliance, performance and portability.

7. List all Available Packages using YUM

To list all the available packages in the Yum database, use the below command.

# yum list | less

8. List all Installed Packages using YUM

To list all the installed packages on a system, just issue the command below.

# yum list installed | less

9. Yum Provides Function

The yum provides function is used to find out which package a specific file belongs to. For example, if you would like to know the name of the package that provides the file /etc/httpd/conf/httpd.conf, run:

# yum provides /etc/httpd/conf/httpd.conf
Loaded plugins: fastestmirror
httpd-2.2.3-63.el5.centos.i386 : Apache HTTP Server
Repo        : base
Matched from:
Filename    : /etc/httpd/conf/httpd.conf

httpd-2.2.3-63.el5.centos.1.i386 : Apache HTTP Server
Repo        : updates
Matched from:
Filename    : /etc/httpd/conf/httpd.conf

httpd-2.2.3-65.el5.centos.i386 : Apache HTTP Server
Repo        : updates
Matched from:
Filename    : /etc/httpd/conf/httpd.conf

httpd-2.2.3-53.el5.centos.1.i386 : Apache HTTP Server
Repo        : installed
Matched from:
Other       : Provides-match: /etc/httpd/conf/httpd.conf

10. Check for Available Updates using Yum

To find out how many of the installed packages on your system have updates available, use the following command.

# yum check-update

11. Update System using Yum

To keep your system up to date with all security and binary package updates, run the following command. It will install all the latest patches and security updates on your system.

# yum update

12. List all available Group Packages

In Linux, a number of packages are bundled into particular groups. Instead of installing individual packages with yum, you can install a particular group, which installs all the related packages that belong to that group. For example, to list all the available groups, just issue the following command.

# yum grouplist
Installed Groups:
   Administration Tools
   DNS Name Server
   Dialup Networking Support
   Editors
   Engineering and Scientific
   FTP Server
   Graphics
   Java Development
   Legacy Network Server
Available Groups:
   Authoring and Publishing
   Base
   Beagle
   Cluster Storage
   Clustering
   Development Libraries
   Development Tools
   Eclipse
   Educational Software
   KDE (K Desktop Environment)
   KDE Software Development

13. Install a Group Packages

To install a particular package group, we use the groupinstall option. For example, to install “MySQL Database“, just execute the command below.

# yum groupinstall 'MySQL Database'
Dependencies Resolved

=================================================================================================
Package								Arch      Version			 Repository        Size
=================================================================================================
Updating:
 unixODBC                           i386      2.2.11-10.el5      base              290 k
Installing for dependencies:
 unixODBC-libs                      i386      2.2.11-10.el5      base              551 k

Transaction Summary
=================================================================================================
Install       1 Package(s)
Upgrade       1 Package(s)

Total size: 841 k
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : unixODBC-libs	1/3
  Updating       : unixODBC         2/3
  Cleanup        : unixODBC         3/3

Dependency Installed:
  unixODBC-libs.i386 0:2.2.11-10.el5

Updated:
  unixODBC.i386 0:2.2.11-10.el5

Complete!

14. Update a Group Packages

To update any installed package group, just run the following command.

# yum groupupdate 'DNS Name Server'

Dependencies Resolved
================================================================================================================
 Package			Arch	        Version				Repository           Size
================================================================================================================
Updating:
 bind                           i386            30:9.3.6-20.P1.el5_8.2          updates              981 k
 bind-chroot                    i386            30:9.3.6-20.P1.el5_8.2          updates              47 k
Updating for dependencies:
 bind-libs                      i386            30:9.3.6-20.P1.el5_8.2          updates              864 k
 bind-utils                     i386            30:9.3.6-20.P1.el5_8.2          updates              174 k

Transaction Summary
================================================================================================================
Install       0 Package(s)
Upgrade       4 Package(s)

Total size: 2.0 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating       : bind-libs            1/8
  Updating       : bind                 2/8
  Updating       : bind-chroot          3/8
  Updating       : bind-utils           4/8
  Cleanup        : bind                 5/8
  Cleanup        : bind-chroot          6/8
  Cleanup        : bind-utils           7/8
  Cleanup        : bind-libs            8/8

Updated:
  bind.i386 30:9.3.6-20.P1.el5_8.2                  bind-chroot.i386 30:9.3.6-20.P1.el5_8.2

Dependency Updated:
  bind-libs.i386 30:9.3.6-20.P1.el5_8.2             bind-utils.i386 30:9.3.6-20.P1.el5_8.2

Complete!

15. Remove a Group Packages

To delete or remove any installed package group from the system, just use the command below.

# yum groupremove 'DNS Name Server'

Dependencies Resolved

===========================================================================================================
 Package                Arch              Version                         Repository          Size
===========================================================================================================
Removing:
 bind                   i386              30:9.3.6-20.P1.el5_8.2          installed           2.1 M
 bind-chroot            i386              30:9.3.6-20.P1.el5_8.2          installed           0.0

Transaction Summary
===========================================================================================================
Remove        2 Package(s)
Reinstall     0 Package(s)
Downgrade     0 Package(s)

Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Erasing        : bind                                                   1/2
warning: /etc/sysconfig/named saved as /etc/sysconfig/named.rpmsave
  Erasing        : bind-chroot                                            2/2

Removed:
  bind.i386 30:9.3.6-20.P1.el5_8.2                                        bind-chroot.i386 30:9.3.6-20.P1.el5_8.2

Complete!

16. List Enabled Yum Repositories

To list all enabled Yum repositories on your system, use the following option.

# yum repolist

repo id                     repo name                                            status
base                        CentOS-5 - Base                                      enabled:  2,725
epel                        Extra Packages for Enterprise Linux 5 - i386         enabled:  5,783
extras                      CentOS-5 - Extras                                    enabled:    282
mod-pagespeed               mod-pagespeed                                        enabled:      1
rpmforge                    RHEL 5 - RPMforge.net - dag                          enabled: 11,290
updates                     CentOS-5 - Updates                                   enabled:    743
repolist: 20,824

16. List all Enabled and Disabled Yum Repositories

The following command will display all enabled and disabled yum repositories on the system.

# yum repolist all

repo id                     repo name                                            status
C5.0-base                   CentOS-5.0 - Base                                    disabled
C5.0-centosplus             CentOS-5.0 - Plus                                    disabled
C5.0-extras                 CentOS-5.0 - Extras                                  disabled
base                        CentOS-5 - Base                                      enabled:  2,725
epel                        Extra Packages for Enterprise Linux 5 - i386         enabled:  5,783
extras                      CentOS-5 - Extras                                    enabled:    282
repolist: 20,824

17. Install a Package from Specific Repository

To install a particular package from a specific enabled or disabled repository, you must use the --enablerepo option in your yum command. For example, to install the phpMyAdmin package, just execute the command:

# yum --enablerepo=epel install phpmyadmin

Dependencies Resolved
=============================================================================================
 Package                Arch           Version            Repository           Size
=============================================================================================
Installing:
 phpMyAdmin             noarch         3.5.1-1.el6        epel                 4.2 M

Transaction Summary
=============================================================================================
Install       1 Package(s)

Total download size: 4.2 M
Installed size: 17 M
Is this ok [y/N]: y
Downloading Packages:
phpMyAdmin-3.5.1-1.el6.noarch.rpm                       | 4.2 MB     00:25
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : phpMyAdmin-3.5.1-1.el6.noarch             1/1
  Verifying  : phpMyAdmin-3.5.1-1.el6.noarch             1/1

Installed:
  phpMyAdmin.noarch 0:3.5.1-1.el6

Complete!

18. Interactive Yum Shell

The yum utility provides an interactive shell in which you can execute multiple commands in a single session.

# yum shell
Loaded plugins: fastestmirror
Setting up Yum Shell
> update httpd
Loading mirror speeds from cached hostfile
 * base: mirrors.sin3.sg.voxel.net
 * epel: ftp.riken.jp
 * extras: mirrors.sin3.sg.voxel.net
 * updates: mirrors.sin3.sg.voxel.net
Setting up Update Process
>
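
Inside the yum shell you can queue several operations and then execute them together. A rough sketch of such a session follows (the package names are arbitrary examples):

> install htop
> remove nano
> run

Type exit to leave the shell.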

19. Clean Yum Cache

By default, yum keeps package data for each enabled repository under /var/cache/yum/, in one sub-directory per repository. To clean all cached files from the enabled repositories, run the following command regularly so the cache does not take up unnecessary disk space. We are not showing the output of the command below, because we would like to keep our cached data as it is.

# yum clean all

20. View History of Yum

To view all the past transactions of yum command, just use the following command.

# yum history

Loaded plugins: fastestmirror
ID     | Login user               | Date and time    | Action(s)      | Altered
-------------------------------------------------------------------------------
    10 | root               | 2012-08-11 15:19 | Install        |    3
     9 | root               | 2012-08-11 15:11 | Install        |    1
     8 | root               | 2012-08-11 15:10 | Erase          |    1 EE
     7 | root               | 2012-08-10 17:44 | Install        |    1
     6 | root               | 2012-08-10 12:19 | Install        |    2
     5 | root               | 2012-08-10 12:14 | Install        |    3
     4 | root               | 2012-08-10 12:12 | I, U           |   13 E<
     3 | root               | 2012-08-09 13:01 | Install        |    1 >
     2 | root               | 2012-08-08 20:13 | I, U           |  292 EE
     1 | System            | 2012-08-08 17:15 | Install        |  560
history list

We have tried to cover all the basic to advanced yum commands with examples. If we have missed any yum command, please let us know through the comment box, and we will keep updating the article based on the feedback received.

Source

How to Run MySQL/MariaDB Queries Directly from the Linux Command Line

If you are in charge of managing a database server, from time to time you may need to run a query and inspect it carefully. While you can do that from the MySQL/MariaDB shell, this tip will allow you to execute MySQL/MariaDB queries directly from the Linux command line AND save the output to a file for later inspection (this is particularly useful if the query returns lots of records).

Let us look at some simple examples of running queries directly from the command line before we move on to a more advanced query.

To view all the databases on your server, you can issue the following command:

# mysql -u root -p -e "show databases;"

Next, to create a database table named tutorials in the database tecmintdb, run the command below:

$ mysql -u root -p -e "USE tecmintdb; CREATE TABLE tutorials(tut_id INT NOT NULL AUTO_INCREMENT, tut_title VARCHAR(100) NOT NULL, tut_author VARCHAR(40) NOT NULL, submission_date DATE, PRIMARY KEY (tut_id));"
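
To confirm that the table was created, you could insert a test row and read it back in the same way (the values below are just placeholders):

$ mysql -u root -p -e "USE tecmintdb; INSERT INTO tutorials (tut_title, tut_author, submission_date) VALUES ('My First Tutorial', 'Tecmint', CURDATE()); SELECT * FROM tutorials;"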

We will use the following command and pipe the output to the tee command followed by the filename where we want to store the output.

Suggested Read: 20 MySQL/MariaDB Commands for Database Administration in Linux

For illustration, we will use a database named employees and a simple join between the employees and salaries tables. In your own case, just type the SQL query between the quotes and hit Enter.

Note that you will be prompted to enter the password for the database user:

# mysql -u root -p -e "USE employees; SELECT DISTINCT A.first_name, A.last_name FROM employees A JOIN salaries B ON A.emp_no = B.emp_no WHERE hire_date < '1985-01-31';" | tee queryresults.txt

View the query results with the help of cat command.

# cat queryresults.txt

Run MySQL/MariaDB Queries from Commandline

With the query results in a plain text file, you can process the records more easily using other command-line utilities.
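
For instance, since mysql emits tab-separated columns when its output is piped, you could count or summarize the records with standard tools. A rough sketch (adjust the column number to your own query):

# wc -l queryresults.txt
# awk -F'\t' '{print $2}' queryresults.txt | sort | uniq -c | sort -rn | head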

Summary

We have shared several Linux tips that you, as a system administrator, may find useful when it comes to automating your daily Linux tasks or performing them more easily.

Suggested Read: How to Backup and Restore MySQL/MariaDB Databases

Do you have any other tips that you would like to share with the rest of the community? If so, please do so using the comment form below.

Otherwise, feel free to let us know your thoughts about the assortment of tips that we have looked at, or what we can add or possibly do to improve each of them. We look forward to hearing from you!

Source

15 Basic ‘ls’ Command Examples in Linux

The ls command is one of the most frequently used commands in Linux. I believe it is the first command you use when you get to the command prompt of a Linux box.

We use the ls command on a daily basis, even though we may not be aware of, and may never use, all the options available. In this article, we’ll be discussing the basic ls command, where we have tried to cover as many parameters as possible.

Linux ls Command

1. List Files using ls with no option

ls with no options lists files and directories in a bare format, where we won’t be able to view details like file type, size, modified date and time, permissions, links, etc.

# ls

0001.pcap        Desktop    Downloads         index.html   install.log.syslog  Pictures  Templates
anaconda-ks.cfg  Documents  fbcmd_update.php  install.log  Music               Public    Videos

2. List Files With option -l

Here, ls -l (-l is the letter ‘l’, not the number one) shows the file or directory size, modified date and time, file or folder name, the owner of the file, and its permissions.

# ls -l

total 176
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos

3. View Hidden Files

List all files, including hidden files starting with ‘.‘.

# ls -a

.                .bashrc  Documents         .gconfd          install.log         .nautilus     .pulse-cookie
..               .cache   Downloads         .gnome2          install.log.syslog  .netstat.swp  .recently-used.xbel
0001.pcap        .config  .elinks           .gnome2_private  .kde                .opera        .spice-vdagent
anaconda-ks.cfg  .cshrc   .esd_auth         .gtk-bookmarks   .libreoffice        Pictures      .tcshrc
.bash_history    .dbus    .fbcmd            .gvfs            .local              .pki          Templates
.bash_logout     Desktop  fbcmd_update.php  .ICEauthority    .mozilla            Public        Videos
.bash_profile    .digrc   .gconf            index.html       Music               .pulse        .wireshark

4. List Files with Human Readable Format with option -lh

Combining the -l and -h options shows file sizes in human-readable format.

# ls -lh

total 176K
-rw-r--r--. 1 root root  683 Aug 19 09:59 0001.pcap
-rw-------. 1 root root 1.6K Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Documents
drwxr-xr-x. 4 root root 4.0K Aug 16 02:55 Downloads
-rw-r--r--. 1 root root  21K Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root  46K Jul 31 09:58 index.html
-rw-r--r--. 1 root root  48K Jul 31 02:17 install.log
-rw-r--r--. 1 root root  12K Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Music
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Public
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Templates
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Videos

5. List Files and Directories with ‘/’ Character at the end

Using the -F option with the ls command will add a ‘/’ character at the end of each directory name.

# ls -F

0001.pcap        Desktop/    Downloads/        index.html   install.log.syslog  Pictures/  Templates/
anaconda-ks.cfg  Documents/  fbcmd_update.php  install.log  Music/              Public/    Videos/

6. List Files in Reverse Order

The following command with the ls -r option displays files and directories in reverse order.

# ls -r

Videos     Public    Music               install.log  fbcmd_update.php  Documents  anaconda-ks.cfg
Templates  Pictures  install.log.syslog  index.html   Downloads         Desktop    0001.pcap

7. Recursively list Sub-Directories

The ls -R option recursively lists very long directory trees. See an example of the command’s output below.

# ls -R

total 1384
-rw-------. 1 root     root      33408 Aug  8 17:25 anaconda.log
-rw-------. 1 root     root      30508 Aug  8 17:25 anaconda.program.log

./httpd:
total 132
-rw-r--r--  1 root root     0 Aug 19 03:14 access_log
-rw-r--r--. 1 root root 61916 Aug 10 17:55 access_log-20120812

./lighttpd:
total 68
-rw-r--r--  1 lighttpd lighttpd  7858 Aug 21 15:26 access.log
-rw-r--r--. 1 lighttpd lighttpd 37531 Aug 17 18:21 access.log-20120819

./nginx:
total 12
-rw-r--r--. 1 root root    0 Aug 12 03:17 access.log
-rw-r--r--. 1 root root  390 Aug 12 03:17 access.log-20120812.gz

8. Reverse Output Order

Combining -ltr shows the most recently modified files or directories last.

# ls -ltr

total 176
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap

9. Sort Files by File Size

Combining -lS sorts the listing by file size, displaying the biggest files first.

# ls -lS

total 176
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap

10. Display Inode number of File or Directory

With the -i option, ls lists each file/directory with its inode number printed before the name.

# ls -i

20112 0001.pcap        23610 Documents         23793 index.html          23611 Music     23597 Templates
23564 anaconda-ks.cfg  23595 Downloads            22 install.log         23612 Pictures  23613 Videos
23594 Desktop          23585 fbcmd_update.php     35 install.log.syslog  23601 Public

11. Shows version of ls command

Check version of ls command.

# ls --version

ls (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Richard M. Stallman and David MacKenzie.

12. Show Help Page

List the help page of the ls command with all of its options.

# ls --help

Usage: ls [OPTION]... [FILE]...

13. List Directory Information

The ls -l /tmp command lists the files under the /tmp directory, whereas the -ld parameters display information about the /tmp directory itself.

# ls -l /tmp
total 408
drwx------. 2 narad narad   4096 Aug  2 02:00 CRX_75DAF8CB7768
-r--------. 1 root  root  384683 Aug  4 12:28 htop-1.0.1.tar.gz
drwx------. 2 root  root    4096 Aug  4 11:20 keyring-6Mfjnk
drwx------. 2 root  root    4096 Aug 16 01:33 keyring-pioZJr
drwx------. 2 gdm   gdm     4096 Aug 21 11:26 orbit-gdm
drwx------. 2 root  root    4096 Aug 19 08:41 pulse-gl6o4ZdxQVrX
drwx------. 2 narad narad   4096 Aug  4 08:16 pulse-UDH76ExwUVoU
drwx------. 2 gdm   gdm     4096 Aug 21 11:26 pulse-wJtcweUCtvhn
-rw-------. 1 root  root     300 Aug 16 03:34 yum_save_tx-2012-08-16-03-34LJTAa1.yumtx
# ls -ld /tmp/

drwxrwxrwt. 13 root root 4096 Aug 21 12:48 /tmp/

14. Display UID and GID of Files

To display the UID and GID of files and directories, use the -n option with the ls command.

# ls -n

total 36
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Downloads
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Music
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Pictures
-rw-rw-r--. 1 500 500   12 Aug 21 13:06 tmp.txt
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Videos

15. ls command and its Aliases

We have made an alias for the ls command; when we execute ls, it will take the -l option by default and display a long listing as mentioned earlier.

# alias ls="ls -l"

Note: We can see the aliases defined on your system with the alias command below, and an alias can be removed with unalias as shown in the example further down.

# alias

alias cp='cp -i'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

To remove an alias previously defined, just use the unalias command.

# unalias ls

Source

How to Access a Remote Server Using a Jump Host

A jump host (also known as a jump server) is an intermediary host or an SSH gateway to a remote network, through which a connection can be made to another host in a dissimilar security zone, for example a demilitarized zone (DMZ). It bridges two dissimilar security zones and offers controlled access between them.

A jump host should be highly secured and monitored, especially when it spans a private network and a DMZ with servers providing services to users on the internet.

A classic scenario is connecting from your desktop or laptop inside your company’s internal network, which is highly secured with firewalls, to a DMZ. In order to easily manage a server in the DMZ, you may access it via a jump host.

In this article, we will demonstrate how to access a remote Linux server via a jump host and also we will configure necessary settings in your per-user SSH client configurations.

Consider the following scenario.

SSH Jump Host

In the above scenario, you want to connect to HOST 2, but you have to go through HOST 1 because of firewalling, routing, and access privileges. There are a number of valid reasons why jump hosts are needed.

Dynamic Jumphost List

The simplest way to connect to a target server via a jump host is using the -J flag from the command line. This tells ssh to make a connection to the jump host and then establish TCP forwarding to the target server from there (make sure you have passwordless SSH login set up between the machines).

$ ssh -J host1 host2

If usernames or ports on machines differ, specify them on the terminal as shown.

$ ssh -J username@host1:port username@host2:port	  

Multiple Jumphosts List

The same syntax can be used to make jumps over multiple servers.

$ ssh -J username@host1:port,username@host2:port username@host3:port

Static Jumphost List

A static jumphost list means that you know the jumphost or jumphosts you need in order to reach a machine. Therefore you need to add the following static jumphost ‘routing’ to the ~/.ssh/config file and specify the host aliases as shown.

### First jumphost. Directly reachable
Host vps1
  HostName vps1.example.org

### Host to jump to via vps1.example.org
Host contabo
  HostName contabo.example.org
  ProxyJump vps1

Now try to connect to a target server via a jump host as shown.

$ ssh -J vps1 contabo

Login to Target Host via Jumphost

The second method is to use the ProxyCommand option to add the jumphost configuration to your ~/.ssh/config or $HOME/.ssh/config file as shown.

In this example, the target host is contabo and the jumphost is vps1.

Host vps1
	HostName vps1.example.org
	IdentityFile ~/.ssh/vps1.pem
	User ec2-user

Host contabo
	HostName contabo.example.org	
	IdentityFile ~/.ssh/contabovps
	Port 22
	User admin	
	ProxyCommand ssh -q -W %h:%p vps1

Here, ProxyCommand ssh -q -W %h:%p vps1 means: run ssh in quiet mode (-q) and stdio forwarding (-W) mode, redirecting the connection to the target host and port (%h:%p) through the intermediate host vps1.

Then try to access your target host as shown.

$ ssh contabo

The above command will first open an SSH connection to vps1 in the background, as directed by the ProxyCommand, and thereafter start the SSH session to the target server contabo.

For more information, see the ssh man page or refer to: OpenSSH/Cookbook/Proxies and Jump Hosts.

That’s all for now! In this article, we have demonstrated how to access a remote server via a jump host. Use the feedback form below to ask any questions or share your thoughts with us.

Source

A Definitive Series to Learn Java Programming for Beginners

We are pleased to announce our dedicated series of posts on the Java programming language, in response to demand from our readers. In this series we are going to cover everything you need to know about Java.

Learn Java Programming

Why Java?

Java is a general-purpose, object-oriented programming language created by James Gosling. It is known for many features that set it apart from other programming languages, and it has remained in demand ever since its initial release. It is one of the most powerful programming languages, and it can do amazing things when combined with the power of Linux. Linux + Java is the future. The most talked-about features of Java are:

  1. General Purpose Programming Language
  2. Object Oriented approach
  3. Friendly Syntax
  4. Portability
  5. Memory Management Feature
  6. Architecture Neutral
  7. Interpreted

This tutorial is for those who already know another programming and/or scripting language and want to learn Java from the ground up.

What you need to get started with Java

The first thing you need is to install the Java compiler and set the path. Detailed instructions to install the latest version of Java and set the path are here [Install Java JDK/JRE in Linux]. Once the Java compiler is installed and the path is set, run:

$ java -version
Sample Output
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

The second thing you need is a text editor. You may use any text editor of your choice, whether command-line based or GUI. My favorite editors are nano (command line) and gedit (GUI).

You may use any editor, but make sure you don’t use an Integrated Development Environment (IDE). If you start with an IDE at this level, you will miss understanding a few fundamentals, so say no to IDEs for now.

The last things you need are a good step-by-step guide (which I will be providing) and a never-ending quest to learn Java.

Topics we are going to cover

This is an ever-expanding list of topics, and there is nothing hard and fast about it. I will keep adding topics to this section as we dive deeper into Java. From here you may jump anywhere, but I suggest you go through all the topics in order.

Part 6: Understanding Class and Objects in Java to Create Object in Java
Part 7: Understanding Java Variables and its Types and Introduction to Keywords
Part 8: Behavior of objects in JVM and variable Initialization in Java
Part 9: Local Variables and Instances in Java
Part 10: How to Code and Develop Your First Class in Java

We have always had the support of our readers, and once again we seek the support of our beloved readers to make this Java series popular on Tecmint. Fasten your seat belts and let’s start. Keep following.

What is Java? A Brief History about Java

Java is a general-purpose, class-based, object-oriented, platform-independent, architecturally neutral, multithreaded, dynamic, distributed, portable, robust, interpreted programming language.

What is Java and Brief History about Java

Why Java is called:

General Purpose

Java’s capabilities are not limited to any specific application domain; rather, it can be used in various application domains, and hence it is called a general-purpose programming language.

Class based

Java is a class-based programming language, which means Java supports the inheritance feature of object-oriented programming.

Object oriented

Java being object-oriented means that software developed in Java is a combination of different types of objects.

Platform Independent

Java code will run on any JVM (Java Virtual Machine). You can literally run the same Java code on a Windows JVM, a Linux JVM, a Mac JVM, or any other JVM and get the same result every time.

Java Platform Independent

Architecturally Neutral

Java code does not depend on the processor architecture. A Java application compiled on a 64-bit architecture of any platform will run on a 32-bit (or any other architecture) system without any issue.

Multithreaded

A thread in Java refers to an independent path of execution within a program. Java supports multithreading, which means Java is capable of running many tasks simultaneously, sharing the same memory.
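
As a rough illustration (the class name and messages below are made up for this example), two threads can run the same task at the same time:

public class ThreadDemo
{
    public static void main(String[] args)
    {
        // A task that simply prints the name of the thread executing it
        Runnable task = new Runnable()
        {
            public void run()
            {
                System.out.println("Running in: " + Thread.currentThread().getName());
            }
        };

        new Thread(task).start();   // first thread
        new Thread(task).start();   // second thread, running concurrently with the first
    }
}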

Dynamic

Java is a dynamic programming language, which means many programming behaviors are resolved at runtime and do not need to be fixed at compile time, as is the case with static languages.

Distributed

Java supports distributed systems, which means we can access files and resources over the Internet just by calling methods.

Portable

A Java program, when compiled, produces bytecode. Bytecodes are magic: they can be transferred over a network and executed by any JVM, which is where the concept of ‘Write Once, Run Anywhere’ (WORA) comes from.

Java Concept

Robust

Java is a robust programming language, which means it can cope with errors while the program is executing and keep operating despite abnormalities to a certain extent. Automatic garbage collection, strong memory management, exception handling, and type checking further add to this robustness.
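
A minimal sketch of exception handling (the file name used here is hypothetical): instead of crashing, the program catches the error and keeps running:

import java.io.FileReader;
import java.io.IOException;

public class RobustDemo
{
    public static void main(String[] args)
    {
        try
        {
            // Attempt to open a file that may not exist (hypothetical name)
            FileReader reader = new FileReader("settings.conf");
            reader.close();
        }
        catch (IOException e)
        {
            // The error is handled here, so the program keeps operating
            System.out.println("Could not read the file: " + e.getMessage());
        }
        System.out.println("Program continues normally.");
    }
}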

Interpreted

Java is a compiled programming language in the sense that the Java compiler turns the program into Java bytecode. That bytecode is then interpreted by the JVM to run the program.

Other than the above discussed feature, there are a few other remarkable features, like:

Security

Unlike other programming languages, where the program interacts with the OS directly through the user runtime environment of the OS, Java provides an extra layer of security by putting the JVM between the program and the OS.

Java Security

Simple Syntax

Java’s syntax is an improved take on C++: friendly, with unwanted features removed and automatic garbage collection included.

High Level Programming Language

Java is a high-level programming language whose syntax is human readable. Java lets the programmer concentrate on what to achieve rather than how to achieve it. The JVM converts the Java program into machine-understandable instructions.

High Performance

Java makes use of a Just-In-Time (JIT) compiler for high performance. The JIT compiler turns Java bytecode into native instructions that can be sent directly to the processor.

History of Java

The Java programming language was created by James Gosling, along with two other people, ‘Mike Sheridan‘ and ‘Patrick Naughton‘, while they were working at Sun Microsystems. Initially it was named the Oak programming language.

Java Releases
  1. The initial Java versions 1.0 and 1.1 were released in the year 1996 for Linux, Solaris, Mac and Windows.
  2. Java version 1.2 (Commonly called as java 2) was released in the year 1998.
  3. Java Version 1.3 codename Kestrel was released in the year 2000.
  4. Java Version 1.4 codename Merlin was released in the year 2002.
  5. Java Version 1.5/Java SE 5 codename ‘Tiger’ was released in the year 2004.
  6. Java Version 1.6/Java SE 6 Codename ‘Mustang’ was released in the year 2006.
  7. Java Version 1.7/Java SE 7 Codename ‘Dolphin’ was released in the year 2011.
  8. Java Version 1.8/Java SE 8 is the current stable release, which was released in 2014.

Five Goals which were taken into consideration while developing Java:

  1. Keep it simple, familiar and object oriented.
  2. Keep it Robust and Secure.
  3. Keep it architecture-neutral and portable.
  4. Executable with High Performance.
  5. Interpreted, threaded and dynamic.

Why do we call them Java 2, Java 5, Java 6, Java 7 and Java 8, and not by their actual version numbers, which are 1.2, 1.5, 1.6, 1.7 and 1.8?

Java 1.0 and 1.1 were simply called Java. When Java 1.2 was released, it had a lot of changes, and the marketers/developers wanted a new name, so they called it Java 2 (J2SE), dropping the numeral before the decimal.

This was not the case when Java 1.3 and Java 1.4 were released, hence they were never called Java 3 and Java 4; they were still branded Java 2.

When Java 1.5 was released, it once again had a lot of changes, and the developers/marketers needed a new name. The next number in the sequence was 3, but calling Java 1.5 ‘Java 3’ would have been confusing, so the decision was made to name it after the minor version number (Java 5), and that legacy continues to this day.

Places where Java is used

Java is used in many places in the modern world. It is used for standalone applications, web applications, enterprise applications, and mobile applications, as well as games, smart cards, embedded systems, robotics, desktops, and more.

Stay connected; we are coming up with “Working and Code Structure of Java” next.

How Java Works and Understanding Code Structure of Java – Part 2

In our last post, ‘What is Java and a Brief History of Java‘, we covered what Java is, its features in detail, its release history and naming, as well as the places where Java is used.

Working of Java Understanding Java Code – Part 2

In this post we will go through the working and code structure of the Java programming language. Before we proceed, let me remind you that Java was developed with “Write Once, Run Anywhere/Anytime” (WORA) in mind, meaning that applications developed in it should be architecturally neutral, platform-independent, and portable.

Working of Java

With these goals in mind, Java was developed around the working model below, which can be classified into four stages.

Stage 1

Write the source file. This file contains all the procedures, methods, classes, and objects, following the established rules of the Java programming language. The name of the source file should match the name of the (public) class it contains, and the file must have the extension .java. Also, both the filename and the class name are case sensitive.

Stage 2

Run the Java source code file through the Java compiler. The compiler checks the source file for errors and syntax issues, and it won’t compile your source code until you satisfy it by fixing all errors and warnings.

Stage 3

The compiler creates a class file. The class file inherits the same name as the source code file, but the extension varies: the source file is named 'filename.java', whereas the class file created by the compiler is 'filename.class'. This class file is encoded as bytecode – bytecodes are like magic.

Stage 4

The class file created by the Java compiler is portable and architecturally neutral. You can port it to run on any processor architecture and platform/device. All you need is a Java Virtual Machine (JVM) to run the code, no matter where.

Now let’s understand the above four stages using an example. Here is a small sample Java program. Don’t worry if you don’t understand the code below; for now, just understand how it works.

public class MyFirstProgram
{
    public static void main(String[] args)
    {
        System.out.println("Hello Tecmint, This is my first Java Program");
    }
}

1. I wrote this program and named the class MyFirstProgram. It is important to notice that this program must be saved as 'MyFirstProgram.java'.

Remember stage 1 above – the class name and file name must be the same, and the filename must have the extension .java. Also, Java is case sensitive, hence if your class name is ‘MyFirstProgram‘, your source file name must be ‘MyFirstProgram.java‘.

You cannot name it ‘Myfirstprogram.java‘ or ‘myfirstprogram.java‘ or anything else. By convention, it is a good idea to name your class based on what the program actually does.

2. To compile this Java source file, you need to pass it through the Java compiler. The compiler will check the source code for errors and warnings, and it won’t compile the source code until all the issues are resolved. To compile the Java source code, run:

$ javac MyFirstProgram.java

Where MyFirstProgram.java is the name of the source file.

3. On successful compilation, you will notice that the Java compiler has created a new file in the same directory named MyFirstProgram.class.

This class file is encoded as bytecode and can be run on any platform and any processor architecture, any number of times. You may run the class file inside a JVM (Java Virtual Machine) on Linux or any other platform simply as:

$ java MyFirstProgram

So everything you learned above can be summarized as:

Java Source Code >> Compiler >> classfile/bytecode >> Various devices running JVM 

Understanding Code Structure in Java

1. A Java source code file must contain a class definition. One Java source file can contain only one public/top-level class, but it can contain many private/inner classes.

The outer/top-level/public class can access all of its private/inner classes. The class body must be within curly braces. Everything in Java is an object, and a class is a blueprint for objects.

A demo of public/private class in Java:

public class class0
{
...
	private class Class1
	{
	…
	}

	private class Class2
	{
	…
	}
...
}

2. A class contains one or more methods. Methods must go within the curly braces of the class. A dummy example:

public class class0
{
	public static void main(String[] args)
	{
	…..
	…..
	}
}

3. A method contains one or more statements/instructions. The instruction(s) must go within the curly braces of the method. A dummy example:

public class class0
{
	public static void main(String[] args)
	{
	System.out.println("Hello Tecmint, This is my first Java Program");
	System.out.println("I am Loving Java");
	…
	...
	}
}

Also important to mention at this point – every statement must end with a semicolon. A dummy example:

System.out.println("Hello Tecmint, This is my first Java Program");
...
...
System.out.println("I am Loving Java");

Now let’s write your first Java program with a detailed description. The description is given as comments here (// means the rest of the line is a comment). You should write comments within your programs.

Not only is this a good habit, it also makes the code readable by you or anyone else at any later time.

// Declare a Public class and name it anything but remember the class name and file name must be same, say class name is MyProg and hence file name must be MyProg.java
public class MyProg

// Remember everything goes into curly braces of class?
{
 

// This is a method which is inside the curly braces of class.
   public static void main(String[] args)

    // Everything inside a method goes into curly braces	
    {
        
    // Statement or Instruction inside method. Note it ends with a semicolon
    System.out.println("I didn't knew JAVA was so much fun filled");
    
    // closing braces of method
    }

// closing braces of class
}

A detailed technical description of the above simple Java Program.

public class MyProg

Here, the name of the class is MyProg, and MyProg is a public class, which means everyone can access it.

public static void main(String[] args)

Here the method name is main, which is a public method, meaning it can be accessed by anyone. The return type is void, which means it returns no value. 'String[] args' means the argument to the main method is an array of strings called args. Don’t worry about the meaning of ‘static‘ for now; we will describe it in detail when required.

System.out.println("I didn't knew JAVA was so much fun filled");

System.out.println asks the JVM to print the output to standard output, which is the Linux command line in our case. Anything between the parentheses of the println statement gets printed as is, unless it is a variable. We will go into the details of variables later. The statement ends with a semicolon.

Even if something is not clear now, you need not worry about it. Also, you don’t need to memorize anything. Just go through the post and understand the terminology and the workings, even if the picture is not yet fully clear.

That’s all for now. Provide us with your valuable feedback in the comments below. We are working on the next part, “Class and Main Method in Java”, and will publish it soon.

Understanding Java Class, Main Method and Loops Control in Java – Part 3

In our last post, ‘Working and Code Structure of Java‘, we covered in detail the working of Java, the Java source file, the Java class file, classes (public/private), methods, statements, your first Java program, and compiling and running a Java program.

In this part of the Java learning series, we will understand how Java classes, the main method, and loop control work, and we will also see basic code using a Java class with a main method and loop control.

Understanding Java Class Method and Loops Control – Part 3

Everything in Java goes in a class

Everything in Java is an object, and a class is a blueprint for objects. Every piece of code in Java is placed within the curly braces of a class. When you compile a Java program, it produces a class file. When you run a Java program, you are not actually running the source file but the class file.

When you run a program, the Java Virtual Machine (JVM) loads the required class and then goes directly to the main() method. The program continues to run till the closing brace of the main() method. The class you run must have a main() method, but not every class (for example, a private helper class) requires one.

What goes inside main () Method?

The main() method is the place where the magic starts. You can ask the JVM to do anything within the main() method via statements/instructions and loops.

What is loop?

A loop is an instruction, or a sequence of instructions, that keeps repeating until a condition is reached. Loops are a logical structure of a programming language, typically used to do a piece of work, check the condition, do the work again, check the condition again, and so on until the condition requirements are met.

Loops in Java

There are three different loop mechanisms in Java.

1. while Loop

The while loop in Java is a control structure used to perform a task repeatedly, as defined by a boolean expression, for as long as the expression evaluates to true. If the boolean expression evaluates to false at the start, the while loop is skipped completely without being executed even a single time.

Syntax of while loop:

while (boolean expression)
{
	statement/instructions
}

An example of while Loop in Java:

public class While_loop
{
    public static void main(String[] args)
    {
        int A = 100;
        while(A>0)
        {
            System.out.println("The Value of A = " +A);
            A=A-10;
        }
    }
}
Sample Output
$ java While_loop 

The Value of A = 100
The Value of A = 90
The Value of A = 80
The Value of A = 70
The Value of A = 60
The Value of A = 50
The Value of A = 40
The Value of A = 30
The Value of A = 20
The Value of A = 10

Anatomy of While_loop Program

// Public Class While_loop
public class While_loop
{
    // main () Method
    public static void main(String[] args)
    {
        // declare an integer variable named 'A' and give it the value of 100
        int A = 100;
        // Keep looping as long as the value of A is greater than 0. 'A>0' here is the boolean expression
        while(A>0)
        {
            // Statement
            System.out.println("The Value of A = " +A);
            // Decrease A by 10
            A=A-10;
        }
    }
}
2. do..while Loop

The do…while loop is very similar to the while loop, except that it contains a do block before the while, which ensures the loop executes at least once.

Syntax of do…while loop:

do 
{
statement/instructions
}
while (boolean expression);

The above syntax clearly shows that the 'do' part of the loop is executed before the boolean expression is checked. Hence, no matter what the result (true/false) of the boolean expression, the loop body executes at least once. If the expression is true, the loop keeps executing until the condition is no longer satisfied; if it is false, the body is executed only once.
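
To make the "at least once" behavior concrete, here is a minimal sketch (the class name do_while_once is only illustrative) where the boolean expression is false from the very first check, yet the body still runs one time:

public class do_while_once
{
    public static void main(String[] args)
    {
        int A = 10;
        do
        {
            // This prints once, even though A > 100 is false on the very first check
            System.out.println("Value of A = " + A);
            A = A - 10;
        }
        while (A > 100);
    }
}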

An Example of do…while Loop in Java:

public class do_while
{
    public static void main(String[] args)
    {
        int A=100;
        do
        {
            System.out.println("Value of A = " +A);
            A=A-10;
        }
        while (A>=50);
    }
}
Sample Output
$ java do_while 

Value of A = 100
Value of A = 90
Value of A = 80
Value of A = 70
Value of A = 60
Value of A = 50

Anatomy of do_while Program:

// public class do_while
public class do_while
{
    // main () Method
    public static void main(String[] args)
    {
        // Declare an Integer Variable 'A' and assign it a value = 100
        int A=100;
        // do...while loop starts
        do
        {
            // Execute the below statement without first checking whether the boolean expression is true or false
            System.out.println("Value of A = " +A);
            // Decrease A by 10
            A=A-10;
        }
        // Check the condition. The loop keeps executing only while the value of Variable A is greater than or equal to 50.
        while (A>=50);
    }
}
3. for Loop

The for loop in Java is widely used for repetition control. It is used to iterate over a task a specific number of times, and it controls how many times the loop needs to execute to perform that task. A for loop is most useful when you know in advance how many times you need to execute the loop.

Syntax of for loop:

for (initialization; boolean-expression; update)
{
statement
}

An example of the for loop in Java

public class for_loop
{
    public static void main(String[] args)
    {
        int A;
        for (A=100; A>=0; A=A-7)
        {
            System.out.println("Value of A = " +A);
        }
    }
}
Sample Output
$ java for_loop 

Value of A = 100
Value of A = 93
Value of A = 86
Value of A = 79
Value of A = 72
Value of A = 65
Value of A = 58
Value of A = 51
Value of A = 44
Value of A = 37
Value of A = 30
Value of A = 23
Value of A = 16
Value of A = 9
Value of A = 2

Anatomy of for_loop Program:

// public class for_loop
public class for_loop
{
    // main () Method
    public static void main(String[] args)
    {
        // Declare an Integer Variable A
        int A;
        // for loop starts. Here the initialization is A=100, the boolean expression is A>=0 and the update is A=A-7
        for (A=100; A>=0; A=A-7)
        {
            // Statement
            System.out.println("Value of A = " +A);
        }
    }
}

The Break and Continue keywords for loops in Java

1. The Break Keyword

As the name suggests, the break keyword is used to stop the entire loop immediately. The break keyword must always be used inside a loop or switch statement. Once the loop breaks using break;, the JVM starts executing the very next line of code after the loop. An example of break in a Java loop is:

public class break_loop
{
    public static void main(String[] args)
    {
        int A = 100;
        while(A>0)
        {
            System.out.println("The Value of A = " +A);
            A=A-10;
            if (A == 40)
            {
                break;
            }
        }
    }
}
Sample Output
$ java break_loop

The Value of A = 100
The Value of A = 90
The Value of A = 80
The Value of A = 70
The Value of A = 60
The Value of A = 50
2. The Continue Keyword

The continue keyword can be used with any loop in Java. The continue keyword asks the loop to jump to the next iteration immediately. However, it is interpreted differently by the for loop and by the while/do…while loops.

1. The continue keyword in a for loop jumps to the update statement.

An example of continue in for loop:

public class continue_for_loop
{
    public static void main(String[] args)
    {
        int A;
        for (A=10; A>=0; A=A-1)
        {
	    if (A == 2)
		{
	        continue;
		}
            System.out.println("Value of A = " +A);
        }
    }
}
Sample Output
$ java continue_for_loop

Value of A = 10
Value of A = 9
Value of A = 8
Value of A = 7
Value of A = 6
Value of A = 5
Value of A = 4
Value of A = 3
Value of A = 1
Value of A = 0

Did you notice that it skipped Value of A = 2? It does so by jumping to the next update statement.

2. Continue Keyword in while loop or do…while loop jumps to the boolean expression.

You can try this yourself – it is easy enough if you follow the steps above. A possible sketch is shown below.
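
If you want to check your own attempt, here is one minimal sketch (the class name continue_while_loop and the starting value are only illustrative). Note that the decrement is placed before the continue, so the loop cannot skip the update and get stuck:

public class continue_while_loop
{
    public static void main(String[] args)
    {
        int A = 11;
        while (A > 0)
        {
            // Decrement first so that continue cannot skip it and cause an infinite loop
            A = A - 1;
            if (A == 2)
            {
                // Jump straight back to the boolean expression, skipping the print below
                continue;
            }
            System.out.println("Value of A = " + A);
        }
    }
}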

That’s all for now from my side.

Understanding Java Compiler and Java Virtual Machine – Part 4

Till now we have gone through the working and code structure of Java, and the class, main method and loop controls in Java. Here in this post we will see what the Java Compiler and the Java Virtual Machine are, what they are meant for, and what their roles are.

Understanding Java Compiler and Java Virtual Machine

Understanding Java Compiler and Java Virtual Machine – Part 4

What is Java Compiler

Java is a strongly typed language, which means a variable must hold the right kind of data; it cannot hold a value of the wrong data type. This is a safety feature that is very well implemented in the Java programming language.

The Java compiler is responsible for thoroughly checking variables for any violation of the data types they hold. A few exceptions may arise at run time, which is necessary for Java’s dynamic binding feature. As a Java program runs it may include new objects that did not exist before, so to allow some degree of flexibility a few exceptions are permitted in the data types a variable can hold.
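
A minimal sketch of what this checking looks like in practice (the class name TypeCheck is only illustrative); uncommenting the marked line makes the compiler reject the program with an error along the lines of “incompatible types: String cannot be converted to int”:

public class TypeCheck
{
    public static void main(String[] args)
    {
        int number = 10;          // fine: an int variable holds an int value
        // number = "tecmint";    // uncommenting this line stops compilation:
        //                        // incompatible types: String cannot be converted to int
        System.out.println(number);
    }
}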

The Java compiler filters out any piece of code that will never compile, except for comments. The compiler does not parse comments and leaves them as they are. Java supports three kinds of comments within a program.

1. /* COMMENT HERE */
2. /** DOCUMENTATION COMMENT HERE */
3. // COMMENT HERE

Anything that is placed between /* and */ or /** and */ or after // is ignored by Java Compiler.

The Java compiler is responsible for strictly checking for any syntax violation. The Java compiler is designed as a bytecode compiler, i.e., it creates a class file, written purely in bytecode, out of the actual program file.

The Java compiler is the first stage of security. It is the first line of defense, where variables are checked for incorrect data types. A wrong data type can cause damage to the program and outside of it. The compiler also checks whether any piece of code is trying to invoke a restricted piece of code, such as a private class. It restricts unauthorized access to code, classes and critical data.

The Java compiler produces bytecode/class files that are platform and architecture neutral; they require a JVM to run, and they will then run on practically any device, platform or architecture.

What is Java Virtual Machine (JVM)

The Java Virtual Machine is the next line of security, putting an extra layer between the Java application and the OS. It also checks the class file that has been security checked and compiled by the Java compiler, in case someone has tampered with the class file/bytecode, restricting access to unauthorized critical data.

The Java Virtual Machine loads the class file and interprets the bytecode into machine language.

The JVM is responsible for functions like load and store, arithmetic calculation, type conversion, object creation, object manipulation, control transfer, throwing exceptions, etc.

In the working model of Java, the Java compiler compiles the code into class files/bytecode, and then the Java Virtual Machine runs that class file/bytecode. This model ensures that code runs fast, while the additional layer ensures security.

So what do you think – does the Java Compiler or the Java Virtual Machine perform the more important task? A Java program essentially has to pass through both (compiler and JVM).

This post sums up the roles of the Java Compiler and the JVM. All your suggestions are welcome in the comments below. We are working on the next post, “Object Oriented Approach of Java”. Till then stay tuned and connected to TecMint. Like and share us and help us spread.

Object Oriented Approach of Java Programming and Encapsulation – Part 5

Since the beginning of this series (and even before that) you have known that Java is an object oriented programming language. An object oriented programming language is based upon the concept of “objects”, which contain data as attributes and code as methods.

Object Oriented Approach of Java

Object Oriented Approach of Java – Part 5

Every object in Java has state and behavior, which are represented by instance variables and methods. Each instance of a class can have unique values for its instance variables.

For example,

Machine A may be running Debian with 8GB of RAM, while Machine B may have Gentoo installed with 4GB of RAM. It is also obvious that managing the machine with Gentoo installed requires more knowledge – a behavior acting on its state. Here the method is using the instance variable values.

The JVM loads a class, and objects of that kind can then be created from it. When you write a class, you are in effect telling your class what its objects should know and how they should act. Every object of a particular type can have a different value for the same instance variable.

Every instance of a class has the same methods, but it is possible for each of them to behave differently.

The OS class has 3 instance variables, namely OS_Name, OS_Type and OS_Category.

OS
OS_Name
OS_Type
OS_Category
Boot()
Reboot()
scan()

The Boot() method boots one OS, which is represented by OS_Name for that instance. So if you call Boot() on one instance you will boot into Debian, while on another instance you will boot into Gentoo. The method code remains the same in either case.

void Boot() 
	{
	bootloader.bootos(OS_Name);
	}
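
Here is a minimal runnable sketch of the same idea (since no real bootloader API exists here, the Boot() body simply prints a message; all names follow the OS class listing above):

public class OS
{
    // Instance variables: every OS object carries its own values for these
    String OS_Name;
    String OS_Type;
    String OS_Category;

    void Boot()
    {
        // Same method code for every instance, different behavior per OS_Name
        System.out.println("Booting " + OS_Name);
    }

    public static void main(String[] args)
    {
        OS machineA = new OS();
        machineA.OS_Name = "Debian";

        OS machineB = new OS();
        machineB.OS_Name = "Gentoo";

        machineA.Boot();   // prints: Booting Debian
        machineB.Boot();   // prints: Booting Gentoo
    }
}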

You are already aware that the program starts executing just after the main() method. You can pass values into your methods.

For example, you might want to tell your OS what services to start at boot:

OS.services(apache2);

What you pass into methods are called arguments. Inside a method, each parameter is a variable with a type and a name. If a method takes a parameter, it is important to pass it a value of a matching type.

OS deb = debian();
deb.reboot(600);

Here the reboot method on the OS object is passed the value 600 (reboot the machine after 600 seconds) as an argument. Till now we have seen methods always returning void, which means they don’t return anything, simply as:

void main()
	{
	…
	…
	}

However, you can ask a method to return exactly the type you are after, and the compiler won’t let it return the wrong type. You may simply do it like this:

int Integer()
	{
	…
	…
	return 70;
	}

You can send more than one value to a method. You do this by declaring the method with two parameters and calling it with two arguments. Note that the argument types and the parameter types must always match.

void numbers(int a, int b)
	{
	int c = a + b;
	System.out.print("sum is " +c);
	}
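
To tie these ideas together, here is a minimal runnable sketch (the class name Calculator and the method name sum are only illustrative) of a method that takes two parameters and returns a value, called from main():

public class Calculator
{
    // Takes two int parameters and returns their sum instead of printing it
    static int sum(int a, int b)
    {
        return a + b;
    }

    public static void main(String[] args)
    {
        // The arguments 40 and 30 must match the parameter types (int, int)
        int result = sum(40, 30);
        System.out.println("sum is " + result);
    }
}
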
Declare and Initialize Instance Variables

1. When you don’t know the value to initialize.

int a;
float b;
String c;

2. When you know the value to initialize.

int a = 12;
float b = 11.23f;
String c = "tecmint";

Note: Instance variables are often confused with local variables; however, there is a very thin line differentiating them.

3. Instance Variables are declared inside a class unlike local variables that are declared within a method.

4. Unlike instance variables, local variables must be initialized before they can be used. The compiler will report an error if you use a local variable before it is initialized, as the sketch below illustrates.
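
A minimal sketch of the difference (class and variable names are only illustrative); the commented-out line is the one the compiler would reject:

public class VariableScope
{
    // Instance variable: declared inside the class, gets a default value (0 for int)
    int instanceVar;

    void show()
    {
        // Local variable: declared inside a method, has no default value
        int localVar;
        // System.out.println(localVar);  // compile error: variable localVar might not have been initialized
        localVar = 5;                     // must be assigned before use
        System.out.println(instanceVar + " " + localVar);
    }

    public static void main(String[] args)
    {
        new VariableScope().show();       // prints: 0 5
    }
}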

Encapsulation

You might have heard about encapsulation. It is a feature of most object oriented programming languages which makes it possible to bind data and functions into a single component. Encapsulation is supported through classes and protects code from accidental damage by creating a wall around objects, selectively hiding their properties and methods.

We will expand on encapsulation in detail in a later tutorial when it is required. For now it is sufficient for you to know what encapsulation is, what it does, and roughly how it does it.
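
As a first taste, here is a minimal sketch of encapsulation (the class and field names are only illustrative): the field is hidden behind private and is only reachable through public methods.

public class OSInfo
{
    // The state is private, so outside code cannot modify it directly
    private String osName;

    // Selective access: a public method controls how the state is read...
    public String getOsName()
    {
        return osName;
    }

    // ...and another controls how it is changed
    public void setOsName(String name)
    {
        osName = name;
    }
}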

That’s all for now.

Day to Day: Learning Java Programming Language – Part I

In 1995, when the C++ programming language was widely used, an employee of Sun Microsystems working on a platform called ‘Green‘ developed a programming language and named it ‘Oak‘.

The name was inspired by an oak tree which he used to see outside his office window. Later the name Oak was replaced by Java.

Java Programming language was developed by James Gosling and hence James Gosling has been honoured as the Father of Java Programming Language.

James Gosling - Father of Java Programming

James Gosling – Father of Java Programming

Now the question is: if such a capable programming language (C++) was already available, why did Mr. Gosling and his team need a different programming language?

Java was intended with the Features:
  1. Write once, run anywhere
  2. Cross Platform Program Development i.e., Architecturally Neutral
  3. Security
  4. Class based
  5. Object oriented
  6. Support for web technologies
  7. Robust
  8. Interpreted
  9. Inheritance
  10. Threaded
  11. Dynamic
  12. High Performance

Before Java was developed, a program written on one computer or for one architecture would not run on another computer or architecture. Hence, while developing Java, the team focused mainly on cross-platform functionality, and from there came the concept of write once, run anywhere, which remained Sun Microsystems’ slogan for a long time.

A Java program runs inside the JVM (Java Virtual Machine), which adds an extra layer between the system and the program, which in turn means extra security. Programming languages prior to Java lacked such a feature, which meant that a malicious piece of code being run could infect the system or other systems attached to it; Java managed to overcome this issue using the JVM.

Java is an OOP (Object Oriented Programming) language. Being object oriented means that every entity is an object, which in turn models a real-world object.

When Java was being developed at Sun, web technologies had coincidentally started to take shape, and Java’s development was highly influenced by this; even today the web world uses Java more than almost any other language. Java is often described as an interpreted language: the source code is translated into an intermediate form (bytecode), which is then executed.

Java is robust in nature, i.e., it can cope with errors, be they in input or calculation. When we say Java is a dynamic programming language, we mean that classes are loaded and linked as they are needed at run time, rather than everything being fixed at compile time.
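
To illustrate the “robust” point, here is a minimal sketch (the class name robust_demo is only illustrative) showing how a run-time error such as division by zero can be caught and handled instead of crashing the whole program:

class robust_demo
{
    public static void main(String args[])
    {
        int a = 10, b = 0;
        try
        {
            int result = a / b;            // throws ArithmeticException at run time
            System.out.println("Result: " + result);
        }
        catch (ArithmeticException e)
        {
            System.out.println("Caught an error: " + e.getMessage());
        }
    }
}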

Java supports threading. Threads are small processes that can be managed independently by the operating system scheduler.

Java supports inheritance, which means relationships can be established between classes.

No doubt, Java was developed as a successor to the ‘C‘ and ‘C++‘ programming languages, hence it inherits a number of features from its predecessors, viz. C and C++, along with a number of new features.

Learning Java from a career point of view is highly worthwhile, as it is one of the most sought-after technologies. The best way to learn any programming language is to start programming.

Before we get to programming, one more thing we need to know: the class name and the program (file) name should be the same. They can be different under certain conditions, but by convention it is always a good idea to name the file after its class.

javac is the compiler of the Java programming language. Obviously you should have Java installed and the environment variables set. Installing Java on an RPM-based system is just a click away, as on Windows, and more or less the same on Debian-based systems.

However, Debian Wheezy doesn’t ship this Java release in its repositories, and it is a little messy to install Java in Wheezy. Hence the quick steps to install it on Debian are as below:

Installing Java in Debian Wheezy

Download correct Java version for your System and architecture from here:

  1. http://www.oracle.com/technetwork/java/javase/downloads/index.html

Once you’ve downloaded , use the following commands to install in Debian Wheezy.

# mv /home/user_name/Downloads/jdk-7u3-linux-x64.tar.gz /opt/
# cd /opt/
# tar -zxvf jdk-7u3-linux-x64.tar.gz
# rm -rf jdk-7u3-linux-x64.tar.gz
# cd jdk1.7.0_03
# update-alternatives --install /usr/bin/java java /opt/jdk1.7.0_03/bin/java 1
# update-alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_03/bin/javac 1
# update-alternatives --install /usr/lib/mozilla/plugins/libjavaplugin.so mozilla-javaplugin.so /opt/jdk1.7.0_03/jre/lib/amd64/libnpjp2.so 1
# update-alternatives --set java /opt/jdk1.7.0_03/bin/java
# update-alternatives --set javac /opt/jdk1.7.0_03/bin/javac
# update-alternatives --set mozilla-javaplugin.so /opt/jdk1.7.0_03/jre/lib/amd64/libnpjp2.so

RHEL, CentOS and Fedora users can also install the latest version of Java by following the guide below.

  1. Install Java in RHEL, CentOS and Fedora

Let’s move to programming section to learn few basic Java programs.

Program 1: hello.java

class hello{
public static void main (String args[]){
System.out.println("Success!");
}
}

Save it as hello.java, then compile and run it as shown.

# javac hello.java
# java hello

Sample Output

Success!

Program 2: calculation.java

class calculation { 
public static void main(String args[]) { 
int num; 
num = 123;
System.out.println("This is num: " + num); 
num = num * 2; 
System.out.print("The value of num * 2 is "); 
System.out.println(num); 
} 
}

Save it as calculation.java, then compile and run it as shown.

# javac calculation.java
# java calculation

Sample Output

This is num: 123
The value of num * 2 is 246

Do it Yourself:

  1. Write a program that asks for your first name and last name and then addresses you by your last name.
  2. Write a program that takes three integer values, performs addition, subtraction, multiplication and division on them, and prints the output.

Note: This way of learning will make you know and learn something. However, if you face problems writing the ‘Do it Yourself‘ programs, you can share your code and problems in the comments.
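
If you get stuck on the first exercise, here is one possible sketch (it assumes java.util.Scanner for reading input; the class and variable names are only illustrative):

import java.util.Scanner;

class greet
{
    public static void main(String args[])
    {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter your first name: ");
        String firstName = input.nextLine();
        System.out.print("Enter your last name: ");
        String lastName = input.nextLine();
        // Address the user by last name
        System.out.println("Hello, Mr./Ms. " + lastName);
        input.close();
    }
}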

Day to Day: Learning Java Programming Language – Part 2

Moving a step ahead of the previous article on Day-to-Day Java Programming Part – I, here in this post we will be learning control statements and loops in Java, which are very useful in developing an application.

Learning java Programming

Learning java Programming Part – 2

if statement

The if statement in Java works similarly to the if statement in any other programming language, including shell scripting.

Program 3: compare.java

class compare{ 
public static void main(String args[]){ 
int a,b; 
a=10; 
b=20; 
if(a < b)  
System.out.println("a(" +a+ ")is less than b(" +b+")");  
a=a*2;  
if(a==b)  
System.out.println("a(" +a+ ")is equal to b(" +b+")");  
a=a*2;  
if(a>b) 
System.out.println("a(" +a+ ")is greater than b(" +b+")"); 
} 
}

Save it as compare.java, then compile and run it as shown.

# javac compare.java
# java compare

Sample Output

a(10)is less than b(20) 
a(20)is equal to b(20) 
a(40)is greater than b(20)

Note: In the above program

  1. A class namely compare is defined.
  2. Two Integers are declared with the initial value of 10 and 20 respectively.
  3. The if statement checks the condition and acts according to it. The syntax of the if statement is if (condition) statement;
  4. System.out.println prints anything and everything that is placed between the double quotes. Anything within the quotes is printed as it is, while anything outside them is treated as a variable.
  5. + is the concatenation operator, which is used to join two parts of a statement.
for loop

If you have any programming experience, sure you would be aware of the importance of loop statements. Here again the for loop statement works similar to the for statement in any language.

Program 4: forloop.java

class forloop{ 
public static void main(String args[]){ 
int q1; 
for (q1=0; q1<=10; q1++) 
System.out.println("The value of integer: "+q1); 
} 
}

Save it as forloop.java, then compile and run it as shown.

# javac forloop.java
# java forloop

Sample Output

The value of integer: 0 
The value of integer: 1 
The value of integer: 2 
The value of integer: 3 
The value of integer: 4 
The value of integer: 5 
The value of integer: 6 
The value of integer: 7 
The value of integer: 8 
The value of integer: 9 
The value of integer: 10

Note: In the above program all the statements and code are more or less identical to the previous program, except for the for statement.

  1. The above for statement is a loop, which continues to execute again and again till the condition is satisfied.
  2. The for loop is generally divided into three chunks of code separated by semicolons, each of which is meaningful.
  3. The first part (q1=0, in the above program) is called the initialiser, i.e., the integer q1 is forced to start at ‘0‘.
  4. The second part (q1<=10, in the above program) is called the condition, i.e., the integer is permitted to go up to the value of 10, or anything less than 10, whichever applies in the given situation.
  5. The third and last part (q1++, in the above code, which may also be written as q1=q1+1) is called the iteration, i.e., the integer value is increased by ‘+1‘ every time the loop executes, till the condition is no longer satisfied.

The above program has only one statement linked to the ‘for loop‘. But in larger and more sophisticated programs, the loop statement can be linked to more than one statement, or a block of code.

Program 5: loopblock.java

class loopblock{
	public static void main(String args[]){
		int x, y=20;
		for(x=0;x<20;x=x+2)
		{
		System.out.println("x is: "+x);
		System.out.println("y is: "+y);
		y=y-2;
		}
	}
}

Save it as loopblock.java, then compile and run it as shown.

# javac loopblock.java
# java loopblock

Sample Output

x is: 0 
y is: 20 
x is: 2 
y is: 18 
x is: 4 
y is: 16 
x is: 6 
y is: 14 
x is: 8 
y is: 12 
x is: 10 
y is: 10 
x is: 12 
y is: 8 
x is: 14 
y is: 6 
x is: 16 
y is: 4 
x is: 18 
y is: 2

Note: The above program is almost the same as the previous one, except that it uses a block of code linked with the for loop. To link more than one statement to the loop, we need to wrap all the statements in braces as “{….code/block..}”; otherwise only the first statement is treated as part of the loop.

Yes, we can use ‘x--‘ or ‘x=x-1‘ as the decrease statement in a for loop where required, as in the short sketch below.
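
For instance, a minimal sketch of a decreasing for loop (the class name countdown is only illustrative):

class countdown{ 
public static void main(String args[]){ 
// x-- decreases x by 1 on every pass, so the loop counts down from 5 to 0
for (int x=5; x>=0; x--) 
System.out.println("The value of integer: "+x); 
} 
}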

After getting a glimpse of quite a lot of code, we need to know a little theory, which will be helpful in the later stages of coding.

What we have seen till now is: Java programs are a collection of whitespace, identifiers, comments, literals, operators, separators and keywords.

Whitespace

Java is a free-form language; you need not follow any indentation rule. You could write all the code on a single line with one whitespace between each token and it would still execute correctly. However, it would be difficult to understand.

Identifiers

In Java, identifiers are class names, method names or variable names. They can be uppercase, lowercase, a sequence of either, or a combination of both, together with the special characters ‘$‘ and ‘_‘. However, identifiers must never start with a digit.

Examples of valid identifiers in Java:

s4, New_class, TECmint_class, $tecmint, etc.
Literals

A constant value in Java is created using literals. E.g., ‘115′ is an integer literal, ‘3.14‘ is a floating-point literal, ‘X‘ is a character literal and “tecmint is the best online site dedicated to foss technology” is a string literal.
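
These literal kinds can be written directly in a small program, for example (the class name literal_demo is only illustrative):

class literal_demo{ 
public static void main(String args[]){ 
int count = 115;                 // integer literal 
float pi = 3.14f;                // float literal (the trailing f marks it as float; 3.14 alone is a double literal) 
char letter = 'X';               // character literal 
String site = "tecmint is the best online site dedicated to foss technology"; // string literal 
System.out.println(count + " " + pi + " " + letter + " " + site); 
} 
}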

Comment

A comment has nothing to do with the execution of code in Java or any other language; however, comments in between the code make it readable and understandable for humans. It is a good practice to write comments between the lines of code, where required.

In Java, anything between /** and */ is meant for documentation and is a comment, as in the sketch below.
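
A minimal sketch showing all three comment styles in one program (the class name comment_demo is only illustrative):

/**
 * Documentation comment: the javadoc tool can turn comments like this into HTML documentation.
 */
class comment_demo{ 
public static void main(String args[]){ 
/* A multi-line comment: ignored by the compiler */ 
// A single-line comment: also ignored 
System.out.println("Comments do not affect execution"); 
} 
}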

Separators

Certain separators are defined in Java:

  1. Parenthesis ()
  2. Braces {}
  3. Brackets []
  4. Semicolon ;
  5. comma ,
  6. Period .

Note: Each separator has a meaning and needs to be used where required; you can’t use one in place of another. We will discuss them in detail later, alongside the code itself.

Keywords

There are 50 reserved keywords defined in Java. These keywords cannot be used as names for a variable, class or method, as they have a predefined meaning.

abstract	continue	for	          new	        switch
assert	        default	        goto	          package	synchronized
boolean	        do	        if	          private	this
break   	double	        implements	  protected	throw
byte	        else	        import	          public	throws
case	        enum	        instanceof	  return	transient
catch	        extends	        int	          short	        try
char	        final	        interface	  static	void
class	        finally	        long	          strictfp	volatile
const	        float	        native	          super	        while

The keywords const and goto are reserved but not used. Feeling nervous with all this stuff? You actually don’t need to be, nor do you need to memorise it all. You will get used to it once you start living Java.

That’s all for now from me.


HTML5 Mobile Web Development

Installing Netbeans and Java JDK in Ubuntu and Setting Up a Basic HTML5 Project

In this 4-article mobile web development series, we will walk you through setting up Netbeans as an IDE (also known as Integrated Development Environment) in Ubuntu 14.04.2 LTS Trusty Tahr to start developing mobile-friendly and responsive HTML5 web applications.

HTML5 Mobile Web Development

HTML5 Mobile Web Development – Part 1

Following are the 4-article series about HTML5 Mobile Web Development:

Part 1: Installing Netbeans and Java JDK in Ubuntu 14.04 and Setting Up a Basic HTML5 Project

A well-polished work environment (as we will see later), autocompletion for supported languages, and its seamless integration with web browsers are, in our opinion, some of Netbeans’ most distinguishing features.

Let us also remember that the HTML5 specification brought many advantages for developers – to name a few examples: cleaner code (thanks to many new elements), built-in video and audio playback capabilities (which replace the need for Flash), cross-compatibility with major browsers, and optimization for mobile devices.

Although we will initially test our applications on our local development machine, we will eventually move our web site to a LAMP server and turn it into a dynamic tool.

Along the way we will make use of jQuery (a well-known cross-platform Javascript library that greatly simplifies client-side scripting), and of Bootstrap (the popular HTML, CSS, and JavaScript framework for developing responsive websites). You will see in coming articles how easy it is to set up a mobile-friendly application using these HTML 5 tools.

After you go through this brief series, you will be able to:

  1. use the tools described herein to create basic HTML5 dynamic applications, and
  2. go on to learn more advanced web development skills.

However, please note that even though we will be using Ubuntu for this series, the instructions and procedures are perfectly valid for other desktop distributions as well (Linux Mint, Debian, CentOS, Fedora, you name it).

To that end, we have chosen to install the necessary software (Netbeans and the Java JDK, as you will see in a minute) using a generic tarball (.tar.gz) as installation method.

That being said – let’s get started with Part 1.

Installing Java JDK and NetBeans

This tutorial assumes that you already have an Ubuntu 14.04.2 LTS Trusty Tahr desktop installation in place. If you don’t, please refer to Ubuntu 14.04 Desktop Installation article, written by our colleague Matei Cezar before proceeding further.

Since the Netbeans version that is available for download from the Ubuntu official repositories (7.0.1) is a little outdated, we will download the package from the Oracle website to get a newer version (8.0.2).

To do this, you have two choices:

  1. Choice 1: download the bundle that includes Netbeans + JDK, or
  2. Choice 2: install both utilities separately.

In this article we will choose #2 because that not only means a download that is a bit smaller (as we will only install Netbeans with support for HTML5 and PHP), but also will allow us to have a standalone JDK installer should we need it for another setting that does not require Netbeans nor involve web development (mostly related to other Oracle products).

To download JDK 8u45, go to the Oracle Technology Network site and navigate to the Java → Java SE → Downloads section.

When you click on the image highlighted below, you will be asked to accept the license agreement and then you will be able to download the necessary JDK version (which in our case is the tarball for 64-bit machines). When prompted by your web browser, choose to save the file instead of opening it.

Download Java JDK

Download Java JDK

When the download is complete, go to ~/Downloads and extract the tarball to /usr/local/bin:

$ sudo tar xf jdk-8u45-linux-x64.tar.gz -C /usr/local/bin

Extract Java

Extract Java

To install Netbeans with support for HTML5 and PHP, go to https://netbeans.org/downloads/ and click Download as indicated in the following image:

Download NetBeans

Download NetBeans

This will cause your browser to either open the installation shell script or save it to your computer. Choose Save File, then OK:

Save NetBeans Shell Script

Save NetBeans Shell Script

Once done, turn the .sh into an executable file and then run the shell script with administrative privileges:

$ cd ~/Downloads
$ chmod 755 netbeans-8.0.2-php-linux.sh
$ sudo ./netbeans-8.0.2-php-linux.sh --javahome /usr/local/bin/jdk1.8.0_45

From then on, follow the on-screen instructions to complete the installation leaving the default values:

NetBeans IDE Installation

NetBeans IDE Installation

and wait for the installation to complete.

Adding References

That sure looks good, but we still haven’t told our index.html file to use any of those files. For the sake of simplicity, we will replace the contents of that file with a barebones html file first:

<!DOCTYPE html>
<html>
<head>
	<meta charset="utf-8">
	<title>jQuery and Bootstrap</title>
</head>
<body>
 
   <!-- // Your code will appear here. -->

</body>
</html>

Then, just drag and drop each file from the project navigator section into the code, inside the <head> section, as you can see in the following screencast. Make sure that the reference to jQuery appears before the reference to Bootstrap, because the latter depends on the former:

That’s it – you have added the references to both jQuery and Bootstrap, and can now start writing code.

Writing Your First Responsive Code

Let’s now add a navigation bar and place it at the top of our page. Feel free to include 4-5 links with dummy text and don’t link it to any document for the time being – just insert the following code snippet inside the body of the document.

Don’t forget to spend some time becoming acquainted with the auto-completion feature in Netbeans, which will show you the classes made available by Bootstrap as you start typing.

At the heart of the code snippet below is the Bootstrap container class, which is used to place content inside of a horizontal container which will automatically resize depending on the size of the screen where it is being viewed. Not less important is the container-fluid class, which will ensure that the content within will occupy the entire width of the screen.

<div class="container">
  	<nav class="navbar navbar-default">
    	<div class="container-fluid">
      	<div class="navbar-header">
        	<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
          	<span class="sr-only">Toggle navigation</span>
          	<span class="icon-bar"></span>
          	<span class="icon-bar"></span>
          	<span class="icon-bar"></span>
        	</button>
        	<a class="navbar-brand" href="#">Project name</a>
      	</div>
      	<div id="navbar" class="navbar-collapse collapse">
        	<ul class="nav navbar-nav">
          	<li class="active"><a href="#">Home</a></li>
          	<li><a href="#">About</a></li>
          	<li><a href="#">Contact</a></li>
          	<li class="dropdown">
            	<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Dropdown <span class="caret"></span></a>
            	<ul class="dropdown-menu">
              	<li><a href="#">Action</a></li>
              	<li><a href="#">Another action</a></li>
              	<li><a href="#">Something else here</a></li>
              	<li role="separator" class="divider"></li>
              	<li class="dropdown-header">Nav header</li>
              	<li><a href="#">Separated link</a></li>
              	<li><a href="#">One more separated link</a></li>
            	</ul>
          	</li>
        	</ul>
      	</div><!--/.nav-collapse -->
    	</div><!--/.container-fluid -->
  	</nav>
</div>

Another distinguishing feature of Bootstrap is that it eliminates the need for tables in HTML code. Instead, it uses a grid system to layout content and make it look properly both on large and small devices (from phones all the way to big desktop or laptop screens).

In Bootstrap’s grid system, the screen layout is divided in 12 columns:

Bootstrap Grid Layout

Bootstrap Grid Layout

A typical setup consists of using the 12-column layout divided into 3 groups of 4 columns each, as follows:

Bootstrap Column Layout

Bootstrap Column Layout

To indicate this fact in code, and in order to have it displayed that way starting in medium-size devices (such as laptops) and above, add the following code below the closing </nav> tag:

...
    </nav>
   	 <div class="row">
   	 	<div class="col-md-4">This is the text in GROUP 1</div>
   	 	<div class="col-md-4">This is the text in GROUP 2</div>
   	 	<div class="col-md-4">This is the text in GROUP 3</div>
   	 </div>
</div> <!--Closing tag of the container class -->

You have probably noticed that the column classes in the Bootstrap grid indicate the starting layout for a specific device size and above; md in this example stands for medium (which also covers lg, or large devices).

For smaller devices (sm and xs), the content divs get stacked and appear one above the other.

In the following screencast you can see how your page should look by now. Note that you can resize your browser’s window to simulate different screen sizes after launching the project using the Run project button as we learned in Part 1.

Summary

Congratulations! By now you have written a simple, yet functional, responsive page. Don’t forget to check the Bootstrap website in order to become more familiar with the almost limitless functionality of this framework.

As always, in case you have a question or comment, feel free to contact us using the form below.

Creating a Dynamic HTML5 Web Application and Deploying on Remote Web Server Using Filezilla

In the previous two articles of this series, we explained how to set up Netbeans in a Linux desktop distribution as an IDE to develop web applications. We then proceeded to add two core components, jQuery and Bootstrap, in order to make your pages mobile-friendly and responsive.

Create HTML5 Applications and Deploy to Web Server

Create HTML5 Applications and Deploy to Web Server – Part 3

  1. Install Netbeans and Java to Create a Basic HTML5 Application – Part 1
  2. Creating Mobile-Friendly and Responsive Web Application Using jQuery and Bootstrap – Part 2

As you will seldom deal with static content as a developer, we will now add dynamic functionality to the basic page that we set up in Part 2. To begin, let us list the prerequisites and address them before moving forward.

Prerequisites

In order to test a dynamic application in our development machine before deploying it to a LAMP server, we will need to install some packages. Since we are using a Ubuntu 14.04 desktop to write this series, we assume that your user account has already been added to the sudoers file and granted the necessary permissions.

Installing Packages and Configuring Access to the DB Server

Please note that during the installation you may be prompted to enter a password for the MySQL root user. Make sure you choose a strong password and then continue.

Ubuntu and derivatives (also for other Debian-based distributions):

$ sudo aptitude update && sudo aptitude install apache2 php5 php5-common php5-mysql mysql-client mysql-server filezilla

Fedora / CentOS / RHEL:

$ sudo yum update && sudo yum install httpd php php-common php-mysql mysql mysql-server filezilla

When the installation is complete, it is strongly recommended that you run mysql_secure_installation to, not surprisingly, secure your database server. You will be prompted for the following information:

  1. Change the root password? [Y/n]. If you already set a password for the MySQL root user, you can skip this step.
  2. Remove anonymous users? [Y/n] y.
  3. Disallow root login remotely? [Y/n] y (Since this is your local development environment, you will not need to connect to your DB server remotely).
  4. Remove test database and access to it? [Y/n] y
  5. Reload privilege tables now? [Y/n] y.

Creating a sample Database and Loading test Data

To create a sample database and load some test data, log on to your DB server:

$ sudo mysql -u root -p

You will be prompted to enter the password for the MySQL root user.

At the MySQL prompt, type

CREATE DATABASE tecmint_db;

and press Enter:

Create MySQL Database

Create MySQL Database

Now let’s create a table:

USE tecmint_db;
CREATE TABLE articles_tbl(
   Id INT NOT NULL AUTO_INCREMENT,
   Title VARCHAR(100) NOT NULL,
   Author VARCHAR(40) NOT NULL,
   SubmissionDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
   PRIMARY KEY ( Id )
);

Create Database Table

Create Database Table

and populate it with sample data:

INSERT INTO articles_tbl (Title, Author) VALUES ('Installing Filezilla in CentOS 7', 'Gabriel Canepa'), ('How to set up a LAMP server in Debian', 'Dave Null'), ('Enabling EPEL repository in CentOS 6', 'John Doe');

Populate Database Table

Populate Database Table

Adding symbolic links in the Web Server directory

Since Netbeans, by default, stores projects in the current user’s home directory, you will need to add symbolic links that point to that location. For example,

$ sudo ln -s /home/gabriel/NetBeansProjects/TecmintTest/public_html /var/www/html/TecmintTest

will add a soft link called TecmintTest that points to /home/gabriel/NetBeansProjects/TecmintTest/public_html.

For that reason, when you point your browser to http://localhost/TecmintTest/, you will actually see the application that we set up in Part 2:

HTML5 Application

HTML5 Application

Setting up a remote FTP and Web server

Since you can easily set up a FTP and Web server with the instructions provided in Part 9 – Install and Configure Secure FTP and Web Server of the RHCSA series in Tecmint, we will not repeat them here. Please refer to that guide before proceeding further.

Turning our application into a Dynamic One

You will probably think that we can’t do much with the sample data that we added to our database earlier, and you are right, but it will be enough to learn the basics of embedding PHP code and the results of queries to a MySQL DB in your HTML5 pages.

First off, we will need to change the extension of the main document of our application to .php instead of .html:

# mv /var/www/html/TecmintTest/index.html /var/www/html/TecmintTest/index.php

Then let’s open the project in Netbeans and start doing some modifications.

1. Add a folder to the project named includes where we will store backend php applications.

2. Create a file named dbconnection.php inside includes and insert the following code:

<?php
    $host = "localhost";
    $username = "root";
    $password = "MyFancyP4ssw0rd";
    $database = "tecmint_db";

    //Establish a connection with MySQL server
    $mysqli = new mysqli($host, $username, $password,$database);

    /* Check connection status. Exit in case of errors. */
    if (mysqli_connect_errno()) {
        printf("Connect failed: %s\n", mysqli_connect_error());
        exit();
    }
    $mysqli -> query("SET character_set_results = 'utf8', character_set_client = 'utf8', character_set_connection = 'utf8', character_set_database = 'utf8', character_set_server = 'utf8'");

    $records = array();
    $query = "SELECT Title, Author, SubmissionDate FROM articles_tbl;";
    $result = $mysqli->query($query) or die($mysqli->error);
    $data = array();

    while ( $row = $result->fetch_assoc() ){
        $data[] = json_encode($row);
    }
    echo json_encode( $data );
?>

as indicated in the following image:

Create Database Configuration File

Create Database Configuration File

This file will be used to connect to the database server, to query it, and to return the results of that query in a JSON-like string to be consumed by the frontend application with a slight modification.

Note that typically you would use separate files to perform each of these operations, but we chose to include all of that functionality in one file for sake of simplicity.

3. In index.php, add the following snippet just beneath the opening body tag. That is the jQuery way of calling an external PHP app when the web document is ready, or in other words, each time it loads:

<script>
    $(document).ready(function(){
        $.ajax({
            url: 'includes/dbconnection.php',
            datatype: 'json',
            type: 'POST',
            success: function(data){
                var output = $.parseJSON(data);
                for(var i =0;i < output.length;i++)
                {
                  var item = output[i];
                  $("#title").append("<br>"+item.Title);
                  $("#author").append("<br>"+item.Author);
                  $("#submissiondate").append("<br>"+item.SubmissionDate);
                }
            }}
        );
    });
</script>

Add jQuery Script

Add jQuery Script

4. Now, add a unique id (the same as used inside the for loop above) to each line in the div with class row at the bottom of index.php:

<div class="row">
    <div class="col-md-4" id="title" style="text-align: center"><strong>Titles</strong></div>
    <div class="col-md-4" id="author" style="text-align: center"><strong>Authors</strong></div>
    <div class="col-md-4" id="submissiondate" style="text-align: center"><strong>Published on:</strong></div>
</div>

If you now click Run Project, you should see this:

Working Web Application Preview

Working Web Application Preview

This is essentially the same information as was returned when we ran the query from our MySQL client prompt earlier.

Deploying to a LAMP server using Filezilla

Launch Filezilla from the Dash menu and enter the IP of the remote FTP server and your credentials. Then click Quickconnect to connect to the FTP server:

Upload Files To FTP Server

Deploying Application on Web Server

Navigate to /home/gabriel/NetBeansProjects/TecmintTest/public_html/, select its contents, right click on them and select Upload.

This, of course, assumes that the remote user indicated in Username has write permissions on the remote directory. When the upload is complete, point your browser to the desired location and you should see the same page as before (please note that we have not set up the MySQL database on the remote host, but you can easily do so following the steps from the beginning of this tutorial).

Web Application Preview

Web Application Preview

Summary

In this article we have added dynamic functionality to our web application using jQuery and a little JavaScript. You can refer to the official jQuery docs for more information, which will be very helpful if you decide to write more complex applications. Wrapping up, we have also deployed our application to a remote LAMP server using a FTP client.

We are excited to hear your opinion about this article – feel free to contact us using the form below.

Tuning Dynamic HTML5 Web Apps Using Open Source Online Tools

As I begin the last article in this series, it is my hope that you have been able to grasp the importance of HTML5 and mobile-friendly / responsive web development. Regardless of your desktop distribution of choice, Netbeans is a powerful IDE and, when used together with basic Linux command-line skills and the tools discussed in Part 3, can help you to create outstanding applications without much hassle.

Tuning Dynamic HTML5 Web Apps

Tuning Dynamic HTML5 Web Apps – Part 4

However, please note that we have only covered the basics of HTML 5 and web development in this series and assumed that you are somewhat familiar with HTML, but the WWW is full of great resources – some of them are FOSS – to expand on what we’ve shared here.

In this last guide we will talk about some of those tools and show you how to use them to add to the existing page we have been working on, beautifying our UI (user interface).

You will recall from Part 2 of this series (“Adding jQuery and Bootstrap to Write a HTML5 Web Application“) that the Bootstrap zip file comes with a directory named fonts. We saved its contents into a folder with the same name inside our project’s SiteRoot:

Bootstrap Fonts

Bootstrap Fonts

As you can probably tell from the above image, Bootstrap includes a set of elements called glyphicons, which are nothing more and nothing less than built-in components that provide nice-looking icons for buttons and menus in your applications. The complete list of glyphicons included in Bootstrap is available at http://getbootstrap.com/components/.

To illustrate the use of glyphicons, let’s add some to the navigation bar in our main page. Modify the navigation bar menus as follows. Please note the space between each closing span tag and the menu text:

<li class="active"><a href="#"><span class="glyphicon glyphicon-home" aria-hidden="true"></span> Home</a></li>
<li><a href="#"><span class="glyphicon glyphicon-info-sign" aria-hidden="true"></span> About</a></li>
<li><a href="#"><span class="glyphicon glyphicon-envelope" aria-hidden="true"></span> Contact</a></li>

(by the way, the span tags are used here to prevent the icons from getting mixed with other components).

And here’s the result:

Add Navigation Menu

Add Navigation Menu

Glyphicons, though useful, are also limited. And here’s where Font Awesome enters the scene. Font Awesome is a complete icon / font / CSS toolkit that integrates seamlessly with Bootstrap.

Not only can you add a whole lot of other icons to your pages, but you can also resize them, cast shadows, change colors, and apply many other options using CSS. However, since dealing with CSS is outside the scope of this series, we will only deal with the default-sized icons, but we encourage you at the same time to “dig a little deeper” to discover how far this tool can take you.

To download Font Awesome and incorporate it into your project, execute the following commands (or feel free to go directly to the project’s web site and download the zip file through your browser and decompress it using GUI tools):

# wget http://fortawesome.github.io/Font-Awesome/assets/font-awesome-4.3.0.zip

(yes, the domain name is actually FortAwesome, with an R, so that is not a typo).

# unzip font-awesome-4.3.0.zip
# cp font-awesome-4.3.0/css/font-awesome.min.css /home/gabriel/NetBeansProjects/TecmintTest/public_html/styles
# cp font-awesome-4.3.0/fonts/* /home/gabriel/NetBeansProjects/TecmintTest/public_html/fonts

Then add the .css file to the references list at the top of our page, just like we did with jQuery and Bootstrap earlier (remember that you don’t have to type everything – just drag the file from the Projects tab into the code window):

Add Font Awesome

Add Font Awesome

Let’s take the dropdown list in our navigation bar, for example:

Dropdown List

Dropdown List

Nice, right? All it takes is replacing the contents of the existing ul class named dropdown-menu at the bottom of index.php with:

<li><a href="#"><i class="fa fa-pencil fa-fw"></i> Edit</a></li>
<li><a href="#"><i class="fa fa-trash-o fa-fw"></i> Delete</a></li>
<li><a href="#"><i class="fa fa-ban fa-fw"></i> Ban</a></li>
<li class="divider"></li>
<li><a href="#"><i class="i"></i> Make admin</a></li>

Believe me – investing your time in learning how to use these tools will be a very rewarding experience.

Where to Ask for Help

As an IT person, you must be well acquainted with the many resources for help the Internet has made available. Since doing web development is not an exception, here are a few resources that we’re sure you will find useful while tuning your applications.

When dealing with Javascript code (for example, when working with jQuery as we did in Part 2), you will want to use JSHint, an online Javascript code quality checker that aims at helping developers detect errors and potential problems. When those pitfalls are found, JSHint indicates the line number where they are located and gives you hints to fix them:

JSHint Tool to Detect Errors

JSHint Tool to Detect Errors

That surely looks great, but even with this great automated tool, there will be times when you will need someone else to take a look at your code and tell you how to fix it or otherwise improve it, which implies sharing it somehow.

JSFiddle (an online Javascript / CSS / HTML code tester) and Bootply (same as JSFiddle but specialized in Bootstrap code) let you save code snippets (also known as fiddles) and provide you a link to share them very easily over the Internet (either via email with your friends, using your social network profiles, or in forums).

Summary

In this article we have provided you with a few tips to tune your web applications and shared some resources that will come in handy if you get stuck or want another pair of eyes (and not just one, but many) to take a look at your code to see how it can be improved. Chances are that you know of other resources as well. We hope that this series has given you a glimpse of the vast possibilities of mobile-friendly and responsive web development.


SUSE OpenStack Cloud 9 – coming soon!


Here at SUSE, we’re very excited to let you know that the latest version of SUSE OpenStack Cloud is due to be released later this month. In fact, you might say that we’re on cloud 9.

The main event

Now that I have that dreadful pun out of the way, let me tell you a little bit about this release. Designed and engineered to take the pain out of implementing a software-defined infrastructure, SUSE OpenStack Cloud enables customers to transform their IT infrastructure so they can meet the business challenges of today and tomorrow. Based on the OpenStack Rocky release, SUSE OpenStack Cloud 9 delivers enhanced agility and improved time to value, while simplifying the transition of traditional workloads. New features and improved functionality include:

  • Improved agility with a new user interface that simplifies post-deployment cloud operations. The new day two UI, CLM Admin Console simplifies post-deployment cloud operations for customers choosing the CLM installation of SUSE OpenStack Cloud, through its familiar interface.
  • Simplifying the transition of traditional workloads to OpenStack private cloud and maximizing the value of your existing IT investments. Businesses can now choose to customize their bare metal servers for specific workload performance and use case needs, further simplifying the transition of traditional or business-critical workloads to their OpenStack private cloud.
  • Improved time to value by reducing the complexity of delivering an enterprise-grade software-defined infrastructure. Powered by the OpenStack Rocky release and built on SUSE Linux Enterprise 12 SP4, SUSE OpenStack Cloud takes the complexity out of deploying a mature, stable and robust private cloud that has been designed for mission-critical workloads.

Moving to a software-defined infrastructure is a key part of many organizations’ digital transformation, and increasing numbers of businesses are choosing SUSE OpenStack Cloud to power their software-defined infrastructure. It gives them the business agility they need to compete in today’s market, as well as enabling them to make better use of their existing IT infrastructure for cost benefit, while taking advantage of new business opportunities and rapidly evolving technology trends such as DevOps, containers and AI.

The winning combination

Combining SUSE OpenStack Cloud with SUSE Enterprise Storage makes it easier for businesses to build a software-defined infrastructure with cost-effective, near-infinite storage scalability. Those companies concerned about an IT skills gap delaying their digital transformation may be interested in SUSE Select Services, which offers a 12-month, fixed-price solution combining implementation, consulting, knowledge transfer and premium support to jumpstart their private cloud investment and help them realize the benefits sooner.

Gonna fly now

One thing’s for sure, SUSE OpenStack Cloud 9 powered by OpenStack Rocky can keep your business going the distance through its IT transformation, and with SUSE Select Services to support them, your internal IT teams won’t be holding out for a hero with a burning heart. Before the final countdown commences though, I’m gonna fly now as there’s no easy way out of these terrible attempts at puns. What’s your favourite tune from the Rocky series of films? In my view, they’re all great and the perfect addition to my workout playlist, but Gonna Fly Now always helps me push a little bit harder on the treadmill!

If you’re visiting Nashville for SUSECON, then please come and say hello to hear more about this upcoming release. We’ll also be at the inaugural Open Infrastructure Summit in Denver at the end of April, so come along there to hear more about SUSE OpenStack Cloud 9 and to hear about how it could help you to take the stress out of your software-defined infrastructure.

Note: OpenStack Rocky is not affiliated with, supported or endorsed by Rocky Balboa. No copyright infringement intended. Other Rockys are available – see Rocky Mountains, Rocky Road, Rocky the Flying Rooster, Rocky and Bullwinkle and Rocky Horror for more details.



Linux INTERVIEW QUESTIONS & ANSWERS

11 Basic Linux Interview Questions and Answers

Theory apart, we are proud to announce a new section on Tecmint dedicated to Linux interviews. Here we will bring you Linux interview questions covering all aspects of Linux, which is a must for any professional in today's cut-throat, competitive world.

Basic Linux Interview Questions

A new article in this section (Linux Interview) will be posted every weekend. This initiative by Tecmint is the first of its kind among Linux-dedicated websites, alongside our quality and unique articles.

We will start with basic Linux interview questions and advance article by article; your response is highly appreciated and keeps us motivated.

Q.1: What is the core of Linux Operating System?
  1. Shell
  2. Kernel
  3. Command
  4. Script
  5. Terminal
Answer : The Kernel is the core of the Linux operating system. A Shell is a command-line interpreter, a Command is a user instruction to the computer, a Script is a collection of commands stored in a file, and a Terminal is a command-line interface.
Q.2: What did Linus Torvalds create?
  1. Fedora
  2. Slackware
  3. Debian
  4. Gentoo
  5. Linux
Answer : Linus Torvalds created Linux, which is the kernel (heart) of all of the above operating systems and of every other Linux distribution.
Q.3: Torvalds wrote most of the Linux kernel in the C++ programming language. Do you agree?
Answer : No! The Linux kernel contains 12,020,528 lines of code, of which 2,151,595 lines are comments. Of the remaining 9,868,933 lines of code, 7,896,318 are written in the C programming language.

The remaining 1,972,615 lines of code are written in C++, Assembly, Perl, shell script, Python, Bash script, HTML, awk, yacc, lex, sed, etc.

Note : The number of lines of code varies on a daily basis, with an average of more than 3,509 lines being added to the kernel every day.

Q.4: Linux was initially developed for the Intel x86 architecture, but has since been ported to more hardware platforms than any other operating system. Do you agree?
Answer : Yes, I do agree. Linux was written for x86 machines and has been ported to all kinds of platforms. Today more than 90% of supercomputers run Linux, and Linux has a very promising future in mobile phones and tablets. In fact, we are surrounded by Linux in remote controls, space science, research, the web and desktop computing. The list is endless.
Q.5: Is it legal to edit Linux Kernel?
Answer : Yes. The kernel is released under the GNU General Public License (GPL), and anyone can edit the Linux kernel to the extent permitted under the GPL. The Linux kernel falls under the category of Free and Open Source Software (FOSS).
Q.6: What is the basic difference between the UNIX and Linux operating systems?
Answer : The Linux operating system is Free and Open Source Software, whose kernel was created by Linus Torvalds and the community. You cannot say that UNIX never falls under the category of Free and Open Source Software: BSD is a variant of UNIX that is FOSS. Moreover, big companies like Apple, IBM, Oracle and HP contribute to their own UNIX kernels.
Q. 7: Choose the odd one out.
  1. HP-UX
  2. AIX
  3. OSX
  4. Slackware
  5. Solaris
Answer : Slackware is the odd one out in the above list. HP-UX, AIX, OSX and Solaris are developed by HP, IBM, Apple and Oracle respectively, and all are UNIX variants. Slackware is a Linux operating system.
Q.8: Is Linux Operating system Virus free?
Answer : No! There is no operating system on this earth that is completely virus free. However, Linux is known to have the fewest viruses to date, yes, even fewer than UNIX. Linux has had about 60-100 viruses listed to date, none of which are actively spreading nowadays. A rough estimate for UNIX is between 85 and 120 viruses reported to date.
Q.9: Linux is which kind of Operating System?
  1. Multi User
  2. Multi Tasking
  3. Multi Process
  4. All of the above
  5. None of the above
Answer : All of the above. Linux is an operating system that supports multiple users and runs a number of processes performing different tasks simultaneously.
Q.10: Syntax of any Linux command is:
  1. command [options] [arguments]
  2. command options [arguments]
  3. command [options] [arguments]
  4. command options arguments
Answer : The correct syntax of a Linux command is command [options] [arguments].
Q.11: Choose the odd one out.
  1. Vi
  2. vim
  3. cd
  4. nano
Answer : The odd one out in the above list is cd. Vi, vim and nano are editors used for editing files, while the cd command is used for changing directories.

That’s all for now. How much did you learn from the above questions? How did they help you in your interview? We would like to hear all of this from you in our comment section. Wait until next weekend for a new set of questions.

Basic Linux Interview Questions and Answers – Part II

Continuing the interview series, we present 10 questions in this article. These questions, and the questions in future articles, were not necessarily asked in any interview. We are offering you an interactive learning platform through these kinds of posts, which we hope will be helpful.

Basic Linux Interview Questions – 2

After analysing the comments in different forums on the last article of this series, 11 Basic Linux Interview Questions, it is important to mention that bringing a quality article to our readers costs us time and money, and in return what do we expect from you? Nothing. If you can't praise our work, please don't demoralize us with negative comments.

If you find nothing new in a post, don't forget that it was helpful to someone, who was thankful for it. We can't make everyone happy with every article; we hope you readers will take the trouble to understand this.

Q.1: Which command is used to record a user login session in a file?
  1. macro
  2. read
  3. script
  4. record
  5. sessionrecord
Answer : The 'script' command is used to record a user's login session in a file. The script command can be used inside a shell script or directly in the terminal. Here is an example that records everything between running script and typing exit.

Let’s record the user’s login session with script command as shown.

[root@tecmint ~]# script my-session-record.txt

Script started, file is my-session-record.txt

The content of the log file 'my-session-record.txt' can be viewed as follows:

[root@tecmint ~]# nano my-session-record.txt

script started on Friday 22 November 2013 08:19:01 PM IST
[root@tecmint ~]# ls
^[[0m^[[01;34mBinary^[[0m ^[[01;34mDocuments^[[0m ^[[01;34mMusic^[[0m $
^[[01;34mDesktop^[[0m ^[[01;34mDownloads^[[0m my-session-record.txt ^[[01;34$
Q.2: The kernel log message can be viewed using which of the following command?
  1. dmesg
  2. kernel
  3. ls -i
  4. uname
  5. None of the above
Answer : The kernel log messages can be viewed by executing the 'dmesg' command. Of the other options, kernel is not a valid Linux command, 'ls -i' lists files with their inode numbers in the working directory, and the 'uname' command shows operating system information.
[root@tecmint ~]# dmesg

Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-279.el6.i686 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Jun 22 10:59:55 UTC 2012
KERNEL supported cpus:
  Intel GenuineIntel
  AMD AuthenticAMD
  NSC Geode by NSC
  Cyrix CyrixInstead
  Centaur CentaurHauls
  Transmeta GenuineTMx86
  Transmeta TransmetaCPU
  UMC UMC UMC UMC
Disabled fast string operations
BIOS-provided physical RAM map:
...
Q.3: Which command is used to display the release of Linux Kernel?
  1. uname -v
  2. uname -r
  3. uname -m
  4. uname -n
  5. uname -o
Answer : The command 'uname -r' displays the kernel release information. The switches '-v', '-m', '-n' and '-o' display the kernel version, machine hardware name, network node hostname and operating system, respectively.
[root@tecmint ~]# uname -r

2.6.32-279.el6.i686
Q.4: Which command is used to identify the types of file?
  1. type
  2. info
  3. file
  4. which
  5. ls
Answer : The 'file' command is used to identify the type of a file. The syntax is 'file [option] file_name'.
[root@tecmint ~]# file wtop

wtop: POSIX shell script text executable
Q.5: Which command locates the binary, source and man page of a command?
Answer : The 'whereis' command comes to the rescue here. It locates the binary, source and manual page files for a command.
[root@tecmint ~]# whereis /usr/bin/ftp

ftp: /usr/bin/ftp /usr/share/man/man1/ftp.1.gz
Q.6: When a user logs in, which files are read for the user profile by default?
Answer : The '.bash_profile' (or '.profile') and '.bashrc' files present in the user's home directory are read for the user profile by default, as the listing below shows.
[root@tecmint ~]# ls -al
-rw-r--r--.  1 tecmint     tecmint            176 May 11  2012 .bash_profile
-rw-r--r--.  1 tecmint     tecmint            124 May 11  2012 .bashrc
Q.7: The ‘resolv.conf’ file is a configuration file for?
Answer : The ‘/etc/resolv.conf’ is the configuration file for DNS at client side.
[root@tecmint ~]# cat /etc/resolv.conf

nameserver 172.16.16.94
Q.8: Which command is used to create soft link of a file?
  1. ln
  2. ln -s
  3. link
  4. link -soft
  5. None of the above
Answer : The ‘ln -s’ command is used to create soft link of a file in Linux Environment.
[root@tecmint ~]# ln -s /etc/httpd/conf/httpd.conf httpd.original.conf
Q.9: The command ‘pwd’ is an alias of command ‘passwd’ in Linux?
Answer : No! The command 'pwd' is not an alias of the command 'passwd'. 'pwd' stands for 'print working directory', which shows the current directory, while 'passwd' is used to change the password of a user account in Linux.
[root@tecmint ~]# pwd

/home/tecmint
[root@tecmint ~]# passwd
Changing password for user root.
New password:
Retype new password:
Q.10: How will you check the vendor and version of PCI devices on a Linux system?
Answer : The Linux command 'lspci' comes to the rescue here.
[root@tecmint ~]# lspci

00:00.0 Host bridge: Intel Corporation 5000P Chipset Memory Controller Hub (rev b1)
00:02.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 2-3 (rev b1)
00:04.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 4-5 (rev b1)
00:06.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 6-7 (rev b1)
00:08.0 System peripheral: Intel Corporation 5000 Series Chipset DMA Engine (rev b1)
...

That’s all for now. I hope the above questions are helpful to you. Next weekend we will come up with a new set of questions.

10 Linux Interview Questions and Answers for Linux Beginners – Part 3

Continuing the interview questions series, and with big thanks for the nice feedback on the last two articles, we are here presenting 10 more questions for interactive learning.

  1. 11 Basic Linux Interview Questions and Answers – Part 1
  2. 10 Basic Linux Interview Questions and Answers – Part II

Linux Interview Questions Part – 3

1. How will you add a new user (say, tux) to your system?
  1. useradd command
  2. adduser command
  3. linuxconf command
  4. All of the above
  5. None of the above
Answer : All of the above commands, i.e., useradd, adduser and linuxconf, will add a user to the Linux system.
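For example, a minimal sketch using useradd (tux is just an illustrative username), followed by setting the new user's password:

# useradd tux
# passwd tux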
2. How many primary partitions are possible on one drive?
  1. 1
  2. 2
  3. 4
  4. 16
Answer : A maximum of ‘4‘ primary partitions are possible on a drive (with MBR partitioning).
3. The default port for Apache/Http is?
  1. 8080
  2. 80
  3. 8443
  4. 91
  5. None of the above.
Answer : By default Apache/Http is configured on port 80.
4. What does GNU stand for?
  1. GNU’s not Unix
  2. General Unix
  3. General Noble Unix
  4. Greek Needed Unix
  5. None of the above
Answer : GNU stands for 'GNU's Not Unix'.
5. You typed "mysql" at the shell prompt and got back "can't connect to local MySQL server through socket '/var/mysql/mysql.sock'". What would you check first?
Answer : Seeing the error message, I would first check whether MySQL is running, using the command service mysql status or service mysqld status. If the MySQL service is not running, it needs to be started.

Note: The above error message can also be the result of a badly configured my.cnf or incorrect MySQL user permissions. If starting the MySQL service doesn't help, you need to look into these issues.

6. How to Mount a windows ntfs partition on Linux?
Answer : First install the ntfs-3g package on the system using apt or yum, and then use the command sudo mount -t ntfs-3g /dev/<Windows-partition> <mount-point> to mount the Windows partition on Linux.
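A minimal sketch, assuming the Windows partition is /dev/sdb1 and the mount point is /mnt/windows (both are placeholders):

# yum install ntfs-3g          (on Debian/Ubuntu: apt-get install ntfs-3g)
# mkdir -p /mnt/windows
# mount -t ntfs-3g /dev/sdb1 /mnt/windows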
7. From the following which is not an RPM based OS.?
  1. RedHat Linux
  2. Centos
  3. Scientific Linux
  4. Debian
  5. Fedora
Answer : The 'Debian' operating system is not RPM-based; all of the others listed above are 'RPM'-based.
8. Which command can be used to rename a file in Linux.?
  1. mv
  2. ren
  3. rename
  4. change
  5. None of the Above
Answer : The mv command is used to rename a file in Linux. For example, mv /path_to_File/original_file_name.extension /Path_to_File/New_name.extension.
9. Which command is used to create and display file in Linux?
  1. ed
  2. vi
  3. cat
  4. nano
  5. None of the above
Answer : The 'cat' command can be used to both create and display a file in Linux.
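For instance (notes.txt is a hypothetical file name), cat with output redirection creates a file, and cat alone displays it:

# cat > notes.txt
This is a test file.
(press Ctrl+d to save and return to the prompt)
# cat notes.txt
This is a test file.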
10. Which layer's protocols are responsible for user and application program support, such as passwords, resource sharing, file transfer and network management?
  1. Layer 4 protocols
  2. Layer 5 protocols
  3. Layer 6 protocols
  4. Layer 7 protocols
  5. None of the above
Answer : 'Layer 7 protocols' (the application layer) are responsible for user and application program support such as passwords, resource sharing, file transfer and network management.

That’s all for now. I will be writing on another useful topic soon.

15 Basic MySQL Interview Questions for Database Administrators

Prior to this article, three articles had already been published in the ‘Linux Interview‘ section, and all of them were highly appreciated by our notable readers; however, we received feedback asking us to make this interactive learning process section-wise. From idea to action, we are providing you with 15 MySQL interview questions.

Mysql Interview Questions

1. How would you check if MySql service is running or not?
Answer : Issue the command "service mysql status" on Debian and "service mysqld status" on RedHat, then check the output, and you're done.
root@localhost:/home/avi# service mysql status

/usr/bin/mysqladmin  Ver 8.42 Distrib 5.1.72, for debian-linux-gnu on i486
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version 5.1.72-2
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 1 hour 22 min 49 sec

Threads: 1  Questions: 112138  Slow queries: 1  Opens: 1485  Flush tables: 1  Open tables: 64  Queries per second avg: 22.567.
2. If the service is running/stopped, how would you stop/start the service?
Answer : To start the MySQL service use service mysqld start, and to stop it use service mysqld stop (on Debian, service mysql start/stop).
root@localhost:/home/avi# service mysql stop

Stopping MySQL database server: mysqld.

root@localhost:/home/avi# service mysql start

Starting MySQL database server: mysqld.

Checking for corrupt, not cleanly closed and upgrade needing tables..
3. How will you login to MySQL from Linux Shell?
Answer : To connect or login to MySQL service, use command: mysql -u root -p.
root@localhost:/home/avi# mysql -u root -p 
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g. 
Your MySQL connection id is 207 
Server version: 5.1.72-2 (Debian) 

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved. 

Oracle is a registered trademark of Oracle Corporation and/or its 
affiliates. Other names may be trademarks of their respective 
owners. 

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 

mysql>
4. How will you obtain list of all the databases?
Answer : To list all currently running databases run the command on mysql shell as: show databases;
mysql> show databases; 
+--------------------+ 
| Database           | 
+--------------------+ 
| information_schema | 
| a1                 | 
| cloud              | 
| mysql              | 
| phpmyadmin         | 
| playsms            | 
| sisso              | 
| test               | 
| ukolovnik          | 
| wordpress          | 
+--------------------+ 
10 rows in set (0.14 sec)
5. How will you switch to a database, and start working on that?
Answer : To use or switch to a specific database run the command on mysql shell as: use database_name;
mysql> use cloud; 
Reading table information for completion of table and column names 
You can turn off this feature to get a quicker startup with -A 

Database changed 
mysql>
6. How will you get the list of all the tables, in a database?
Answer : To list all the tables of a database use the command on mysql shell as: show tables;
mysql> show tables; 
+----------------------------+ 
| Tables_in_cloud            | 
+----------------------------+ 
| oc_appconfig               | 
| oc_calendar_calendars      | 
| oc_calendar_objects        | 
| oc_calendar_repeat         | 
| oc_calendar_share_calendar | 
| oc_calendar_share_event    | 
| oc_contacts_addressbooks   | 
| oc_contacts_cards          | 
| oc_fscache                 | 
| oc_gallery_sharing         | 
+----------------------------+ 
10 rows in set (0.00 sec)
7. How will you get the Field Name and Type of a MySql table?
Answer : To get the Field Name and Type of a table use the command on mysql shell as: describe table_name;
mysql> describe oc_users; 
+----------+--------------+------+-----+---------+-------+ 
| Field    | Type         | Null | Key | Default | Extra | 
+----------+--------------+------+-----+---------+-------+ 
| uid      | varchar(64)  | NO   | PRI |         |       | 
| password | varchar(255) | NO   |     |         |       | 
+----------+--------------+------+-----+---------+-------+ 
2 rows in set (0.00 sec)
8. How will you delete a table?
Answer : To delete a specific table use the following command in the mysql shell: drop table table_name;
mysql> drop table lookup; 

Query OK, 0 rows affected (0.00 sec)
9. What about database? How will you delete a database?
Answer : To delete a specific database use the following command in the mysql shell: drop database database-name;
mysql> drop database a1; 

Query OK, 11 rows affected (0.07 sec)
10. How will you see all the contents of a table?
Answer : To view all the contents of a particular table use the command on mysql shell as: select * from table_name;
mysql> select * from engines; 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
| ENGINE     | SUPPORT | COMMENT                                                        | TRANSACTIONS | XA   | SAVEPOINTS | 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
| InnoDB     | YES     | Supports transactions, row-level locking, and foreign keys     | YES          | YES  | YES        | 
| MRG_MYISAM | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         | 
| BLACKHOLE  | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         | 
| CSV        | YES     | CSV storage engine                                             | NO           | NO   | NO         | 
| MEMORY     | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         | 
| FEDERATED  | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       | 
| ARCHIVE    | YES     | Archive storage engine                                         | NO           | NO   | NO         | 
| MyISAM     | DEFAULT | Default engine as of MySQL 3.23 with great performance         | NO           | NO   | NO         | 
+------------+---------+----------------------------------------------------------------+--------------+------+------------+ 
8 rows in set (0.00 sec)
11. How will you see all the data in a field (say, uid), from table (say, oc_users)?
Answer : To view all the data in a field use the command on mysql shell as: select uid from oc_users;
mysql> select uid from oc_users; 
+-----+ 
| uid | 
+-----+ 
| avi | 
+-----+ 
1 row in set (0.03 sec)
12. Say you have a table ‘xyz’, which contains several fields including ‘create_time’ and ‘engine’. The field ‘engine’ is populated with two types of data ‘Memory’ and ‘MyIsam’. How will you get only ‘create_time’ and ‘engine’ from the table where engine is ‘MyIsam’?
Answer : Use the following command in the mysql shell: select create_time, engine from xyz where engine="MyIsam";
mysql> select create_time, engine from xyz where engine="MyIsam";

+---------------------+--------+ 
| create_time         | engine | 
+---------------------+--------+ 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-12-15 13:43:27 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
| 2013-10-23 14:56:38 | MyISAM | 
+---------------------+--------+ 
132 rows in set (0.29 sec)
13. How will you show all the records from table ‘xrt’ where name is ‘tecmint’ and web_address is ‘tecmint.com’?
Answer : Use the following command in the mysql shell: select * from xrt where name = "tecmint" and web_address = "tecmint.com";
mysql> select * from xrt where name = "tecmint" and web_address = "tecmint.com";
+----+---------+-------------+ 
| Id | name    | web_address | 
+----+---------+-------------+ 
| 13 | tecmint | tecmint.com |
| 41 | tecmint | tecmint.com |
+----+---------+-------------+
14. How will you show all the records from table ‘xrt’ where name is not ‘tecmint’ and web_address is ‘tecmint.com’?
Answer : Use the following command in the mysql shell: select * from xrt where name != "tecmint" and web_address = "tecmint.com";
mysql> select * from xrt where name != "tecmint" and web_address = "tecmint.com";

+---------------+---------------------+---------------+ 
| Id            | name                | web_address   | 
+---------------+---------------------+----------------+ 
| 1173          |  tecmint            | tecmint.com   |
+---------------+---------------------+----------------+
15. You need to know total number of row entry in a table. How will you achieve it?
Answer : Use the command on mysql shell as: select count(*) from table_name;
mysql> select count(*) from Tables; 

+----------+ 
| count(*) | 
+----------+ 
|      282 | 
+----------+ 
1 row in set (0.01 sec)

Read Also : 10 MySQL Database Interview Questions Intermediates

That’s all for now. How do you feel about this ‘Linux Interview Questions‘ section? Don’t forget to provide us with your valuable feedback in our comment section.

25 Apache Interview Questions for Beginners and Intermediates

We are very thankful to all our readers for the response we are getting to our new Linux Interview section. We have now started section-wise learning for interview questions, and continuing with the same, today’s article focuses on basic to intermediate Apache interview questions that will help you prepare.

Apache Job Interview Questions

In this section, we have covered 25 interesting Apache job interview questions along with their answers, so that you can easily learn some new things about Apache that you may never have known before.

Before you read this article, we strongly recommend that you don’t try to memorize the answers; always try first to understand the scenarios on a practical basis.

1. What is Apache web server?
Answer : The Apache HTTP web server is one of the most popular, powerful and open source web servers, used to host websites by serving web files over the network. It works on HTTP (Hypertext Transfer Protocol), which provides a standard for servers and client-side web browsers to communicate. It supports SSL, CGI files, virtual hosting and many other features.
2. How to check Apache and its version?
Answer : First, use the rpm command to check whether Apache is installed or not. If it is installed, then use the httpd -v command to check its version.
[root@tecmint ~]# rpm -qa | grep httpd

httpd-devel-2.2.15-29.el6.centos.i686
httpd-2.2.15-29.el6.centos.i686
httpd-tools-2.2.15-29.el6.centos.i686
[root@tecmint ~]# httpd -v

Server version: Apache/2.2.15 (Unix)
Server built:   Aug 13 2013 17:27:11
3. Which user does Apache run as, and where is its main configuration file located?
Answer : The Apache httpd daemon traditionally runs as the user “nobody” (modern packages typically use “apache” or “www-data”). Apache’s main configuration file is /etc/httpd/conf/httpd.conf (CentOS/RHEL/Fedora) or /etc/apache2/apache2.conf (Ubuntu/Debian).
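To confirm which user Apache runs as on a particular machine, you could check the User directive and the running processes; a small sketch assuming the RHEL-style paths:

# grep -i '^User' /etc/httpd/conf/httpd.conf
# ps aux | grep httpd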
4. On which port Apache listens http and https both?
Answer : By default Apache runs on http port 80 and https port 443 (for SSL certificate). You can also use netstat command to check ports.
[root@tecmint ~]# netstat -antp | grep http

tcp        0      0 :::80                       :::*                        LISTEN      1076/httpd          
tcp        0      0 :::443                      :::*                        LISTEN      1076/httpd
5. How do you install Apache Server on your Linux machine?
Answer : Simply, you can use any package installer such as yum on (RHEL/CentOS/Fedora) and apt-get on (Debian/Ubuntu) to install Apache server on your Linux machine.
[root@tecmint ~]# yum install httpd
[root@tecmint ~]# apt-get install apache2
6. Where you can find all configuration directories of Apache Web Server?
Answer : By default, the Apache configuration directories are installed under /etc/httpd/ on RHEL/CentOS/Fedora and /etc/apache2 on Debian/Ubuntu.
[root@tecmint ~]# cd /etc/httpd/
[root@tecmint httpd]# ls -l
total 8
drwxr-xr-x. 2 root root 4096 Dec 24 21:44 conf
drwxr-xr-x. 2 root root 4096 Dec 25 02:09 conf.d
lrwxrwxrwx  1 root root   19 Oct 13 19:06 logs -> ../../var/log/httpd
lrwxrwxrwx  1 root root   27 Oct 13 19:06 modules -> ../../usr/lib/httpd/modules
lrwxrwxrwx  1 root root   19 Oct 13 19:06 run -> ../../var/run/httpd
[root@tecmint ~]# cd /etc/apache2
[root@tecmint apache2]# ls -l
total 84
-rw-r--r-- 1 root root  7113 Jul 24 16:15 apache2.conf
drwxr-xr-x 2 root root  4096 Dec 16 11:48 conf-available
drwxr-xr-x 2 root root  4096 Dec 16 11:45 conf.d
drwxr-xr-x 2 root root  4096 Dec 16 11:48 conf-enabled
-rw-r--r-- 1 root root  1782 Jul 21 02:14 envvars
-rw-r--r-- 1 root root 31063 Jul 21 02:14 magic
drwxr-xr-x 2 root root 12288 Dec 16 11:48 mods-available
drwxr-xr-x 2 root root  4096 Dec 16 11:48 mods-enabled
-rw-r--r-- 1 root root   315 Jul 21 02:14 ports.conf
drwxr-xr-x 2 root root  4096 Dec 16 11:48 sites-available
drwxr-xr-x 2 root root  4096 Dec  6 00:04 sites-enabled

7. Can Apache be secured with TCP wrappers?

Answer : No, it can’t be secured with TCP wrappers, since Apache is not compiled against the libwrap.a library.
8. How to change default Apache Port and How Listen Directive works in Apache?
Answer : There is a directive “Listen” in the httpd.conf file which allows us to change the default Apache port. With the help of the Listen directive we can make Apache listen on a different port as well as on different interfaces.

Suppose you have multiple IPs assigned to your Linux machine and want Apache to receive HTTP requests only on a specific IP address or interface; even that can be done with the Listen directive.

To change the Apache default port, open your Apache main configuration file httpd.conf or apache2.conf file with the vi editor.

[root@tecmint ~]# vi /etc/httpd/conf/httpd.conf

[root@tecmint ~]# vi /etc/apache2/apache2.conf

Search for the word "Listen", comment out the original line and write your own directive below that line.

# Listen 80
Listen 8080

OR

Listen 172.16.16.1:8080

Save the file and restart the web server.

[root@tecmint ~]# service httpd restart

[root@tecmint ~]# service apache2 restart
9. Can we have two Apache Web servers on a single machine?
Answer : Yes, we can run two different Apache instances at the same time on one Linux machine, on the condition that they listen on different ports; we can change the ports with Apache's Listen directive.
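As a rough sketch (the second configuration file name and port 8080 are assumptions, and the copy would also need its own PidFile, log and DocumentRoot settings), you could run a second instance against a separate configuration file:

# cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd-second.conf
# vi /etc/httpd/conf/httpd-second.conf        (change Listen 80 to Listen 8080)
# httpd -f /etc/httpd/conf/httpd-second.conf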
10. What do you mean by DocumentRoot of Apache?
Answer : DocumentRoot in Apache is the location on the server where the web files are stored. The default DocumentRoot of Apache is /var/www/html or /var/www. This can be changed to anything by setting “DocumentRoot” in the virtual host section of the domain’s configuration file.
11. How to host files in a different folder, and what is the Alias directive?
Answer : This can be achieved with the Alias directive in the main Apache configuration file. The Alias directive maps resources in the file system; it takes a URL path and substitutes it with a file or directory path on the system that is set up to serve the content.

The Alias directive is part of Apache's mod_alias module. The default syntax of the Alias directive is:

Alias /images /var/data/images/

Here in the above example, the /images URL prefix is mapped to the /var/data/images/ directory, which means that when clients request “http://www.example.com/images/sample-image.png”, Apache will pick up the “sample-image.png” file from /var/data/images/sample-image.png on the server. This is also known as URL mapping.

12. What do you understand by “DirectoryIndex”?
Answer : DirectoryIndex is the name of the first file which Apache looks for when a request comes in for a directory. For example, when www.example.com is requested by the client, Apache goes to the document root of that website and looks for the index file (the first file to display).

The default setting of DirectoryIndex is typically index.html index.php; if your first file has a different name, you need to change the DirectoryIndex value in httpd.conf or apache2.conf so that it is served to the client browser.

#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
# The index.html.var file (a type-map) is used to deliver content-
# negotiated documents.  The MultiViews Option can be used for the
# same purpose, but it is much slower.
#
DirectoryIndex index.html index.html.var index.cgi .exe
13. How to disable Directory listing when an index file is missing?
Answer : If the main index file is missing from the website root directory, then Apache will list all the contents (files and folders) of the website in the browser instead of the main website pages.

To stop Apache directory listing, you can set the following rule in the main configuration file globally, or in the .htaccess file for a particular website.

<Directory /var/www/html>
   Options -Indexes
</Directory>
14. What are different log files of Apache Web Server?
Answer : The default log files of the Apache web server are the access log “/var/log/httpd/access_log” and the error log “/var/log/httpd/error_log”.
15. What do you understand by “connection reset by peer” in error logs?
Answer : When the server is serving an ongoing Apache request and the end user terminates the connection midway, we see “connection reset by peer” in the Apache error logs.
16. What is Virtual Host in Apache?
Answer : The Virtual Host section contains the information like Website name, Document root, Directory Index, Server Admin Email, ErrorLog File location etc.

You are free to add as many directives as you require for your domain, but the two minimal entries for a working website are ServerName and DocumentRoot. We usually define our virtual host sections at the bottom of the httpd.conf file on Linux machines.

Sample VirtualHost
<VirtualHost *:80>
   ServerAdmin webmaster@dummy-host.example.com
   DocumentRoot /www/docs/dummy-host.example.com
   ServerName dummy-host.example.com
   ErrorLog logs/dummy-host.example.com-error_log
   CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>
  1. ServerAdmin : It's usually the email address of the website owner, to which errors or notifications can be sent.
  2. DocumentRoot : The location where the web files are stored on the server (necessary).
  3. ServerName : The domain name which you want to access from your web browser (necessary).
  4. ErrorLog : The location of the log file where all the domain-related logs are recorded.
17. What’s the difference between <Location> and <Directory>?

Answer :

  1. <Location> is used to set elements related to the URL / address bar of the web server.
  2. <Directory> refers to the location of a file system object on the server.
18. What is Apache Virtual Hosting?
Answer : Apache virtual hosting is the concept of hosting multiple websites on a single web server. Two types of virtual hosts can be set up with Apache: name-based virtual hosting and IP-based virtual hosting.

For more information, read on How to Create Name/IP based Virtual Hosts in Apache.

19. What do you understand by MPM in Apache?
Answer : MPM stands for Multi-Processing Module; it is the mechanism Apache uses to accept and complete web server requests.
20. What is the difference between Worker and Prefork MPM?
Answer : Both MPMs, Worker and Prefork, have their own mechanism for handling requests. It is entirely up to you which mode you want to start Apache in.
  1. The basic difference between the Prefork and Worker MPMs is in how they spawn child processes. In the Prefork MPM, a master httpd process is started, and this master process manages all the other child processes that serve client requests; in the Worker MPM, one httpd process is active and it uses multiple threads to serve client requests.
  2. The Prefork MPM uses multiple child processes with one thread each, whereas the Worker MPM uses multiple child processes with many threads each.
  3. For connection handling, in the Prefork MPM each process handles one connection at a time, whereas in the Worker MPM each thread handles one connection at a time.
  4. The Prefork MPM has a larger memory footprint, whereas Worker has a smaller memory footprint.
21. What’s the use of “LimitRequestBody” and how to put limit on your uploads?
Answer : LimitRequestBody directive is used to put a limit on the upload size.

For example, to put a limit of 100000 bytes on uploads to the folder /var/www/html/tecmint/uploads, you need to add the following directive to the Apache configuration file.

<Directory "/var/www/html/tecmint/uploads">
LimitRequestBody 100000
</Directory>
22. What is mod_perl and mod _php?

Answer :

  1. mod_perl is an Apache module which is compiled with Apache for easy integration and to increase the performance of Perl scripts.
  2. mod_php is used for easy integration of PHP scripts by the web server; it embeds the PHP interpreter inside the Apache process. It forces the Apache child processes to use more memory and works with Apache only, but is still very popular.
23. What is Mod_evasive?
Answer : It's a third-party module which helps protect your web server from attacks such as DDoS; it performs only one task at a time and performs it very well.

For more information, read the article that guides you how to install and configure mod_evasive in Apache.

24. What is Loglevel debug in httpd.conf file?
Answer : With the help of the LogLevel debug option, we can log more detailed information in the error logs, which helps us to debug a problem.
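For example, the change in httpd.conf is a single line, followed by a restart (debug raises log verbosity considerably, so it is usually reverted after troubleshooting):

LogLevel debug

# service httpd restart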
25. What’s the use of mod_ssl and how SSL works with Apache?
Answer : The mod_ssl package is an Apache module which allows Apache to establish connections and transfer all data in a secure, encrypted environment. With the help of SSL certificates, all login details and other important secrets are transferred in an encrypted manner over the Internet, which protects our data from eavesdropping and IP spoofing.
How SSL works with Apache

Whenever an https request comes in, Apache follows these three steps:

  1. A private key is generated on the server and a .csr file (Certificate Signing Request) is created from that private key (a hedged example follows this list).
  2. Then Apache sends the .csr file to the CA (Certificate Authority).
  3. CA will take the .csr file and convert it to .crt (certificate) and will send that .crt file back to Apache to secure and complete the https connection request.
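A minimal sketch of step 1 using openssl (example.key and example.csr are placeholder file names; the command prompts interactively for the certificate subject details):

# openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr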

These are just the 25 most popular questions being asked these days by interviewers. Please share more interview questions that you have faced in recent interviews and help others via our comment section below.

We also recommend you read our previous articles on Apache.

  1. 13 Apache Web Server Security and Hardening Tips
  2. How to Sync Two Apache Web Servers/Websites Using Rsync

10 MySQL Database Interview Questions for Beginners and Intermediates

In our last article, we covered 15 basic MySQL questions; we are here again with another set of interview questions for intermediate users. As we said earlier, these questions can be asked in job interviews. However, some critics of the last article said that I don’t respond to my critics, and that the questions are very basic and will never be asked in any database administrator interview.

10 Mysql Job Interview Questions

To them we must admit that all the articles and questions cannot be composed with every reader in mind. We are moving from basic to expert level step by step. Please cooperate with us.

1. Define SQL?
Answer : SQL stands for Structured Query Language. SQL is a programming Language designed specially for managing data in Relational Database Management System (RDBMS).
2. What is RDBMS? Explain its features?

Answer : A Relational Database Management System (RDBMS) is the most widely used database Management System based on the Relational Database model.

Features of RDBMS
  1. Stores data in tables.
  2. Tables have rows and columns.
  3. Creation and Retrieval of Table is allowed through SQL.
3. What is Data Mining?
Answer : Data mining is a subfield of computer science which aims at extracting information from a set of data and transforming it into a human-readable structure to be used later.
4. What is an ERD?
Answer : ERD stands for Entity Relationship Diagram. Entity Relationship Diagram is the graphical representation of tables, with the relationship between them.
5. What is the difference between Primary Key and Unique Key?
Answer : Both Primary and Unique Keys enforce uniqueness of a column. A Primary Key creates a clustered index on the column, whereas a Unique Key creates a non-clustered index. Moreover, a Primary Key doesn’t allow NULL values, while a Unique Key allows one NULL value.
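A small illustration (the table name and columns are invented for this example): both constraints in one table definition, where id can never be NULL, while email must be unique but may hold a single NULL.

mysql> CREATE TABLE demo_users (
    ->   id INT NOT NULL,
    ->   email VARCHAR(100),
    ->   PRIMARY KEY (id),
    ->   UNIQUE KEY (email)
    -> );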
6. How to store picture file in the database. What Object type is used?
Answer : Storing pictures in a database is generally a bad idea. If you must store a picture in a database, the object type ‘BLOB’ is recommended.
7. What is Data Warehousing?
Answer : A data warehouse, generally referred to as an enterprise data warehouse, is a central data repository created from different data sources.
8. What are indexes in a Database. What are the types of indexes?

Answer : Indexes are quick references for fast retrieval of data from a database. There are two different kinds of indexes.

Clustered Index
  1. Only one per table.
  2. Faster to read than non clustered as data is physically stored in index order.
Non-clustered Index
  1. Can be used many times per table.
  2. Quicker for insert and update operations than a clustered index.
9. How many TRIGGERS are possible in MySql?

Answer : Only six triggers are allowed per table in a MySQL database, one for each combination of timing and event, and they are listed below (a hedged example follows the list).

  1. Before Insert
  2. After Insert
  3. Before Update
  4. After Update
  5. Before Delete
  6. After Delete
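As a hedged sketch (the table orders and its created_at column are invented for this example), a BEFORE INSERT trigger could look like this:

mysql> CREATE TRIGGER orders_before_insert
    -> BEFORE INSERT ON orders
    -> FOR EACH ROW
    -> SET NEW.created_at = NOW();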
10. What is Heap table?
Answer : Tables that reside in memory are called HEAP tables. These tables are commonly known as memory tables. Memory tables can never hold values with data types like “BLOB” or “TEXT”. They use indexes, which makes them faster.
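For example (session_cache is a hypothetical table name), a memory table is created simply by specifying the MEMORY engine:

mysql> CREATE TABLE session_cache (id INT, token VARCHAR(64)) ENGINE=MEMORY;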

That’s all for now on MySQL questions, I will be coming up with another set of questions soon. Don’t forget to provide your valuable feedback in comment section.

10 Core Linux Interview Questions and Answers

Again it’s time to read some serious content in a light mood. Yup! It’s another article on interview questions, and here we are presenting 10 core Linux questions which will surely add to your knowledge.

10 Core Linux Interview Questions

1. You need to define a macro, a key binding for the existing command. How would you do it?
Answer : There is a command called bind in the bash shell which is capable of defining a macro or binding a key. In order to bind a key to an existing command, we first need to find the character sequence emitted by the key: press Ctrl+v and then the key (here F12), which produced ^[[24~ in this case.
[root@localhost ~]# bind '"\e[24~":"date"'

Note : Different types of terminals or terminal emulators can emit different codes for the same key.

2. A user is new to Linux and wants to know the full list of available commands. What would you suggest?
Answer : The command 'compgen -c' will show a full list of available commands.
[root@localhost ~]$ compgen -c

l.
ll
ls
which
if
then
else
elif
fi
case
esac
for
select
while
until
do
done
...
3. Your assistant needs to print the directory stack. What would you suggest?
Answer : The Linux command 'dirs' will print the directory stack.
[root@localhost ~]# dirs

/usr/share/X11
4. You have lots of running jobs. How would you remove all the running processes without restarting the machine?
Answer : The Linux command 'disown -r' will remove all running jobs from the shell's job table.
5. What is the command 'hash' used for in the bash shell?
Answer : The Linux command 'hash' manages the shell's internal hash table; it finds and remembers the full path of the specified command, and displays the command names used and the number of times each command has been run.
[root@localhost ~]# hash

hits    command
   2    /bin/ls
   2    /bin/su
6. Which built-in Linux command performs arithmetic operations on integers in bash?
Answer : The 'let' command performs arithmetic operations on integers in the bash shell, as the short script below illustrates.
#! /bin/bash

# Minimal illustration: assign two integers and add them with let.
a=5
b=7
let c=a+b
echo "$c"    # prints 12
7. You have a large text file, and you need to see one page at a time. What will you do?
Answer : You can achieve the above result by piping the output of 'cat file_name.txt' into the 'more' command.
[root@localhost ~]# cat file_name.txt | more
8. Who owns the data dictionary?
Answer : The user 'SYS' owns the data dictionary. The users 'SYS' and 'SYSTEM' are created by default, automatically (this is an Oracle database convention).
9. How to find a summary of a command and its usage in Linux?

Assume you came across a command in the /bin directory which you are completely unaware of, and have no idea what it does. What will you do to find out its usage?

Answer : The command 'whatis' displays a one-line summary of the command from its man page. For example, say you would like to see a summary of the 'zcat' command, which you didn't know before.
[root@localhost ~]# whatis zcat

zcat [gzip]          (1)  - compress or expand files
10. What command should you use to check the number of files and disk space used by each user’s defined quotas?
Answer : The command 'repquota' comes to the rescue here. The repquota command summarizes quotas for a file system.
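For instance, assuming quotas are already enabled on the filesystems, the following reports on all of them:

# repquota -a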

That’s all for now. Provide your valuable Feedback in our comment section. Stay tuned for more Linux and Foss posts.

10 VsFTP (Very Secure File Transfer Protocol) Interview Questions and Answers

FTP, which stands for ‘File Transfer Protocol‘, is one of the most widely used standard protocols available over the Internet. FTP works in a server-client architecture and is used to transfer files. Initially FTP clients were command-line based; now most platforms come bundled with FTP client and server programs, and a lot of FTP client/server programs are available. Here we are presenting 10 interview questions based on VsFTP (Very Secure File Transfer Protocol) on a Linux server.

10 VsFTP Interview Questions

1. What is the Difference between TFTP and FTP Server?
Answer : TFTP (Trivial File Transfer Protocol) uses the User Datagram Protocol (UDP), whereas FTP uses the Transmission Control Protocol (TCP). By default FTP uses port number 20 for data and 21 for control, whereas TFTP uses port 69.

Note: Briefly, you can say FTP uses port 21 by default when the distinction between data and control is not required.

2. Is it possible to restrict users and disallow browsing beyond their home directories? How?
Answer : Yes! It is possible to restrict users to their home directories and prevent browsing beyond them. This can be done by enabling the chroot option in the FTP configuration file (i.e. vsftpd.conf).
chroot_local_user=YES
3. How would you manage number of FTP clients that connect to your FTP server?

Answer : We need to set the ‘max_clients’ parameter. This parameter controls the number of clients that may connect; if max_clients is set to 0, an unlimited number of clients are allowed to connect to the FTP server. The parameter is changed in vsftpd.conf, and the default value is 0.
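For example, to cap the server at 50 simultaneous clients (50 is just an illustrative value), add this line to vsftpd.conf:

# Add this line to limit the number of simultaneous FTP clients.
max_clients=50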

4. How to limit the FTP login attempts to fight against botnet/illegal login attempts?
Answer : We need to edit the ‘max_login_fails’ parameter. This parameter sets the maximum number of login attempts before the session is killed. The default value is ‘3’, which means a maximum of ‘3’ login attempts are possible, after which the session will be killed.
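For example, keeping the default limit of 3 failed attempts explicit in vsftpd.conf:

# Add this line to limit failed login attempts per session.
max_login_fails=3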
5. How to enable file upload from anonymous users to FTP Server?
Answer : Anonymous users can be allowed to upload files to the FTP server by modifying the parameter ‘anon_upload_enable’. If the value of anon_upload_enable is set to YES, anonymous users are permitted to upload files. In order to have working anonymous uploads, the parameter ‘write_enable’ must also be activated. The default value is NO, which means anonymous uploading is disabled.
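For example, both lines below would go into vsftpd.conf to permit anonymous uploads:

# Add these lines to allow anonymous users to upload files.
write_enable=YES
anon_upload_enable=YES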
6. How would you disable downloads from the FTP server?
Answer : Disabling downloads from the FTP server can be done by modifying the parameter ‘download_enable’. If set to NO, all download requests will be denied. The default value is YES, which means downloading is enabled.
7. How to enable and permit FTP login to local users?
Answer : The parameter ‘local_enable’ is responsible for managing local user logins. In order to allow local users to log in, we must set ‘local_enable=YES’ in the vsftpd.conf file. The default value is NO, which means local user login is not permitted.
8. Is it Possible to maintain log of FTP requests and responses?
Answer : Yes! We can log FTP requests and responses. What we need to do is modify the boolean parameter ‘log_ftp_protocol’. If set to YES, it will log all requests and responses, which can be very useful for debugging. The default value of the above parameter is NO, which means these logs are not maintained by default.

Note: In order to create and maintains logs successfully, the parameter ‘xferlog_std_format’ must be enabled.

9. How do you make the server pause for a few seconds after a failed login attempt? How will you achieve this?
Answer : The number of seconds the server pauses after a failed login attempt can be set by modifying the value of the parameter ‘delay_failed_login’. The default value is 1.
10. How do you display a text message before a client connects to the FTP server? How would you get this done?
Answer : We can achieve this by setting either ‘ftpd_banner’ (an inline greeting string) or ‘banner_file’ (the path to a file whose contents are displayed). For example, set ftpd_banner=<your message> or banner_file=/path/to/banner-file in the vsftpd.conf file.

FTP is a very useful tool, vast yet very interesting, and it is useful from an interview point of view. We have taken pains to bring these questions to you and will cover more of them in future articles. Until then, stay tuned and connected to Tecmint.

Read Also: 10 Advance VsFTP Interview Questions and Answers – Part II

10 Advance VsFTP Interview Questions and Answers – Part II

We were overwhelmed by the response we received to our last article, where we presented 10 questions on the Very Secure File Transfer Protocol. Continuing the VSFTP interview series, we are here presenting yet another 10 advanced interview questions which will surely help you.

  1. 10 Basic Vsftp Interview Question/Answers – Part I

VsFTP Interview Questions – Part II

Please note that the vsftpd.conf file is used to control various aspects of the configuration described in this article. By default, vsftpd looks for the configuration file at /etc/vsftpd/vsftpd.conf. The format of the file is very simple: it contains comments and directives. Comment lines beginning with a ‘#‘ are ignored, and a directive line has the following format.

option=value

Before we start the questions and their well-explained answers, we would like to answer a question: “Who is going to attend an FTP interview?” Well, perhaps no one would attend an interview purely about FTP. But we are presenting subject-wise questions to maintain a systematic approach, so that in any interview you won’t face a question on a topic/subject covered here that you don’t know.

11. How would you block an IP which is acting malicious on your internal private VSFTP network?
Answer : We can block an IP either by adding the suspicious IP to the ‘/etc/hosts.deny’ file or, alternatively, by adding a DROP rule for the suspicious IP to the iptables INPUT chain.
Block IP using host.deny file

Open ‘/etc/hosts.deny’ file.

# vi /etc/hosts.deny

Append the following line at the bottom of the file with the IP address that you want to block access to FTP.

#
# hosts.deny    This file contains access rules which are used to
#               deny connections to network services that either use
#               the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
#               The rules in this file can also be set up in
#               /etc/hosts.allow with a 'deny' option instead.
#
#               See 'man 5 hosts_options' and 'man 5 hosts_access'
#               for information on rule syntax.
#               See 'man tcpd' for information on tcp_wrappers
#
vsftpd:172.16.16.1
Block IP using iptables rule

To block FTP access to particular IP address, add the following drop rule to iptables INPUT chain.

iptables -A RH-Firewall-1-INPUT -p tcp -s 172.16.16.1 -m state --state NEW -m tcp --dport 21 -j DROP
12. How do you allow secure SSL connections for anonymous users?
Answer : Yes! It is possible to allow anonymous users to use secure SSL connections. The value of the parameter ‘allow_anon_ssl’ should be ‘YES’ in the vsftpd.conf file. If it is set to NO, anonymous users are not allowed to use SSL connections. The default value is NO.
# Add this line to enable secured SSL connection to anonymous users.
allow_anon_ssl=YES
13. How to allow Anonymous users to create new directory and write to that directory?
Answer : We need to edit the parameter ‘anon_mkdir_write_enable’ and set its value to ‘YES’. But in order for the parameter to work, ‘write_enable’ must also be activated. The default is NO.
# Uncomment this to enable any form of FTP write command.
write_enable=YES
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
anon_mkdir_write_enable=YES
14. How to enable Anonymous downloads, but disable permission to write?
Answer : In the above scenario, we need to edit the parameter ‘anon_world_readable_only’. The parameter should be enabled and set to ‘YES’. The default value is YES.
# Add this line to give anonymous users read-only (download) permission.
anon_world_readable_only=YES
15. How to CHMOD all Anonymous uploads automatically. How would you do?
Answer : To chmod all anonymous uploads automatically, we need to edit the parameter ‘chmod_enable’ and set it to ‘YES’. Note that anonymous users never get to use SITE CHMOD. The default value is YES.
# Add this line to chmod all anonymous uploads automatically.
chmod_enable=YES
16. How to disable directory listing in a FTP server?
Answer : The parameter ‘dirlist_enable’ comes to the rescue at this point. The value of ‘dirlist_enable’ should be set to NO. The default value is YES.
# Add this line to disable directory listing.
dirlist_enable=NO
17. How do you maintain sessions for VSFTP logins? How will you do it?
Answer : The parameter ‘session_support’ needs to be modified. This parameter controls whether vsftpd attempts to maintain sessions for logins. The default value is NO.
# Add this line to maintain session logins.
session_support=YES
18. How to display time in local time zone, when listing the contents of directory?
Answer : The parameter ‘use_localtime’ needs to be modified. If enabled, vsftpd will display directory listings with times in your local time zone. The default is to display GMT, and the default value is NO.
# Add this line to display directory listings in the local time zone.
use_localtime=YES
19. How will you limit the maximum transfer rate from VSFTP server?
Answer : To limit the maximum transfer rate of the VSFTP server, we set the parameter ‘anon_max_rate’ in bytes per second for anonymous clients. The default value is 0, which means unlimited.
# Add this line to limit the ftp transfer rate (bytes per second; 0 means unlimited, 51200 is an example value of 50 KB/s).
anon_max_rate=51200
20. How will you timeout the idle session of VSFTP?
Answer : The parameter ‘idle_session_timeout’ needs to be modified here. It is the timeout in seconds, i.e. the maximum time a remote client may spend idle in a session between the client machine and the VSFTP server. As soon as the timeout triggers, the client is logged out. The default is 300.
# Add this line to set the ftp timeout session.
idle_session_timeout=300

That’s all for now. We will be coming up with next article very soon, till then stay tuned and connected and don’t forget to provide us with your valuable feedback in our comment section.

10 Useful Random Linux Interview Questions and Answers

To your slight surprise, this time we are not presenting interview questions on any specific subject, but on random topics. These questions will surely help you crack interviews, besides adding to your knowledge.

10 Random Linux Questions and Answers

1. Let’s say you maintain backups on a regular basis for the company you work for, and the backups are kept in a compressed file format. You need to examine a log that is two months old. What would you suggest, without decompressing the archive?
Answer : To check the contents of a compressed file without decompressing it, we use ‘zcat’. The zcat utility makes it possible to view the contents of a compressed file.
# zcat -f phpshell-2.4.tar.gz
2. You need to track events on your system. What will you do?
Answer : For tracking events on the system, we need a daemon called syslogd. The syslogd daemon is useful for tracking system information and saving it to specified log files.

The running ‘syslogd‘ daemon writes log messages to files such as ‘/var/log/syslog‘. The syslogd daemon is very useful in troubleshooting Linux systems.
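To watch these messages live you could, for example, run the following (the log path varies by distribution; RHEL-based systems write to /var/log/messages instead):

# tail -f /var/log/syslog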


3. How will you restrict an IP so that the restricted IP may not use the FTP server?
Answer : We can block a suspicious IP by integrating tcp_wrappers. We need to enable the parameter “tcp_wrappers=YES” in the configuration file at ‘/etc/vsftpd.conf’, and then add the suspicious IP to the ‘hosts.deny’ file at ‘/etc/hosts.deny’.
Block IP Address

Open ‘/etc/hosts.deny’ file.

# vi /etc/hosts.deny

Add the IP address that you want to block at the bottom of the file.

#
# hosts.deny    This file contains access rules which are used to
#               deny connections to network services that either use
#               the tcp_wrappers library or that have been
#               started through a tcp_wrappers-enabled xinetd.
#
#               The rules in this file can also be set up in
#               /etc/hosts.allow with a 'deny' option instead.
#
#               See 'man 5 hosts_options' and 'man 5 hosts_access'
#               for information on rule syntax.
#               See 'man tcpd' for information on tcp_wrappers
#
vsftpd:172.16.16.1
4. Tell us the difference between Telnet and SSH?
Answer : Telnet and SSH are both communication protocols used to manage remote systems. SSH is secure and relies on key exchange and encryption, whereas Telnet transmits data in plain text, which makes Telnet less secure than SSH.
5. You need to stop your X server. When you try to kill the X server, you get an error message that you cannot quit the X server. What will you do?
Answer : Killing the X server the normal way, for example with ‘/etc/init.d/gdm stop’, may not work. In that case we can use the special key combination ‘Ctrl + Alt + Backspace’, which forces the X server to restart.
6. What is the difference between command ‘ping’ and ‘ping6’?
Answer : Both commands are the same and serve the same purpose, except that ping6 is used with IPv6 addresses.
7. You want to search for all the *.tar files in your home directory and delete them all at once. How will you do it?
Answer : We need to use the find command piped to xargs and rm to delete all the “.tar” files.
# find /home/ -name '*.tar' | xargs rm -rf
8. What is the difference between locate and slocate command?
Answer : slocate (secure locate) only shows files that the user running it has permission to access, whereas locate simply searches its database (built by updatedb) and returns the matching paths.
9. You need to search for the string “Tecmint” in all the “.txt” files in the current directory. How will you do it?
Answer : We need to run the find command, combined with grep, to search for the text “Tecmint” in all the “.txt” files in the current directory, recursively.
# find . -name "*.txt" | xargs grep "Tecmint"
10. You want to send a message to all connected users as “Server is going down for maintenance”, what will you do?
Answer : This can be achieved using the wall command. The wall command sends a message to all connected users on the server.
# echo "Please save your work immediately. The server is going down for maintenance at 12:30 PM sharp." | wall

wall command

wall command

That’s all for now. Don’t Forget to give your valuable feedback in comment section below.

10 Useful Interview Questions on Linux Services and Daemons

A daemon is a computer program that runs as a background process and generally does not remain under the direct control of a user. The parent process of a daemon is, in most cases, init, but not always.

In Linux, a service is an application that runs in the background, carrying out an essential task or waiting to be called upon.

Questions on Linux Services and Daemons

Questions on Linux Services and Daemons

Generally, there is little difference between a daemon and a service. A daemon is a service, but a service may be bigger than a daemon: a daemon provides one or more services, and a service may consist of more than one daemon.

Here, in this article of the interview series, we will be covering services and daemons in Linux.

1. What is Exim Service? What is the purpose of this Service?
Answer : Exim is an open source Mail Transfer Agent (MTA) which deals with routing, receiving and delivering electronic mail. The Exim service is a great replacement for the sendmail service that comes bundled with most distributions.

2. What is NIS server? What is the purpose of NIS Server?

Answer : An NIS server provides the Network Information Service, which in turn makes it possible to log in to other systems with the same login credentials. NIS is a directory service protocol which works in a client-server model.
3. What will you prefer for a reverse proxy in Linux?
Answer : A reverse proxy is a proxy that retrieves resources from one or more servers on behalf of a client. The usual reverse proxy solutions on Linux are Squid and an Apache reverse proxy (mod_proxy). However, ‘squid’ is often preferred over an Apache reverse proxy because of its simplicity and straightforward nature.
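For illustration only, a minimal Apache reverse proxy virtual host might look like the sketch below (www.example.com and the backend address 10.0.0.20 are placeholders, and the mod_proxy and mod_proxy_http modules must be enabled):
<VirtualHost *:80>
ServerName www.example.com
# Forward all client requests to the backend server and fix up response headers
ProxyPass / http://10.0.0.20/
ProxyPassReverse / http://10.0.0.20/
</VirtualHost>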
4. You are getting the following codes (2xx, 3xx, 4xx, 5xx) in Apache at some point in time. What do these mean?

Answer : In Apache, each class of status code indicates a specific kind of outcome.

  1. 2xx : Request successful
  2. 3xx: Redirection
  3. 4xx: Client Error
  4. 5xx: Server Error
5. You are asked to stop Apache Service through its control Script. What will you do?
Answer : The Apache service is controlled using a script called apachectl. In order to stop apache using its control script we need to run.
# apachectl stop		[On Debian based Systems]
# /etc/init.d/httpd stop	[On Red Hat based Systems]
6. How is ‘apachectl restart’ different from ‘apachectl graceful’?
Answer : ‘apachectl restart’, when executed, forces Apache to restart immediately, without waiting for current requests to complete, whereas ‘apachectl graceful’ waits for the current requests to finish before restarting the service. Needless to say, ‘apachectl graceful’ is safer to execute, but ‘apachectl restart’ takes less time than ‘apachectl graceful’.
7. How will you configure NFS exports to share a directory from your local machine?
Answer : The /etc/exports file allows the creation of NFS exports on the local machine and makes them available to other hosts.
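As a rough sketch (the directory and network below are placeholders), a line in /etc/exports might look like this, followed by re-exporting the list with exportfs:
/srv/nfs_share    192.168.0.0/24(rw,sync,no_root_squash)
# exportfs -ra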
8. You are supposed to create a new Apache VirtualHost configuration for the host www.Tecmint.com that is served from /home/Tecmint/public_html/ and keeps its logs in /var/log/httpd/ by default.
Answer : You need to create an Apache virtual host container in the main Apache configuration file located at ‘/etc/httpd/conf/httpd.conf’. The following is the virtual host container for www.tecmint.com.
<VirtualHost *:80>
DocumentRoot /home/Tecmint/public_html
ServerName www.Tecmint.com
ServerAlias Tecmint.com
CustomLog /var/log/httpd/Tecmint.com.log combined
ErrorLog /var/log/httpd/Tecmint.com.error.log
</VirtualHost>
9. You are supposed to dump all the packets of http traffic in file http.out. What will you suggest?
Answer : In order to dump the HTTP traffic to a file, we need to use the ‘tcpdump’ command with the following switches.
# tcpdump tcp port 80 -s0 -w http.out
10. How will you add a service (say httpd) to start at INIT Level 3?
Answer : We need to use the ‘chkconfig’ tool to enable a service at init level 3 by changing its runlevel configuration.
# chkconfig --level 3 httpd on
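You can then verify the runlevels configured for the service with:
# chkconfig --list httpd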

That’s all for now. I’ll be here again with another interesting article very soon.

10 Useful SSH (Secure Shell) Interview Questions and Answers

SSH, which stands for Secure Shell, is a network protocol used to access remote machines in order to run commands and use network services over a network. SSH is known for its high security and cryptographic behaviour, and it is most widely used by network admins, primarily to control remote web servers.

SSH Interview Questions

10 SSH Interview Questions

Here in this interview questions series article, we are presenting 10 useful SSH (Secure Shell) questions and their answers.

1. SSH is configured on what Port Number, by default? How to change the port of SSH?
Answer : SSH is configured on port 22 by default. We can change or set a custom port number for SSH in its configuration file.

We can check the port number of SSH by running the below one-liner directly in a terminal.

# grep Port /etc/ssh/sshd_config

To change the port of SSH, we need to modify the SSH daemon configuration file, which is located at ‘/etc/ssh/sshd_config‘ on both Red Hat and Debian based systems (note that ‘/etc/ssh/ssh_config‘ is the client configuration file, not the one to edit here).

# nano /etc/ssh/sshd_config

Search for the line.

Port 22

And replace ‘22‘ with any unused port number, say ‘1080‘. Save the file and restart the SSH service for the changes to take effect.

# service sshd restart					[On Red Hat based systems]

# service ssh restart					[On Debian based systems]
2. As a security measure, you need to disable root login on an SSH server in Linux. What would you suggest?
Answer : The above action can be implemented in the configuration file. We need to change the parameter ‘PermitRootLogin’ to ‘no’ in the configuration file to disable direct root login.

To disable SSH root login, open the daemon configuration file located at ‘/etc/ssh/sshd_config‘.

# nano /etc/ssh/sshd_config

Change the parameter ‘PermitRootLogin‘ to ‘no‘ and restart the SSH service as shown above.

3. SSH or Telnet? Why?
Answer : Both SSH and Telnet are network protocols, and both are used to connect to and communicate with another machine over a network. SSH uses port 22 and Telnet uses port 23 by default. Telnet sends data in plain, non-encrypted text that anyone can read, whereas SSH sends data in encrypted form. Needless to say, SSH is more secure than Telnet, and hence SSH is preferred over Telnet.
4. Is it possible to log in to an SSH server without a password? How?
Answer : Yes! It is possible to log in to a remote SSH server without entering a password. We need to use ssh-keygen to create a public/private key pair.

Create the key pair using the command below.

$ ssh-keygen

Copy public keys to remote host using the command below.

$ ssh-copy-id -i /home/USER/.ssh/id_rsa.pub REMOTE-SERVER

Note: Replace USER with user name and REMOTE-SERVER by remote server address.

The next time we try to log in to the SSH server, it will allow the login without asking for a password, using the key pair. For more detailed instructions, read how to log in to a remote SSH server without a password.

5. How will you allow only specific users and groups to have access to the SSH server?
Answer : It is possible to allow only specific users and groups to access the SSH server.

Here again we need to edit the SSH daemon configuration file. Open the configuration file, add the users and groups at the bottom as shown below, and then restart the service.

AllowUsers Tecmint Tecmint1 Tecmint2
AllowGroups group_1 group_2 group_3
6. How do you display a welcome/warning message as soon as a user logs in to the SSH server?
Answer : In order to show a welcome/warning message as soon as a user logs in to the SSH server, we need to edit the file ‘/etc/issue’ and add the message there (for SSH, also make sure the ‘Banner’ directive in ‘/etc/ssh/sshd_config’ points to this file).
# nano /etc/issue

Add your custom message to this file. See below a screen grab that shows a custom message displayed as soon as a user logs in to the server.

SSH Login Banner

SSH Login Message

7. SSH has two protocol versions. Justify this statement.
Answer : SSH uses two protocols – Protocol 1 and Protocol 2. Protocol 1 is older than protocol 2. Protocol 1 is less secure than protocol 2 and should be disabled in the config file.

Again, we need to open the SSH configuration file and add/edit the lines as shown below.

# protocol 2,1

to

Protocol 2

Save the configuration file and restart the service.

8. Is it possible to trace unauthorized login attempts to the SSH server, with the date of intrusion along with the corresponding IP addresses?
Answer : Yes! We can find the failed login attempts in the log file located at ‘/var/log/secure’ (on Red Hat based systems; Debian based systems log them to ‘/var/log/auth.log’). We can make a filter using the grep command as shown below.
# cat /var/log/secure | grep "Failed password for"

Note: The grep command can be tweaked in any other way to produce the same result.

9. Is it possible to copy files over SSH? How?
Answer : Yes! We can copy files over SSH using the scp command, which stands for ‘Secure Copy’. scp copies files over the SSH protocol and is therefore secure by design.

A dummy SCP command in action is depicted below:

$ scp text_file_to_be_copied Your_username@Remote_Host_server:/Path/To/Remote/Directory

For more practical examples on how to copy files/folders using scp command, read the 10 SCP Commands to Copy Files/Folders in Linux.

10. Is it possible to pass input to SSH from a local file? If yes, how?
Answer : Yes! We can pass input to SSH from a local file, just as we do with input redirection in shell scripting. Here is a simple one-liner that passes input from a local file to SSH.
# ssh username@servername < local_file.txt

SSH has always been a hot topic in interviews. The above questions will surely have added to your knowledge.

That’s all for now. I’ll soon be here with another interesting article.

10 Interview Questions and Answers on Various Commands in Linux

Our last article, “10 Useful SSH Interview Questions”, was highly appreciated on various social networking sites as well as on Tecmint. This time we are presenting you with “10 Questions on various Linux commands“. These questions will get you thinking and will add to your knowledge, which will surely help you in day-to-day interaction with Linux and in interviews.

Linux Questions on Commands

Questions on Various Commands

Q1. You have a file (say virgin.txt). You want this file to be alter-proof so that no one can edit or delete this file, not even root. What will you do?
Answer : In order to make this file immune to editing and deletion, we need to use the “chattr” command, which changes the attributes of a file on a Linux system. With the immutable (+i) attribute set, not even root can modify or remove the file until the attribute is cleared.

The syntax of the chattr command, for the above purpose, is:

# chattr +i virgin.txt

Now try to remove the file as a normal user.

$ rm -r virgin.txt 

rm: remove write-protected regular empty file `virgin.txt'? Y 
rm: cannot remove `virgin.txt': Operation not permitted

Now try to remove the file as the root user.

# rm -r virgin.txt 

cannot remove `virgin.txt': Operation not permitted
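To verify the attribute, and to clear it again later (root must remove the attribute before the file can be edited or deleted), you can run:
# lsattr virgin.txt      # the 'i' flag in the output confirms the immutable attribute
# chattr -i virgin.txt   # remove the immutable attribute so the file can be modified or deleted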
Q2. If several users are using your Linux Server, how will you find the usage time of all the users, individually on your server?
Answer : To fulfill the above task, we need to execute the ‘ac’ command. The ‘ac’ command may not be installed on your Linux box by default; on a Debian based system you need the ‘acct’ package installed to run ac.
# apt-get install acct
# ac -p 

(unknown)                     14.18 
server                             235.23 
total      249.42
Q3. Which is the preferred tool to generate network statistics for your server?
Answer : mrtg, which stands for Multi Router Traffic Grapher, is one of the most commonly used tools to monitor network statistics. mrtg is a widely recommended and very powerful FOSS tool. It may not be installed on your Linux box by default, and you may need to install it manually from the repositories.
# apt-get install mrtg
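Once installed, mrtg is usually configured with its helper tools; a rough sketch might look like this (the SNMP community 'public' and the router address 192.168.0.1 are placeholders for your own values):
# cfgmaker --global 'WorkDir: /var/www/mrtg' --output /etc/mrtg/mrtg.cfg public@192.168.0.1
# indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg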
Q4. Is it possible to send queries to the BIOS from the Linux command line?
Answer : Yes! It is possible to query the BIOS directly from the command line. For this you need a tool called “biosdecode”. On my Debian Wheezy (7.4) box, it is already installed.
# biosdecode 

# biosdecode 2.11 

ACPI 2.0 present. 
	OEM Identifier: LENOVO 
	RSD Table 32-bit Address: 0xDDFCA028 
	XSD Table 64-bit Address: 0x00000000DDFCA078 
SMBIOS 2.7 present. 
	Structure Table Length: 3446 bytes 
	Structure Table Address: 0x000ED9D0 
	Number Of Structures: 89 
	Maximum Structure Size: 184 bytes 
PNP BIOS 1.0 present. 
	Event Notification: Not Supported 
	Real Mode 16-bit Code Address: F000:BD76 
	Real Mode 16-bit Data Address: F000:0000 
	16-bit Protected Mode Code Address: 0x000FBD9E 
	16-bit Protected Mode Data Address: 0x000F0000 
PCI Interrupt Routing 1.0 present. 
	Router ID: 00:1f.0 
	Exclusive IRQs: None 
	Compatible Router: 8086:27b8 
	Slot Entry 1: ID 00:1f, on-board 
	...
	Slot Entry 15: ID 02:0c, slot number 2
Q5. Most Linux servers are headless, i.e., they run in command-line mode only and no GUI is installed. How will you find the hardware description and configuration of your box?
Answer : It is easy to find the hardware description and configuration of a headless Linux server using the “dmidecode” command, which is the DMI table decoder.
# dmidecode

The output of dmidecode is extensive, so it is a good idea to redirect it to a file.

# dmidecode > /path/to/text/file/text_file.txt
Q6. You need to know all the libraries used and needed by a binary, say ‘/bin/echo’. How will you get this information?
Answer : Use the ‘ldd’ command, which prints the shared library dependencies of a binary in Linux.
$ ldd /bin/echo 

linux-gate.so.1 =>  (0xb76f1000) 
libc.so.6 => /lib/i386-linux-gnu/i686/cmov/libc.so.6 (0xb7575000) 
/lib/ld-linux.so.2 (0xb76f2000)
Q7. You are working for the country’s army. You have a file (say “topsecret.txt”) which contains confidential national security information, nuclear missile data, etc. What will be your preferred method to delete this file?
Answer : The file, being so confidential, needs a special deletion technique so that its contents cannot be recovered by any means. To implement this in practice we need the “shred” utility. The shred tool overwrites a file repeatedly, several times, making recovery of its contents practically impossible.
# shred -n 15 -z topsecret.txt

shred – overwrite a file to hide its contents, and optionally delete it.

  1. -n – overwrites the file n times.
  2. -z – adds a final overwrite with zeros to hide shredding.

Note: The above command overwrites the file 15 times before the final overwrite with zeros, to hide the shredding. Add the -u switch if you also want the file removed after it has been overwritten.

Q8. Is it possible to mount an NTFS partition on Linux?
Answer : Yes! We can mount an NTFS partition/disk on a Linux system using the ‘mount.ntfs’ helper, which is provided by the ‘ntfs-3g’ driver.
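For instance, assuming the NTFS partition is /dev/sdb1 (a placeholder for your own device), the mount could look like this:
# mkdir -p /mnt/ntfs
# mount -t ntfs-3g /dev/sdb1 /mnt/ntfs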

For more information, read the article on how to mount an NTFS partition on Linux.

Q9. What, and where, do you need to edit so that the default desktop at login will be KDE instead of the current GNOME?
Answer : We need to edit the file ‘/etc/sysconfig/desktop’ and add/edit the lines below to load KDE by default instead of GNOME.
DESKTOP="KDE"
DISPLAYMANAGER="KDE"

Save the file with the above content. The next time the machine boots, it will automatically load KDE as the default desktop.

Q10. What does an initrd image file refer to?
Answer : An initrd is the Initial RAM Disk image that is loaded into memory after the Power On Self Test (POST), early in the boot process. The initrd contains a temporary root file system that is used until the real root file system is mounted.

That’s all for now. I’ll be here again with another interesting topic, worth knowing.

10 Useful ‘Interview Questions and Answers’ on Linux Shell Scripting

Greetings of the day. The vastness of Linux makes it possible to come up with a unique post every time.

Questions on Shell Scripting

Questions on Shell Scripting

We have lots of tutorials on Shell Scripting language and Interview Questions for readers of all kind, here are the links to those articles.

  1. Shell Scripting Series
  2. Interview Question and Answer Series

Adding to the shell scripting posts here, in this article we will be going through questions related to Linux Shell from interview point of view.

1. How will you abort a shell script before it has finished executing?
Answer : We need to use the ‘exit’ command to handle the situation described above. When ‘exit’ is made to return any value other than 0 (zero), the script terminates with an error status; the value 0 (zero) in Unix shell scripting represents successful execution. Hence, putting ‘exit -1’ (without quotes) at the point where the script should stop will abort the script there.

For example, create a following shell script as ‘anything.sh‘.

#!/bin/bash
echo "Hello"
exit -1
echo "bye"

Save the file and execute it.

# sh anything.sh

Hello
anything.sh: 3: exit: Illegal number: -1

From the output above, it is clear that execution went well up to the ‘exit -1’ command; the script aborted at that point (the final echo was never executed), and the shell also complained that ‘-1’ is not a valid exit number.

2. How to remove the headers from a file using command in Linux?
Answer : The ‘sed’ command comes to the rescue here when we need to delete certain lines of a file.

Here is the exact command to remove the header (that is, the first line) of a file.

# sed '1 d' file.txt

The only problem with the above command is that it prints the file, minus the first line, to standard output. In order to save the output to a file, we need to use the redirection operator, which redirects the output to a file.

# sed '1 d' file.txt > new_file.txt

Alternatively, the built-in ‘-i‘ switch of the sed command can perform this operation in place, without a redirection operator.

# sed -i '1 d' file.txt
3. How will you check the length of a line from a text file?
Answer : Again, the ‘sed’ command is used, together with ‘wc’, to find or check the length of a line in a text file.

Use ‘sed -n 'n p' file.txt‘, where ‘n‘ represents the line number and ‘p‘ prints out the pattern space (to standard output); ‘p‘ is usually only used in conjunction with the -n command-line option. So, how do we get the length count? Obviously, we need to pipe the output to the ‘wc‘ command.

# sed -n 'n p' file.txt | wc -c

To get the length of line number ‘5’ in the text file ‘tecmint.txt‘, we need to run.

# sed -n '5 p' tecmint.txt | wc -c
4. Is it possible to view all the non-printable characters from a text file on Linux System? How will you achieve this?
Answer : Yes! It is very much possible to view all the non-printable characters of a text file in Linux. To achieve this, we take the help of the ‘vi’ editor.

How to show non-printable characters in ‘vi‘ editor?

  1. Open the file in the vi editor.
  2. Switch to command-line mode by pressing [Esc] followed by ‘:’.
  3. Finally, type the ‘set list’ command and press Enter.

Note: This way we can see all the non-printable characters from a text file including ctrl+m (^M).

5. You are the team leader of a group of staff working for company xyz. The company asks you to create a directory ‘dir_xyz’ such that any member of the group can create or access a file under it, but no one can delete a file except the one who created it. What will you do?
Answer : An interesting scenario to work on. In the above scenario we need to implement the steps below, which are as easy as a cakewalk.
# mkdir dir_xyz
# chmod g+wx dir_xyz
# chmod +t dir_xyz

The first command creates the directory (dir_xyz). The second command gives the group (g) ‘write‘ and ‘execute‘ permission on it, and in the last command the ‘+t‘ at the end of the permissions is the ‘sticky bit‘. It replaces the ‘x‘ in the listing and indicates that, in this directory, files can only be deleted by their owners, the owner of the directory, or the root superuser.

6. Can you tell me the various stages that a Linux process passes through?
Answer : A Linux process normally goes through four major stages in its processing life.

Here are the 4 stages of Linux process.

  1. Waiting: Linux Process waiting for a resource.
  2. Running : A Linux process is currently being executed.
  3. Stopped : A Linux Process is stopped after successful execution or after receiving kill signal.
  4. Zombie : A Process is said to be ‘Zombie’ if it has stopped but still active in process table.
7. What is the use of cut command in Linux?
Answer : ‘cut’ is a very useful Linux command which proves helpful when we need to extract specific parts (characters or fields) of a file and print them on standard output, which makes manipulation easier when the file or its fields are too large to handle as a whole.

For example, extract the first 10 characters of each line of the text file ‘txt_tecmint‘.

# cut -c1-10 txt_tecmint

To extract the 2nd, 5th and 7th fields of the same text file, using ‘;’ as the field delimiter.

# cut -d';' -f2,5,7 txt_tecmint
8. What is the difference between commands ‘cmp’ and ‘diff’?
Answer : The commands ‘cmp’ and ‘diff’ achieve similar things, but with a different mindset.

The ‘diff‘ command reports the changes one should make so that both files look the same, whereas the ‘cmp‘ command compares the two files byte by byte and reports the first mismatch.

9. Is it possible to substitute ‘ls’ command with ‘echo’ command?
Answer : Yes! The ‘ls’ command can be substituted with the ‘echo’ command. The ‘ls’ command lists the contents of a directory; as a replacement we can use ‘echo *’ (without quotes), and both commands list the same file names.
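A quick way to see this for yourself:
$ ls        # lists the files in the current directory
$ echo *    # the shell expands * to the names of all non-hidden files in the current directory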
10. You might have heard about inodes. Can you describe an inode briefly?
Answer : An ‘inode’ is a data structure used for file identification on Linux. Each file on a Unix system has its own inode and a unique inode number.

That’s all for now. We will be coming up with another interesting and knowledgeable Interview questions, in the next article.

Practical Interview Questions and Answers on Linux Shell Scripting

RHCE (Red Hat Certified Engineer)

RHCE Series: How to Setup and Test Static Network Routing – Part 1

RHCE (Red Hat Certified Engineer) is a certification from Red Hat, the company that provides an open source operating system and software to the enterprise community; it also provides training, support and consulting services for companies.

RHCE Exam Preparation Guide

RHCE Exam Preparation Guide

The RHCE (Red Hat Certified Engineer) certification is a performance-based exam (codename EX300) that tests the additional skills, knowledge, and abilities required of a senior system administrator responsible for Red Hat Enterprise Linux (RHEL) systems.

Important: Red Hat Certified System Administrator (RHCSA) certification is required to earn RHCE certification.

Following are the exam objectives, based on the Red Hat Enterprise Linux 7 version of the exam, which we are going to cover in this RHCE series:

Part 1: How to Setup and Test Static Routing in RHEL 7

To view fees and register for an exam in your country, check the RHCE Certification page.

In this Part 1 of the RHCE series and the next, we will present basic, yet typical, cases where the principles of static routing, packet filtering, and network address translation come into play.

Setup Static Network Routing in RHEL

RHCE: Setup and Test Network Static Routing – Part 1

Please note that we will not cover them in depth, but rather organize these contents in such a way that will be helpful to take the first steps and build from there.

Static Routing in Red Hat Enterprise Linux 7

One of the wonders of modern networking is the vast availability of devices that can connect groups of computers, whether in relatively small numbers and confined to a single room or several machines in the same building, city, country, or across continents.

However, in order to effectively accomplish this in any situation, network packets need to be routed, or in other words, the path they follow from source to destination must be ruled somehow.

Static routing is the process of specifying a route for network packets other than the default, which is provided by a network device known as the default gateway. Unless specified otherwise through static routing, network packets are directed to the default gateway; with static routing, other paths are defined based on predefined criteria, such as the packet destination.

Let us define the following scenario for this tutorial. We have a Red Hat Enterprise Linux 7 box connecting to router #1 [192.168.0.1] to access the Internet and machines in 192.168.0.0/24.

A second router (router #2) has two network interface cards: enp0s3 is also connected to router #1 to access the Internet and to communicate with the RHEL 7 box and other machines in the same network, whereas the other (enp0s8) is used to grant access to the 10.0.0.0/24 network where internal services reside, such as a web and / or database server.

This scenario is illustrated in the diagram below:

Static Routing Network Diagram

Static Routing Network Diagram

In this article we will focus exclusively on setting up the routing table on our RHEL 7 box to make sure that it can both access the Internet through router #1 and the internal network via router #2.

In RHEL 7, you will use the ip command to configure and show devices and routing using the command line. These changes can take effect immediately on a running system but since they are not persistent across reboots, we will use ifcfg-enp0sX and route-enp0sX files inside /etc/sysconfig/network-scripts to save our configuration permanently.

To begin, let’s print our current routing table:

# ip route show

Check Routing Table in Linux

Check Current Routing Table

From the output above, we can see the following facts:

  1. The default gateway’s IP address is 192.168.0.1 and can be accessed via the enp0s3 NIC.
  2. When the system booted up, it enabled the zeroconf route to 169.254.0.0/16 (just in case). In a few words, if a machine is set to obtain an IP address through DHCP but fails to do so for some reason, it is automatically assigned an address in this network. The bottom line is that this route will allow us to communicate, also via enp0s3, with other machines that have failed to obtain an IP address from a DHCP server.
  3. Last, but not least, we can communicate with other boxes inside the 192.168.0.0/24 network through enp0s3, whose IP address is 192.168.0.18.

These are the typical tasks that you would have to perform in such a setting. Unless specified otherwise, the following tasks should be performed in router #2:

Make sure all NICs have been properly installed:

# ip link show

If one of them is down, bring it up:

# ip link set dev enp0s8 up

and assign an IP address in the 10.0.0.0/24 network to it:

# ip addr add 10.0.0.17 dev enp0s8

Oops! We made a mistake in the IP address. We will have to remove the one we assigned earlier and then add the right one (10.0.0.18):

# ip addr del 10.0.0.17 dev enp0s8
# ip addr add 10.0.0.18 dev enp0s8

Now, please note that you can only add a route to a destination network through a gateway that is itself already reachable. For that reason, we need to assign an IP address within the 192.168.0.0/24 range to enp0s3 so that our RHEL 7 box can communicate with it:

# ip addr add 192.168.0.19 dev enp0s3

Finally, we will need to enable packet forwarding:

# echo "1" > /proc/sys/net/ipv4/ip_forward

and stop / disable (just for the time being – until we cover packet filtering in the next article) the firewall:

# systemctl stop firewalld
# systemctl disable firewalld

Back in our RHEL 7 box (192.168.0.18), let’s configure a route to 10.0.0.0/24 through 192.168.0.19 (enp0s3 in router #2):

# ip route add 10.0.0.0/24 via 192.168.0.19

After that, the routing table looks as follows:

# ip route show

Show Network Routing Table

Confirm Network Routing Table

Likewise, add the corresponding route in the machine(s) you’re trying to reach in 10.0.0.0/24:

# ip route add 192.168.0.0/24 via 10.0.0.18

You can test for basic connectivity using ping:

In the RHEL 7 box, run

# ping -c 4 10.0.0.20

where 10.0.0.20 is the IP address of a web server in the 10.0.0.0/24 network.

In the web server (10.0.0.20), run

# ping -c 4 192.168.0.18

where 192.168.0.18 is, as you will recall, the IP address of our RHEL 7 machine.

Alternatively, we can use tcpdump (you may need to install it with yum install tcpdump) to check the 2-way communication over TCP between our RHEL 7 box and the web server at 10.0.0.20.

To do so, let’s start the logging in the first machine with:

# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20

and from another terminal in the same system let’s telnet to port 80 in the web server (assuming Apache is listening on that port; otherwise, indicate the right port in the following command):

# telnet 10.0.0.20 80

The tcpdump log should look as follows:

Check Network Communication between Servers

Check Network Communication between Servers

Where the connection has been properly initialized, as we can tell by looking at the 2-way communication between our RHEL 7 box (192.168.0.18) and the web server (10.0.0.20).

Please remember that these changes will go away when you restart the system. If you want to make them persistent, you will need to edit (or create, if they don’t already exist) the following files, in the same systems where we performed the above commands.

Though not strictly necessary for our test case, you should know that /etc/sysconfig/network contains system-wide network parameters. A typical /etc/sysconfig/network looks as follows:

# Enable networking on this system?
NETWORKING=yes
# Hostname. Should match the value in /etc/hostname
HOSTNAME=yourhostnamehere
# Default gateway
GATEWAY=XXX.XXX.XXX.XXX
# Device used to connect to default gateway. Replace X with the appropriate number.
GATEWAYDEV=enp0sX

When it comes to setting specific variables and values for each NIC (as we did for router #2), you will have to edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 and /etc/sysconfig/network-scripts/ifcfg-enp0s8.

Following our case,

TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.19
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
NAME=enp0s3
ONBOOT=yes

and

TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.18
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
NAME=enp0s8
ONBOOT=yes

for enp0s3 and enp0s8, respectively.

As for routing in our client machine (192.168.0.18), we will need to edit /etc/sysconfig/network-scripts/route-enp0s3:

10.0.0.0/24 via 192.168.0.19 dev enp0s3

Now reboot your system and you should see that route in your table.

Summary

In this article we have covered the essentials of static routing in Red Hat Enterprise Linux 7. Although scenarios may vary, the case presented here illustrates the required principles and the procedures to perform this task. Before wrapping up, I would like to suggest you take a look at Chapter 4 of the Securing and Optimizing Linux section on The Linux Documentation Project site for further details on the topics covered here.

Free ebook on Securing & Optimizing Linux: The Hacking Solution (v.3.0) – This 800+ page eBook contains a comprehensive collection of Linux security tips and shows how to use them safely and easily to configure Linux-based applications and services.

Linux Security and Optimization Book

Linux Security and Optimization Book


In the next article we will talk about packet filtering and network address translation to sum up the networking basic skills needed for the RHCE certification.

As always, we look forward to hearing from you, so feel free to leave your questions, comments, and suggestions using the form below.

How to Perform Packet Filtering, Network Address Translation and Set Kernel Runtime Parameters – Part 2

As promised in Part 1 (“Setup Static Network Routing”), in this article (Part 2 of RHCE series) we will begin by introducing the principles of packet filtering and network address translation (NAT) in Red Hat Enterprise Linux 7, before diving into setting runtime kernel parameters to modify the behavior of a running kernel if certain conditions change or needs arise.

Network Packet Filtering in RHEL

RHCE: Network Packet Filtering – Part 2

Network Packet Filtering in RHEL 7

When we talk about packet filtering, we refer to a process performed by a firewall in which it reads the header of each data packet that attempts to pass through it. Then, it filters the packet by taking the required action based on rules that have been previously defined by the system administrator.

As you probably know, beginning with RHEL 7, the default service that manages firewall rules is firewalld. Like iptables, it talks to the netfilter module in the Linux kernel in order to examine and manipulate network packets. Unlike iptables, updates can take effect immediately without interrupting active connections – you don’t even have to restart the service.

Another advantage of firewalld is that it allows us to define rules based on pre-configured service names (more on that in a minute).
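
For instance, here is a quick sketch of working with service names (run on the host whose firewall you are configuring):

# firewall-cmd --get-services                   # list the pre-configured service names firewalld knows about
# firewall-cmd --permanent --add-service=http   # allow the http service in the default zone, persistently
# firewall-cmd --reload                         # reload the permanent configuration into the running firewall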

In Part 1, we used the following scenario:

Static Routing Network Diagram

Static Routing Network Diagram

However, you will recall that we disabled the firewall on router #2 to simplify the example since we had not covered packet filtering yet. Let’s see now how we can enable incoming packets destined for a specific service or port in the destination.

First, let’s add a permanent rule to allow inbound traffic in enp0s3 (192.168.0.19) to enp0s8 (10.0.0.18):

# firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT

The above command will save the rule to /etc/firewalld/direct.xml:

# cat /etc/firewalld/direct.xml

Check Firewalld Saved Rules in CentOS 7

Check Firewalld Saved Rules

Then enable the rule for it to take effect immediately:

# firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i enp0s3 -o enp0s8 -j ACCEPT

Now you can telnet to the web server from the RHEL 7 box and run tcpdump again to monitor the TCP traffic between the two machines, this time with the firewall in router #2 enabled.

# telnet 10.0.0.20 80
# tcpdump -qnnvvv -i enp0s3 host 10.0.0.20

What if you want to only allow incoming connections to the web server (port 80) from 192.168.0.18 and block connections from other sources in the 192.168.0.0/24 network?

In the web server’s firewall, add the following rules:

# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.18/32" service name="http" accept' --permanent
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop'
# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.0.0/24" service name="http" drop' --permanent

Now you can make HTTP requests to the web server, from 192.168.0.18 and from some other machine in 192.168.0.0/24. In the first case the connection should complete successfully, whereas in the second it will eventually timeout.

To do so, any of the following commands will do the trick:

# telnet 10.0.0.20 80
# wget 10.0.0.20

I strongly advise you to check out the Firewalld Rich Language documentation in the Fedora Project Wiki for further details on rich rules.

Network Address Translation in RHEL 7

Network Address Translation (NAT) is the process whereby a group of computers (it can also be just one of them) in a private network share a single, unique public IP address. As a result, they are still uniquely identified by their own private IP addresses inside the network, but to the outside they all “seem” to be the same.

In addition, NAT makes it possible for computers inside the network to send requests to outside resources (like the Internet) and have the corresponding responses sent back only to the source system.

Let’s now consider the following scenario:

Network Address Translation in RHEL

Network Address Translation

In router #2, we will move the enp0s3 interface to the external zone, and enp0s8 to the internal zone, where masquerading, or NAT, is enabled by default:

# firewall-cmd --list-all --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external
# firewall-cmd --change-interface=enp0s3 --zone=external --permanent
# firewall-cmd --change-interface=enp0s8 --zone=internal
# firewall-cmd --change-interface=enp0s8 --zone=internal --permanent
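
You can confirm that masquerading is active in the external zone with the following query, which should return yes:

# firewall-cmd --zone=external --query-masquerade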

For our current setup, the internal zone – along with everything that is enabled in it – will be the default zone:

# firewall-cmd --set-default-zone=internal

Next, let’s reload firewall rules and keep state information:

# firewall-cmd --reload

Finally, let’s add router #2 as default gateway in the web server:

# ip route add default via 10.0.0.18

You can now verify that you can ping router #1 and an external site (tecmint.com, for example) from the web server:

# ping -c 2 192.168.0.1
# ping -c 2 tecmint.com

Verify Network Routing

Verify Network Routing

Setting Kernel Runtime Parameters in RHEL 7

In Linux, you are allowed to change, enable, and disable the kernel runtime parameters, and RHEL is no exception. The /proc/sys interface (sysctl) lets you set runtime parameters on-the-fly to modify the system’s behavior without much hassle when operating conditions change.

To do so, the echo shell built-in is used to write to files inside /proc/sys/<category>, where <category> is most likely one of the following directories:

  1. dev: parameters for specific devices connected to the machine.
  2. fs: filesystem configuration (quotas and inodes, for example).
  3. kernel: kernel-specific configuration.
  4. net: network configuration.
  5. vm: use of the kernel’s virtual memory.

To display the list of all the currently available values, run

# sysctl -a | less

In Part 1, we changed the value of the net.ipv4.ip_forward parameter by doing

# echo 1 > /proc/sys/net/ipv4/ip_forward

in order to allow a Linux machine to act as router.

Another runtime parameter that you may want to set is kernel.sysrq, which enables the Sysrq key in your keyboard to instruct the system to perform gracefully some low-level functions, such as rebooting the system if it has frozen for some reason:

# echo 1 > /proc/sys/kernel/sysrq

To display the value of a specific parameter, use sysctl as follows:

# sysctl <parameter.name>

For example,

# sysctl net.ipv4.ip_forward
# sysctl kernel.sysrq

Some parameters, such as the ones mentioned above, require only one value, whereas others (for example, fs.inode-state) require multiple values:

Check Kernel Parameters in Linux

Check Kernel Parameters

In either case, you need to read the kernel’s documentation before making any changes.

Please note that these settings will go away when the system is rebooted. To make these changes permanent, we will need to add .conf files inside the /etc/sysctl.d as follows:

# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/10-forward.conf

(where the number 10 indicates the order of processing relative to other files in the same directory).

and enable the changes with

# sysctl -p /etc/sysctl.d/10-forward.conf

Summary

In this tutorial we have explained the basics of packet filtering, network address translation, and setting kernel runtime parameters on a running system and persistently across reboots. I hope you have found this information useful, and as always, we look forward to hearing from you!
Don’t hesitate to share with us your questions, comments, or suggestions using the form below.

How to Produce and Deliver System Activity Reports Using Linux Toolsets – Part 3

As a system engineer, you will often need to produce reports that show the utilization of your system’s resources in order to make sure that: 1) they are being utilized optimally, 2) prevent bottlenecks, and 3) ensure scalability, among other reasons.

Monitor Linux Performance Activity Reports

RHCE: Monitor Linux Performance Activity Reports – Part 3

Besides the well-known native Linux tools that are used to check disk, memory, and CPU usage – to name a few examples, Red Hat Enterprise Linux 7 provides two additional toolsets to enhance the data you can collect for your reports: sysstat and dstat.

In this article we will describe both, but let’s first start by reviewing the usage of the classic tools.

Native Linux Tools

With df, you will be able to report disk space and inode usage by filesystem. You need to monitor both because a lack of space will prevent you from being able to save further files (and may even cause the system to crash), just like running out of inodes will mean you can’t link further files with their corresponding data structures, thus producing the same effect: you won’t be able to save those files to disk.

# df -h 		[Display output in human-readable form]
# df -h --total         [Produce a grand total]

Check Linux Total Disk Usage

Check Linux Total Disk Usage

# df -i 		[Show inode count by filesystem]
# df -i --total 	[Produce a grand total]

Check Linux Total inode Numbers

Check Linux Total inode Numbers

With du, you can estimate file space usage by either file, directory, or filesystem.

For example, let’s see how much space is used by the /home directory, which includes all of the user’s personal files. The first command will return the overall space currently used by the entire /home directory, whereas the second will also display a disaggregated list by sub-directory as well:

# du -sch /home
# du -sch /home/*

Check Linux Directory Disk Size

Check Linux Directory Disk Size

Don’t Miss:

  1. 12 ‘df’ Command Examples to Check Linux Disk Space Usage
  2. 10 ‘du’ Command Examples to Find Disk Usage of Files/Directories

Another utility that can’t be missing from your toolset is vmstat. It will allow you to see at a quick glance information about processes, CPU and memory usage, disk activity, and more.

If run without arguments, vmstat will return averages since the last reboot. While you may use this form of the command once in a while, it will be more helpful to take a certain amount of system utilization samples, one after another, with a defined time separation between samples.

For example,

# vmstat 5 10

will return 10 samples taken every 5 seconds:

Check Linux System Performance

Check Linux System Performance

As you can see in the above picture, the output of vmstat is divided into columns: procs (processes), memory, swap, io, system, and cpu. The meaning of each field can be found in the FIELD DESCRIPTION sections in the man page of vmstat.

Where can vmstat come in handy? Let’s examine the behavior of the system before and during a yum update:

# vmstat -a 1 5

Vmstat Linux Performance Monitoring

Vmstat Linux Performance Monitoring

Please note that as files are being modified on disk, the amount of active memory increases and so does the number of blocks written to disk (bo) and the CPU time that is dedicated to user processes (us).

Or during the process of saving a large file directly to disk (forced by the dsync flag):

# vmstat -a 1 5
# dd if=/dev/zero of=dummy.out bs=1M count=1000 oflag=dsync

VmStat Linux Disk Performance Monitoring

VmStat Linux Disk Performance Monitoring

In this case, we can see a yet larger number of blocks being written to disk (bo), which was to be expected, but also an increase of the amount of CPU time that it has to wait for I/O operations to complete before processing tasks (wa).

Don’t Miss: Vmstat – Linux Performance Monitoring

Other Linux Tools

As mentioned in the introduction of this chapter, there are other tools that you can use to check the system status and utilization (they are not only provided by Red Hat but also by other major distributions from their officially supported repositories).

The sysstat package contains the following utilities:

  1. sar (collect, report, or save system activity information).
  2. sadf (display data collected by sar in multiple formats).
  3. mpstat (report processors related statistics).
  4. iostat (report CPU statistics and I/O statistics for devices and partitions).
  5. pidstat (report statistics for Linux tasks).
  6. nfsiostat (report input/output statistics for NFS).
  7. cifsiostat (report CIFS statistics) and
  8. sa1 (collect and store binary data in the system activity daily data file).
  9. sa2 (write a daily report in the /var/log/sa directory) tools.

whereas dstat adds some extra features to the functionality provided by those tools, along with more counters and flexibility. You can find an overall description of each tool by running yum info sysstat or yum info dstat, respectively, or checking the individual man pages after installation.

To install both packages:

# yum update && yum install sysstat dstat

The main configuration file for sysstat is /etc/sysconfig/sysstat. You will find the following parameters in that file:

# How long to keep log files (in days).
# If value is greater than 28, then log files are kept in
# multiple directories, one for each month.
HISTORY=28
# Compress (using gzip or bzip2) sa and sar files older than (in days):
COMPRESSAFTER=31
# Parameters for the system activity data collector (see sadc manual page)
# which are used for the generation of log files.
SADC_OPTIONS="-S DISK"
# Compression program to use.
ZIP="bzip2"

When sysstat is installed, two cron jobs are added and enabled in /etc/cron.d/sysstat. The first job runs the system activity accounting tool every 10 minutes and stores the reports in /var/log/sa/saXX where XX is the day of the month.

Thus, /var/log/sa/sa05 will contain all the system activity reports from the 5th of the month. This assumes that we are using the default value in the HISTORY variable in the configuration file above:

*/10 * * * * root /usr/lib64/sa/sa1 1 1

The second job generates a daily summary of process accounting at 11:53 pm every day and stores it in /var/log/sa/sarXX files, where XX has the same meaning as in the previous example:

53 23 * * * root /usr/lib64/sa/sa2 -A
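
Besides these scheduled reports, you can query the collected data (or take live samples) with sar directly; for example (sa05 assumes data was collected on the 5th of the month):

# sar -u 2 5                    # CPU utilization, 5 samples taken every 2 seconds
# sar -r -f /var/log/sa/sa05    # memory statistics read from the data file for the 5th of the month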

For example, you may want to output system statistics from 9:30 am through 5:30 pm of the sixth of the month to a .csv file that can easily be viewed using LibreOffice Calc or Microsoft Excel (this approach will also allow you to create charts or graphs):

# sadf -s 09:30:00 -e 17:30:00 -dh /var/log/sa/sa06 -- | sed 's/;/,/g' > system_stats20150806.csv

You could alternatively use the -j flag instead of -d in the sadf command above to output the system stats in JSON format, which could be useful if you need to consume the data in a web application, for example.

Linux System Statistics

Linux System Statistics

Finally, let’s see what dstat has to offer. Please note that if run without arguments, dstat assumes -cdngy by default (short for CPU, disk, network, memory pages, and system stats, respectively), and adds one line every second (execution can be interrupted anytime with Ctrl + C):

# dstat

Linux Disk Statistics Monitoring

Linux Disk Statistics Monitoring

To output the stats to a .csv file, use the --output flag followed by a file name, as in the example below. Let’s see how this looks on LibreOffice Calc:
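
For example, a command along these lines (the file name is just a placeholder) takes 10 samples at 5-second intervals and writes them to a CSV file while still printing to the screen:

# dstat --output dstat_report.csv 5 10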

Monitor Linux Statistics Output

Monitor Linux Statistics Output

I strongly advise you to check out the man page of dstat along with the man page of sysstat in PDF format for your reading convenience. You will find several other options that will help you create custom and detailed system activity reports.

Don’t Miss: Sysstat – Linux Usage Activity Monitoring Tool

Summary

In this guide we have explained how to use both native Linux tools and specific utilities provided with RHEL 7 in order to produce reports on system utilization. At one point or another, you will come to rely on these reports as best friends.

You will probably have used other tools that we have not covered in this tutorial. If so, feel free to share them with the rest of the community along with any other suggestions / questions / comments that you may have- using the form below.

We look forward to hearing from you.

Using Shell Scripting to Automate Linux System Maintenance Tasks – Part 4

Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why:

Automate Linux System Maintenance Tasks

RHCE Series: Automate Linux System Maintenance Tasks – Part 4

if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using,

for example, the tools reviewed in Part 3 – Monitor System Activity Reports Using Linux Toolsets of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial.

What is a shell script?

In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.

By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to this Wikipedia article.

To find out more about the enormous set of features provided by this shell, you may want to check out its man page, which can be downloaded in PDF format (Bash Commands). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through the A Guide from Newbies to SysAdmin article on Tecmint.com before proceeding). Now let’s get started.

Writing a script to display system information

For our convenience, let’s create a directory to store our shell scripts:

# mkdir scripts
# cd scripts

And open a new text file named system_info.sh with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:

#!/bin/bash

# Sample script written for Part 4 of the RHCE series
# This script will return the following set of system information:
# -Hostname information:
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
hostnamectl
echo ""
# -File system disk space usage:
echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
df -h
echo ""
# -Free and used memory in the system:
echo -e "\e[31;43m ***** FREE AND USED MEMORY *****\e[0m"
free
echo ""
# -System uptime and load:
echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
uptime
echo ""
# -Logged-in users:
echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
who
echo ""
# -Top 5 processes as far as memory usage is concerned
echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
echo ""
echo -e "\e[1;32mDone.\e[0m"

Next, give the script execute permissions:

# chmod +x system_info.sh

and run it:

# ./system_info.sh

Note that the headers of each section are shown in color for better visualization:

Server Monitoring Shell Script

Server Monitoring Shell Script

That functionality is provided by this command:

echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"

Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the Arch Linux Wiki) and <YOUR TEXT HERE> is the string that you want to show in color.

Automating Tasks

The tasks that you may need to automate may vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting:

1) update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit.

Let’s create a file named auto_tasks.sh in our scripts directory with the following content:

#!/bin/bash

# Sample script to automate tasks:
# -Update local file database:
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
updatedb
if [ $? == 0 ]; then
        echo "The local file database was updated correctly."
else
        echo "The local file database was not updated correctly."
fi
echo ""

# -Find and / or delete files with 777 permissions.
echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
# Enable either option (comment out the other line), but not both.
# Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
#find -type f -perm 0777 -delete
# Option 2: Ask for confirmation before deleting files. More portable across systems.
find -type f -perm 0777 -exec rm -i {} +
echo ""
# -Alert when file system usage surpasses a defined limit 
echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
THRESHOLD=30
while read line; do
        # This variable stores the file system path as a string
        FILESYSTEM=$(echo $line | awk '{print $1}')
        # This variable stores the use percentage (XX%)
        PERCENTAGE=$(echo $line | awk '{print $5}')
        # Use percentage without the % sign.
        USAGE=${PERCENTAGE%?}
        if [ $USAGE -gt $THRESHOLD ]; then
                echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
        fi
done < <(df -h --total | grep -vi filesystem)

Please note that there is a space between the two < signs in the last line of the script.

Shell Script to Find 777 Permissions

Shell Script to Find 777 Permissions

Using Cron

To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis and send the results to a predefined list of recipients via email, or save them to a file that can be viewed using a web browser.

The following script (filesystem_usage.sh) will run the well-known df -h command, format the output into an HTML table and save it in the report.html file:

#!/bin/bash
# Sample script to demonstrate the creation of an HTML report using shell scripting
# Web directory
WEB_DIR=/var/www/html
# A little CSS and table layout to make the report look a little nicer
echo "<HTML>
<HEAD>
<style>
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
table
{
border-collapse:collapse;
}
table, td, th
{
border:1px solid black;
}
</style>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
</HEAD>
<BODY>" > $WEB_DIR/report.html
# View hostname and insert it at the top of the html body
HOST=$(hostname)
echo "Filesystem usage for host <strong>$HOST</strong><br>
Last updated: <strong>$(date)</strong><br><br>
<table border='1'>
<tr><th class='titulo'>Filesystem</th>
<th class='titulo'>Size</th>
<th class='titulo'>Use %</th>
</tr>" >> $WEB_DIR/report.html
# Read the output of df -h line by line
while read line; do
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
echo "</td></tr>" >> $WEB_DIR/report.html
done < <(df -h | grep -vi filesystem)
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html

In our RHEL 7 server (192.168.0.18), this looks as follows:

Server Monitoring Report

Server Monitoring Report

You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:

30 13 * * * /root/scripts/filesystem_usage.sh
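
If you also want the report emailed to you, as mentioned earlier, you can set the MAILTO variable at the top of the crontab. The address below is only a placeholder, and a working local MTA is assumed:

MAILTO="sysadmin@mydomain.com"
30 13 * * * /root/scripts/filesystem_usage.sh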

Summary

You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don’t hesitate to add your own ideas or comments via the form below.

How to Manage System Logs (Configure, Rotate and Import Into Database) in RHEL 7 – Part 5

In order to keep your RHEL 7 systems secure, you need to know how to monitor all of the activities that take place on such systems by examining log files. Thus, you will be able to detect any unusual or potentially malicious activity and perform system troubleshooting or take another appropriate action.

Linux Rotate Log Files Using Rsyslog and Logrotate

RHCE Exam: Manage System Logs Using Rsyslogd and Logrotate – Part 5

In RHEL 7, the rsyslogd daemon is responsible for system logging and reads its configuration from /etc/rsyslog.conf (this file specifies the default location for all system logs) and from files inside /etc/rsyslog.d, if any.

Rsyslogd Configuration

A quick inspection of the rsyslog.conf will be helpful to start. This file is divided into 3 main sections: Modules (since rsyslog follows a modular design), Global directives (used to set global properties of the rsyslogd daemon), and Rules. As you will probably guess, this last section indicates what gets logged or shown (also known as the selector) and where, and will be our focus throughout this article.

A typical line in rsyslog.conf is as follows:

Rsyslogd Configuration

Rsyslogd Configuration

In the image above, we can see that a selector consists of one or more pairs Facility:Priority separated by semicolons, where Facility describes the type of message (refer to section 4.1.1 in RFC 3164 to see the complete list of facilities available for rsyslog) and Priority indicates its severity, which can be one of the following self-explanatory words:

  1. debug
  2. info
  3. notice
  4. warning
  5. err
  6. crit
  7. alert
  8. emerg

Though not a priority itself, the keyword none means to discard all messages of the given facility, regardless of priority.

Note that a given priority indicates that all messages of that priority and above should be logged. Thus, the line in the example above instructs the rsyslogd daemon to log all messages of priority info or higher (regardless of the facility), except those belonging to the mail, authpriv, and cron services (no messages coming from these facilities will be taken into account), to /var/log/messages.

You can also group multiple facilities using the comma sign to apply the same priority to all of them. Thus, the line:

*.info;mail.none;authpriv.none;cron.none                /var/log/messages

Could be rewritten as

*.info;mail,authpriv,cron.none                /var/log/messages

In other words, the facilities mail, authpriv, and cron are grouped and the keyword none is applied to the three of them.

Creating a custom log file

To log all daemon messages to /var/log/tecmint.log, we need to add the following line either in rsyslog.conf or in a separate file (easier to manage) inside /etc/rsyslog.d:

daemon.*    /var/log/tecmint.log

Let’s restart the daemon (note that the service name does not end with a d):

# systemctl restart rsyslog

And check the contents of our custom log before and after restarting two random daemons:

Linux Create Custom Log File

Create Custom Log File

As a self-study exercise, I would recommend you play around with the facilities and priorities and either log additional messages to existing log files or create new ones as in the previous example.
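
For instance, a simple rule for that exercise (the log file name below is an arbitrary example) could send all cron messages of priority warning and above to a dedicated file; remember to restart rsyslog afterwards:

cron.warning    /var/log/cron-warnings.log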

Rotating Logs using Logrotate

To prevent log files from growing endlessly, the logrotate utility is used to rotate, compress, remove, and optionally mail logs, thus easing the administration of systems that generate large numbers of log files.

Suggested Read: How to Setup and Manage Log Rotation Using Logrotate in Linux

Logrotate runs daily as a cron job (/etc/cron.daily/logrotate) and reads its configuration from /etc/logrotate.conf and from files located in /etc/logrotate.d, if any.

As with the case of rsyslog, even when you can include settings for specific services in the main file, creating separate configuration files for each one will help organize your settings better.

Let’s take a look at a typical logrotate.conf:

Logrotate Configuration

Logrotate Configuration

In the example above, logrotate will perform the following actions for /var/log/wtmp: attempt to rotate only once a month, but only if the file is at least 1 MB in size, then create a brand new log file with permissions set to 0664 and ownership given to user root and group utmp. Next, only keep one archived log, as specified by the rotate directive:

Logrotate Logs Monthly

Logrotate Logs Monthly
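
For reference, the stanza described above looks roughly like this in a default /etc/logrotate.conf (your copy may differ slightly):

/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}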

Let’s now consider another example as found in /etc/logrotate.d/httpd:

Rotate Apache Log Files

Rotate Apache Log Files
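
If the image is not visible, a typical /etc/logrotate.d/httpd in RHEL 7 looks similar to the following (again, your copy may differ slightly):

/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    delaycompress
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}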

You can read more about the settings for logrotate in its man pages (man logrotate and man logrotate.conf). Both files are provided along with this article in PDF format for your reading convenience.

As a system engineer, it will be pretty much up to you to decide for how long logs will be stored and in what format, depending on whether you have /var in a separate partition / logical volume. Otherwise, you really want to consider removing old logs to save storage space. On the other hand, you may be forced to keep several logs for future security auditing according to your company’s or client’s internal policies.

Saving Logs to a Database

Of course, examining logs (even with the help of tools such as grep and regular expressions) can become a rather tedious task. For that reason, rsyslog allows us to export them to a database (out-of-the-box supported RDBMS include MySQL, MariaDB, PostgreSQL, and Oracle).

This section of the tutorial assumes that you have already installed the MariaDB server and client in the same RHEL 7 box where the logs are being managed:

# yum update && yum install mariadb mariadb-server mariadb-client rsyslog-mysql
# systemctl enable mariadb && systemctl start mariadb

Then use the mysql_secure_installation utility to set the password for the root user and other security considerations:

Secure MySQL Database

Secure MySQL Database

Note: If you don’t want to use the MariaDB root user to insert log messages to the database, you can configure another user account to do so. Explaining how to do that is out of the scope of this tutorial but is explained in detail in MariaDB knowledge base. In this tutorial we will use the root account for simplicity.

Next, download the createDB.sql script from GitHub and import it into your database server:

# mysql -u root -p < createDB.sql

Save Server Logs to Database

Save Server Logs to Database

Finally, add the following lines to /etc/rsyslog.conf:

$ModLoad ommysql
$ActionOmmysqlServerPort 3306
*.* :ommysql:localhost,Syslog,root,YourPasswordHere

Restart rsyslog and the database server:

# systemctl restart rsyslog 
# systemctl restart mariadb

Querying the Logs using SQL syntax

Now perform some tasks that will modify the logs (like stopping and starting services, for example), then log in to your DB server and use standard SQL commands to display and search the logs:

USE Syslog;
SELECT ReceivedAt, Message FROM SystemEvents;

Query Logs in Database

Query Logs in Database
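
You can also narrow down the results with a WHERE clause; for example, to list only the entries whose message contains the word "error" (the table and column names come from the createDB.sql schema):

SELECT ReceivedAt, Message FROM SystemEvents WHERE Message LIKE '%error%';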

Summary

In this article we have explained how to set up system logging, how to rotate logs, and how to redirect the messages to a database for easier search. We hope that these skills will be helpful as you prepare for the RHCE exam and in your daily responsibilities as well.

As always, your feedback is more than welcome. Feel free to use the form below to reach us.

Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows Clients – Part 6

Since computers seldom work as isolated systems, it is to be expected that as a system administrator or engineer, you know how to set up and maintain a network with multiple types of servers.

In this article and in the next of this series we will go through the essentials of setting up Samba and NFS servers with Windows/Linux and Linux clients, respectively.

Setup Samba File Sharing on Linux

RHCE: Setup Samba File Sharing – Part 6

This article will definitely come in handy if you’re called upon to set up file servers in corporate or enterprise environments where you are likely to find different operating systems and types of devices.

Since you can read about the background and the technical aspects of both Samba and NFS all over the Internet, in this article and the next we will cut right to the chase with the topic at hand.

Step 1: Installing Samba Server

Our current testing environment consists of two RHEL 7 boxes and one Windows 8 machine, in that order:

1. Samba / NFS server [box1 (RHEL 7): 192.168.0.18], 
2. Samba client #1 [box2 (RHEL 7): 192.168.0.20]
3. Samba client #2 [Windows 8 machine: 192.168.0.106]

Testing Setup for Samba

Testing Setup for Samba

On box1, install the following packages:

# yum update && yum install samba samba-client samba-common

On box2:

# yum update && yum install samba samba-client samba-common cifs-utils

Once the installation is complete, we’re ready to configure our share.

Step 2: Setting Up File Sharing Through Samba

One of the reasons why Samba is so relevant is that it provides file and print services to SMB/CIFS clients, which causes those clients to see the server as if it were a Windows system (I must admit I tend to get a little emotional while writing about this topic as it was my first setup as a new Linux system administrator some years ago).

Adding system users and setting up permissions and ownership

To allow for group collaboration, we will create a group named finance with two users (user1 and user2) with useradd command and a directory /finance in box1.

We will also change the group owner of this directory to finance and set its permissions to 0770 (read, write, and execute permissions for the owner and the group owner):

# groupadd finance
# useradd user1
# useradd user2
# usermod -a -G finance user1
# usermod -a -G finance user2
# mkdir /finance
# chmod 0770 /finance
# chgrp finance /finance

Step 3:​ Configuring SELinux and Firewalld

In preparation to configure /finance as a Samba share, we will need to either disable SELinux or set the proper boolean and security context values as follows (otherwise, SELinux will prevent clients from accessing the share):

# setsebool -P samba_export_all_ro=1 samba_export_all_rw=1
# getsebool -a | grep samba_export
# semanage fcontext -a -t samba_share_t "/finance(/.*)?"
# restorecon /finance

In addition, we must ensure that Samba traffic is allowed by the firewalld.

# firewall-cmd --permanent --add-service=samba
# firewall-cmd --reload

Step 4: Configure Samba Share

Now it’s time to dive into the configuration file /etc/samba/smb.conf and add the section for our share: we want the members of the finance group to be able to browse the contents of /finance, and save / create files or subdirectories in it (which by default will have their permission bits set to 0770 and finance will be their group owner):

smb.conf
[finance]
comment=Directory for collaboration of the company's finance team
browsable=yes
path=/finance
public=no
valid users=@finance
write list=@finance
writeable=yes
create mask=0770
force create mode=0770
force group=finance

Save the file and then test it with the testparm utility. If there are any errors, the output of the following command will indicate what you need to fix. Otherwise, it will display a review of your Samba server configuration:

Test Samba Configuration

Test Samba Configuration

Should you want to add another share that is open to the public (meaning without any authentication whatsoever), create another section in /etc/samba/smb.conf and under the new share’s name copy the section above, only changing public=no to public=yes and not including the valid users and write list directives.
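
A minimal sketch of such a public section (the share name and path below are arbitrary examples) could look like this; remember to run testparm again after adding it:

[public]
comment=Public directory open to everyone
browsable=yes
path=/public
public=yes
writeable=yes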

Step 5: Adding Samba Users

Next, you will need to add user1 and user2 as Samba users. To do so, you will use the smbpasswd command, which interacts with Samba’s internal database. You will be prompted to enter a password that you will later use to connect to the share:

# smbpasswd -a user1
# smbpasswd -a user2

Finally, restart Samba, enable the service to start on boot, and make sure the share is actually available to network clients:

# systemctl start smb
# systemctl enable smb
# smbclient -L localhost -U user1
# smbclient -L localhost -U user2

Verify Samba Share

Verify Samba Share

At this point, the Samba file server has been properly installed and configured. Now it’s time to test this setup on our RHEL 7 and Windows 8 clients.

Step 6:​ Mounting the Samba Share in Linux

First, make sure the Samba share is accessible from this client:

# smbclient -L 192.168.0.18 -U user2

Mount Samba Share on Linux

Mount Samba Share on Linux

(repeat the above command for user1)

As with any other storage media, you can mount (and later unmount) this network share when needed:

# mount //192.168.0.18/finance /media/samba -o username=user1

Mount Samba Network Share

Mount Samba Network Share

(where /media/samba is an existing directory)

or permanently, by adding the following entry in /etc/fstab file:

fstab
//192.168.0.18/finance /media/samba cifs credentials=/media/samba/.smbcredentials,defaults 0 0

Where the hidden file /media/samba/.smbcredentials (whose permissions and ownership have been set to 600 and root:root, respectively) contains two lines that indicate the username and password of an account that is allowed to use the share:

.smbcredentials
username=user1
password=PasswordForUser1

Finally, let’s create a file inside /finance and check the permissions and ownership:

# touch /media/samba/FileCreatedInRHELClient.txt

Create File in Samba Share

Create File in Samba Share

As you can see, the file was created with 0770 permissions and ownership set to user1:finance.

Step 7: Mounting the Samba Share in Windows

To mount the Samba share in Windows, go to My PC and choose Computer, then Map network drive. Next, assign a letter for the drive to be mapped and check Connect using different credentials (the screenshots below are in Spanish, my native language):

Mount Samba Share in Windows

Mount Samba Share in Windows

Finally, let’s create a file and check the permissions and ownership:

Create Files on Windows Samba Share

Create Files on Windows Samba Share

# ls -l /finance

This time the file belongs to user2 since that’s the account we used to connect from the Windows client.

Summary

In this article we have explained not only how to set up a Samba server and two clients using different operating systems, but also how to configure the firewalld and SELinux on the server to allow the desired group collaboration capabilities.

Last, but not least, let me recommend reading the online man page of smb.conf to explore other configuration directives that may be more suitable for your case than the scenario described in this article.

As always, feel free to drop a comment using the form below if you have any comments or suggestions.

Setting Up NFS Server with Kerberos-based Authentication for Linux Clients – Part 7

In the last article of this series, we reviewed how to set up a Samba share over a network that may consist of multiple types of operating systems. Now, if you need to set up file sharing for a group of Unix-like clients you will automatically think of the Network File System, or NFS for short.

Setting Up NFS Server with Kerberos Authentication

RHCE Series: Setting Up NFS Server with Kerberos Authentication – Part 7

In this article we will walk you through the process of using Kerberos-based authentication for NFS shares. It is assumed that you have already set up an NFS server and a client. If not, please refer to our guide on how to install and configure an NFS server – it lists the necessary packages that need to be installed and explains how to perform the initial configurations on the server before proceeding further.

In addition, you will want to configure both SELinux and firewalld to allow for file sharing through NFS.

The following example assumes that your NFS share is located in /nfs in box2:

# semanage fcontext -a -t public_content_rw_t "/nfs(/.*)?"
# restorecon -R /nfs
# setsebool -P nfs_export_all_rw on
# setsebool -P nfs_export_all_ro on

(where the -P flag indicates persistence across reboots).

Finally, don’t forget to:

Create NFS Group and Configure NFS Share Directory

1. Create a group called nfs and add the nfsnobody user to it, then change the permissions of the /nfs directory to 0770 and its group owner to nfs. Thus, nfsnobody (which is mapped to the client requests) will have write permissions on the share, and you won’t need to use no_root_squash in the /etc/exports file.

# groupadd nfs
# usermod -a -G nfs nfsnobody
# chmod 0770 /nfs
# chgrp nfs /nfs

2. Modify the exports file (/etc/exports) as follows to only allow access from box1 using Kerberos security (sec=krb5).

Note that the value of anongid has been set to the GID of the nfs group that we created previously:

exports – Add NFS Share
/nfs box1(rw,sec=krb5,anongid=1004)

3. Re-export (-r) all (-a) the NFS shares. Adding verbosity to the output (-v) is a good idea since it will provide helpful information to troubleshoot the server if something goes wrong:

# exportfs -arv

4. Restart and enable the NFS server and related services. Note that you don’t have to enable nfs-lock and nfs-idmapd because they will be automatically started by the other services on boot:

# systemctl restart rpcbind nfs-server nfs-lock nfs-idmap
# systemctl enable rpcbind nfs-server

Testing Environment and Other Prerequisites

In this guide we will use the following test environment:

  1. Client machine [box1: 192.168.0.18]
  2. NFS / Kerberos server [box2: 192.168.0.20] (also known as Key Distribution Center, or KDC for short).

Note that the Kerberos service is crucial to the authentication scheme.

As you can see, the NFS server and the KDC are hosted in the same machine for simplicity, although you can set them up in separate machines if you have more available. Both machines are members of the mydomain.com domain.

Last but not least, Kerberos requires at least a basic schema of name resolution and the Network Time Protocol service to be present in both client and server, since the security of Kerberos authentication is based in part upon the timestamps of tickets.

To set up name resolution, we will use the /etc/hosts file in both client and server:

host file – Add DNS for Domain
192.168.0.18    box1.mydomain.com    box1
192.168.0.20    box2.mydomain.com    box2

In RHEL 7, chrony is the default software that is used for NTP synchronization:

# yum install chrony
# systemctl start chronyd
# systemctl enable chronyd

To make sure chrony is actually synchronizing your system’s time with time servers you may want to issue the following command two or three times and make sure the offset is getting nearer to zero:

# chronyc tracking

Synchronize Server Time with Chrony

Synchronize Server Time with Chrony

Installing and Configuring Kerberos

To set up the KDC, install the following packages on both server and client (omit the server package in the client):

# yum update && yum install krb5-server krb5-workstation pam_krb5

Once it is installed, edit the configuration files (/etc/krb5.conf and /var/kerberos/krb5kdc/kadm5.acl) and replace all instances of example.com (lowercase and uppercase) with mydomain.com as follows.
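
If you prefer to make the replacement from the command line rather than a text editor, a sed one-liner along these lines should do the trick (adjust the domain and file list to your setup):

# sed -i 's/example\.com/mydomain.com/g; s/EXAMPLE\.COM/MYDOMAIN.COM/g' /etc/krb5.conf /var/kerberos/krb5kdc/kadm5.acl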

Now create the Kerberos database (please note that this may take a while as it requires some level of entropy in your system. To speed things up, I opened another terminal and ran ping -f localhost for 30-45 seconds):

# kdb5_util create -s

Create Kerberos Database

Create Kerberos Database

Next, enable Kerberos through the firewall and start / enable the related services.

Important: nfs-secure must be started and enabled on the client as well:

# firewall-cmd --permanent --add-service=kerberos
# systemctl start krb5kdc kadmin nfs-secure   
# systemctl enable krb5kdc kadmin nfs-secure       

Next, using the kadmin.local tool, create an admin principal for root:

# kadmin.local
# addprinc root/admin

And add the Kerberos server to the database:

# addprinc -randkey host/box2.mydomain.com

Do the same with the NFS service principals for both client (box1) and server (box2). Please note that in the screenshot below I forgot to do it for box1 before quitting:

# addprinc -randkey nfs/box2.mydomain.com
# addprinc -randkey nfs/box1.mydomain.com

And exit by typing quit and pressing Enter:

Add Kerberos to NFS Server

Add Kerberos to NFS Server

Then obtain and cache a Kerberos ticket-granting ticket for root/admin:

# kinit root/admin
# klist

Cache Kerberos

Cache Kerberos

The last step before actually using Kerberos is to store in a keytab file (on the server) the principals that are authorized to use Kerberos authentication:

# kadmin.local
# ktadd host/box2.mydomain.com
# ktadd nfs/box2.mydomain.com
# ktadd nfs/box1.mydomain.com

Finally, mount the share and perform a write test:

# mount -t nfs4 -o sec=krb5 box2:/nfs /mnt
# echo "Hello from Tecmint.com" > /mnt/greeting.txt

Mount NFS Share

Mount NFS Share

Let’s now unmount the share, rename the keytab file in the client (to simulate that it is not present) and try to mount the share again:

# umount /mnt
# mv /etc/krb5.keytab /etc/krb5.keytab.orig

Mount Unmount Kerberos NFS Share

Mount Unmount Kerberos NFS Share

Now you can use the NFS share with Kerberos-based authentication.

Summary

In this article we have explained how to set up NFS with Kerberos authentication. Since there is much more to the topic than we can cover in a single guide, feel free to check the online Kerberos documentation and since Kerberos is a bit tricky to say the least, don’t hesitate to drop us a note using the form below if you run into any issue or need help with your testing or implementation.

RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache – Part 8

If you are a system administrator who is in charge of maintaining and securing a web server, you can’t afford to not devote your very best efforts to ensure that data served by or going through your server is protected at all times.

Setup Apache HTTPS Using SSL/TLS

RHCE Series: Implementing HTTPS through TLS using Network Security Service (NSS) for Apache – Part 8

In order to provide more secure communications between web clients and servers, the HTTPS protocol was born as a combination of HTTP and SSL (Secure Sockets Layer) or more recently, TLS (Transport Layer Security).

Due to some serious security breaches, SSL has been deprecated in favor of the more robust TLS. For that reason, in this article we will explain how to secure connections between your web server and clients using TLS.

This tutorial assumes that you have already installed and configured your Apache web server. If not, please refer to the following article on this site before proceeding further.

  1. Install LAMP (Linux, MySQL/MariaDB, Apache and PHP) on RHEL/CentOS 7

Installation of OpenSSL and Utilities

First off, make sure that Apache is running and that both http and https are allowed through the firewall:

# systemctl start httpd
# systemctl enable httpd
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https

Then install the necessary packages:

# yum update && yum install openssl mod_nss crypto-utils

Important: Please note that you can replace mod_nss with mod_ssl in the command above if you want to use OpenSSL libraries instead of NSS (Network Security Service) to implement TLS (which one to use is left entirely up to you, but we will use NSS in this article as it is more robust; for example, it supports recent cryptography standards such as PKCS #11).

Finally, uninstall mod_ssl if you chose to use mod_nss, or vice versa.

# yum remove mod_ssl

Configuring NSS (Network Security Service)

After mod_nss is installed, its default configuration file is created as /etc/httpd/conf.d/nss.conf. You should then make sure that all of the Listen and VirtualHost directives point to port 443 (default port for HTTPS):

nss.conf – Configuration File
Listen 443
VirtualHost _default_:443

Then restart Apache and check whether the mod_nss module has been loaded:

# apachectl restart
# httpd -M | grep nss

Check Mod_NSS Module in Apache

Check Mod_NSS Module Loaded in Apache

Next, the following edits should be made in /etc/httpd/conf.d/nss.conf configuration file:

1. Indicate NSS database directory. You can use the default directory or create a new one. In this tutorial we will use the default:

NSSCertificateDatabase /etc/httpd/alias

2. Avoid manual passphrase entry on each system start by saving the password to the database directory in /etc/httpd/nss-db-password.conf:

NSSPassPhraseDialog file:/etc/httpd/nss-db-password.conf

Where /etc/httpd/nss-db-password.conf contains ONLY the following line and mypassword is the password that you will set later for the NSS database:

internal:mypassword

In addition, its permissions and ownership should be set to 0640 and root:apache, respectively:

# chmod 640 /etc/httpd/nss-db-password.conf
# chgrp apache /etc/httpd/nss-db-password.conf

3. Red Hat recommends disabling SSL and all versions of TLS previous to TLSv1.0 due to the POODLE SSLv3 vulnerability (more information here).

Make sure that every instance of the NSSProtocol directive reads as follows (you are likely to find only one if you are not hosting other virtual hosts):

NSSProtocol TLSv1.0,TLSv1.1

4. Since this is a self-signed certificate, Apache will not recognize the issuer as valid and will refuse to start. For this reason, in this particular case you will have to add:

NSSEnforceValidCerts off

5. Though not strictly required, it is important to set a password for the NSS database:

# certutil -W -d /etc/httpd/alias

Set Password for NSS Database

Set Password for NSS Database

Creating an Apache SSL Self-Signed Certificate

Next, we will create a self-signed certificate that will identify the server to our clients (please note that this method is not the best option for production environments; for such use you may want to consider buying a certificate signed by a trusted 3rd-party certificate authority, such as DigiCert).

To create a new NSS-compliant certificate for box1 that will be valid for 365 days, we will use the genkey command:

# genkey --nss --days 365 box1

Choose Next:

Create Apache SSL Key

Create Apache SSL Key

You can leave the default choice for the key size (2048), then choose Next again:

Select Apache SSL Key Size

Select Apache SSL Key Size

Wait while the system generates random bits:

Generating Random Key Bits

Generating Random Key Bits

To speed up the process, you will be prompted to enter random text in your console, as shown in the following screencast. Please note how the progress bar stops when no input from the keyboard is received. Then, you will be asked to:

1. Decide whether to send the Certificate Sign Request (CSR) to a Certificate Authority (CA): choose No, as this is a self-signed certificate.

2. Enter the information for the certificate.

Finally, you will be prompted to enter the password to the NSS certificate that you set earlier:

# genkey --nss --days 365 box1

Apache NSS Certificate Password

Apache NSS Certificate Password

At anytime, you can list the existing certificates with:

# certutil -L -d /etc/httpd/alias

List Apache NSS Certificates

List Apache NSS Certificates

And delete them by name (only if strictly required, replacing box1 by your own certificate name) with:

# certutil -d /etc/httpd/alias -D -n "box1"


Testing Apache SSL HTTPS Connections

Finally, it’s time to test the secure connection to our web server. When you point your browser to https://<web server IP or hostname>, you will get the well-known message “This connection is untrusted“:

Check Apache SSL Connection

Check Apache SSL Connection

In the above situation, you can click on Add Exception and then Confirm Security Exception – but don’t do it yet. Let’s first examine the certificate to see if its details match the information that we entered earlier (as shown in the screencast).

To do so, click on View… –> Details tab above and you should see this when you select Issuer from the list:

Confirm Apache SSL Certificate Details

Confirm Apache SSL Certificate Details

Now you can go ahead, confirm the exception (either for this time or permanently) and you will be taken to your web server’s DocumentRoot directory via https, where you can inspect the connection details using your browser’s builtin developer tools:

In Firefox you can launch them by right-clicking on the page and choosing Inspect Element from the context menu; the relevant information is found in the Network tab:

Inspect Apache HTTPS Connection

Inspect Apache HTTPS Connection

Please note that this is the same information as displayed before, which was entered previously during the certificate creation. There’s also a way to test the connection using command line tools:

On the left (testing SSLv3):

# openssl s_client -connect localhost:443 -ssl3

On the right (testing TLS):

# openssl s_client -connect localhost:443 -tls1

Testing Apache SSL and TLS Connections

Testing Apache SSL and TLS Connections

Refer to the screenshot above for more details.

Summary

As I’m sure you already know, the presence of HTTPS inspires trust in visitors who may have to enter personal information in your site (from user names and passwords all the way to financial / bank account information).

In that case, you will want to get a certificate signed by a trusted Certificate Authority as we explained earlier (the steps to set it up are identical with the exception that you will need to send the CSR to a CA, and you will get the signed certificate back); otherwise, a self-signed certificate as the one used in this tutorial will do.

For more details on the use of NSS, please refer to the online help about mod-nss. And don’t hesitate to let us know if you have any questions or comments.

How to Setup Postfix Mail Server (SMTP) using null-client Configuration – Part 9

Regardless of the many online communication methods that are available today, email remains a practical way to deliver messages from one end of the world to another, or to a person sitting in the office next to ours.

The following image illustrates the process of email transport starting with the sender until the message reaches the recipient’s inbox:

How Mail Setup Works

How Mail Setup Works

To make this possible, several things happen behind the scenes. In order for an email message to be delivered from a client application (such as Thunderbird, Outlook, or webmail services such as Gmail or Yahoo! Mail) to a mail server, and from there to the destination server and finally to its intended recipient, an SMTP (Simple Mail Transfer Protocol) service must be in place on each server.

That is the reason why in this article we will explain how to set up an SMTP server in RHEL 7, where emails sent by local users (even to other local users) are forwarded to a central mail server for easier access.

In the exam’s requirements this is called a null-client setup.

Our test environment will consist of an originating mail server and a central mail server or relayhost.

Originating Mail Server: (hostname: box1.mydomain.com / IP: 192.168.0.18) 
Central Mail Server: (hostname: mail.mydomain.com / IP: 192.168.0.20)

For name resolution we will use the well-known /etc/hosts file on both boxes:

192.168.0.18    box1.mydomain.com       box1
192.168.0.20    mail.mydomain.com       mail

Installing Postfix and Firewall / SELinux Considerations

To begin, we will need to (in both servers):

1. Install Postfix:

# yum update && yum install postfix

2. Start the service and enable it to run on future reboots:

# systemctl start postfix
# systemctl enable postfix

3. Allow mail traffic through the firewall:

# firewall-cmd --permanent --add-service=smtp
# firewall-cmd --add-service=smtp

Open Mail Server Port in Firewall

Open Mail Server SMTP Port in Firewall

4. Configure Postfix on box1.mydomain.com.

Postfix’s main configuration file is located in /etc/postfix/main.cf. This file itself is a great documentation source as the included comments explain the purpose of the program’s settings.

For brevity, let’s display only the lines that need to be edited (yes, you need to leave mydestination blank in the originating server; otherwise the emails will be stored locally instead of on the central mail server, which is what we actually want):

Configure Postfix on box1.mydomain.com
myhostname = box1.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
inet_interfaces = loopback-only
mydestination =
relayhost = 192.168.0.20

5. Configure Postfix on mail.mydomain.com.

Configure Postfix on mail.mydomain.com
myhostname = mail.mydomain.com
mydomain = mydomain.com
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 192.168.0.0/24, 127.0.0.0/8

And set the related SELinux boolean to true permanently if not already done:

# setsebool -P allow_postfix_local_write_mail_spool on

Set Postfix SELinux Permission

Set Postfix SELinux Permission

The above SELinux boolean will allow Postfix to write to the mail spool in the central server.

6. Restart the service on both servers for the changes to take effect:

# systemctl restart postfix

If Postfix does not start correctly, you can use the following commands to troubleshoot:

# systemctl -l status postfix
# journalctl -xn
# postconf -n

Testing the Postfix Mail Servers

To test the mail servers, you can use any Mail User Agent (most commonly known as MUA for short) such as mail or mutt.

Since mutt is a personal favorite, I will use it in box1 to send an email to user tecmint using an existing file (mailbody.txt) as message body:

# mutt -s "Part 9-RHCE series" tecmint@mydomain.com < mailbody.txt

Test Postfix Mail Server

Test Postfix Mail Server

Now go to the central mail server (mail.mydomain.com), log on as user tecmint, and check whether the email was received:

# su - tecmint
# mail

Check Postfix Mail Server Delivery

Check Postfix Mail Server Delivery

If the email was not received, check root’s mail spool for a warning or error notification. You may also want to make sure that the SMTP service is running on both servers and that port 25 is open in the central mail server using nmap command:

# nmap -PN 192.168.0.20

Troubleshoot Postfix Mail Server

Troubleshoot Postfix Mail Server

Summary

Setting up a mail server and a relay host as shown in this article is an essential skill that every system administrator must have, and represents the foundation to understand and install a more complex scenario such as a mail server hosting a live domain for several (even hundreds or thousands) of email accounts.

Please note that this kind of setup requires a DNS server, which is out of the scope of this guide, but you can use the following article to set up a DNS server:

  1. Setup Cache only DNS Server in CentOS/RHEL 7

Finally, I highly recommend you become familiar with Postfix’s configuration file (main.cf) and the program’s man page. If in doubt, don’t hesitate to drop us a line using the form below or using our forum, Linuxsay.com, where you will get almost immediate help from Linux experts from all around the world.

Install and Configure Caching-Only DNS Server in RHEL/CentOS 7 – Part 10

DNS servers come in several types, such as master, slave, forwarding and cache, to name a few, with the cache-only DNS server being the easiest to set up. Since DNS uses the UDP protocol, query time is improved because UDP does not require an acknowledgement.

Setup Cache-Only DNS in RHEL and CentOS 7

RHCE Series: Setup Cache-Only DNS in RHEL and CentOS 7 – Part 10

The cache-only DNS server is also known as a resolver; it queries DNS records, fetches all the DNS details from other servers, and keeps each query in its cache for later use, so that when we perform the same request in the future, it is served from the cache, thus reducing the response time even more.

If you’re looking to setup DNS Caching-Only Server in CentOS/RHEL 6, follow this guide here:

Setting Up Caching-Only DNS Name Server in CentOS/RHEL 6

My Testing Environment

DNS server		:	dns.tecmintlocal.com (Red Hat Enterprise Linux 7.1)
Server IP Address	:	192.168.0.18
Client			:	node1.tecmintlocal.com (CentOS 7.1)
Client IP Address	:	192.168.0.29

Step 1: Installing Cache-Only DNS Server in RHEL/CentOS 7

1. The Cache-Only DNS server can be installed via the bind package. If you don’t remember the package name, you can do a quick search for it using the command below.

# yum search bind

Search DNS Bind Package

Search DNS Bind Package

2. In the above result, you will see several packages. From those, we need to choose and install only the bind and bind-utils packages using the following yum command.

# yum install bind bind-utils -y

Install DNS Bind in RHEL/CentOS 7

Install DNS Bind in RHEL/CentOS 7

Step 2: Configure Cache-Only DNS in RHEL/CentOS 7

3. Once the DNS packages are installed we can go ahead and configure DNS. Open and edit /etc/named.conf using your preferred text editor. Make the changes suggested below (or adjust the settings as per your requirements).

listen-on port 53 { 127.0.0.1; any; };
allow-query     { localhost; any; };
allow-query-cache       { localhost; any; };

Configure Cache-Only DNS in CentOS and RHEL 7

Configure Cache-Only DNS in CentOS and RHEL 7

These directives instruct the DNS server to listen on UDP port 53, and to allow queries and cached responses from localhost and any other machine that reaches the server.

4. It is important to note that the ownership of this file must be set to root:named and also if SELinux is enabled, after editing the configuration file we need to make sure that its context is set to named_conf_t as shown in Fig. 4 (same thing for the auxiliary file /etc/named.rfc1912.zones):

# ls -lZ /etc/named.conf
# ls -lZ /etc/named.rfc1912.zones

Otherwise, configure the SELinux context before proceeding:

# semanage fcontext -a -t named_conf_t /etc/named.conf
# semanage fcontext -a -t named_conf_t /etc/named.rfc1912.zones
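
Keep in mind that semanage only updates the policy; to apply the context to the files themselves, run restorecon on them as well:

# restorecon -v /etc/named.conf /etc/named.rfc1912.zones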

5. Additionally, we need to test the DNS configuration for syntax errors before starting the bind service:

# named-checkconf /etc/named.conf

6. Once the syntax verification passes, restart the named service for the new changes to take effect, enable it to start automatically across system boots, and then check its status:

# systemctl restart named
# systemctl enable named
# systemctl status named

Configure and Start DNS Named Service

Configure and Start DNS Named Service

7. Next, open port 53 on the firewall.

# firewall-cmd --add-port=53/udp
# firewall-cmd --add-port=53/udp --permanent

Open DNS Port 53 on Firewall

Open DNS Port 53 on Firewall

Step 3: Chroot Cache-Only DNS Server in RHEL and CentOS 7

8. If you wish to deploy the cache-only DNS server within a chroot environment, you need to install the bind-chroot package; no further configuration is needed, since by default it hard-links the configuration into the chroot environment.

# yum install bind-chroot -y

Once the bind-chroot package has been installed, restart named for the new changes to take effect:

# systemctl restart named

9. Next, create a symbolic link (also named /etc/named.conf) inside /var/named/chroot/etc/:

# ln -s /etc/named.conf /var/named/chroot/etc/named.conf

Step 4: Configure DNS on Client Machine

10. Add the DNS cache server’s IP 192.168.0.18 as the resolver on the client machine. Edit /etc/sysconfig/network-scripts/ifcfg-enp0s3 as shown in the following figure:

DNS=192.168.0.18

Configure DNS on Client Machine

Configure DNS on Client Machine

And /etc/resolv.conf as follows:

nameserver 192.168.0.18

11. Finally, it’s time to check our cache server. To do this, you can use the dig utility or the nslookup command.

Choose any website and query it twice (we will use facebook.com as an example). Note that with dig the second time the query is completed much faster because it is being served from the cache.

# dig facebook.com

Check Cache only DNS Queries

Check Cache only DNS Queries

You can also use nslookup to verify that the DNS server is working as expected.

# nslookup facebook.com

Checking DNS Query with nslookup

Checking DNS Query with nslookup

Summary

In this article we have explained how to set up a DNS Cache-only server in Red Hat Enterprise Linux 7 and CentOS 7, and tested it in a client machine. Feel free to let us know if you have any questions or suggestions using the form below.

How to Setup and Configure Network Bonding or Teaming in RHEL/CentOS 7 – Part 11

When a system administrator wants to increase the bandwidth available and provide redundancy and load balancing for data transfers, a kernel feature known as network bonding allows you to get the job done in a cost-effective way.

Read more about how to increase or throttle bandwidth in Linux.

In simple words, bonding means aggregating two or more physical network interfaces (called slaves) into a single, logical one (called master). If a specific NIC (Network Interface Card) experiences a problem, communications are not affected significantly as long as the other(s) remain active.

Read more about network bonding in Linux systems here:

  1. Network Teaming or NIC Bonding in RHEL/CentOS 6/5
  2. Network NIC Bonding or Teaming on Debian based Systems
  3. How to Configure Network Bonding or Teaming in Ubuntu

Enabling and Configuring Network Bonding or Teaming

By default, the bonding kernel module is not enabled. Thus, we will need to load it and ensure it is persistent across boots. When used with the --first-time option, modprobe will alert us if loading the module fails:

# modprobe --first-time bonding

The above command will load the bonding module for the current session. In order to ensure persistency, create a .conf file inside /etc/modules-load.d with a descriptive name, such as /etc/modules-load.d/bonding.conf:

# echo "# Load the bonding kernel module at boot" > /etc/modules-load.d/bonding.conf
# echo "bonding" >> /etc/modules-load.d/bonding.conf

Now reboot your server and once it restarts, make sure the bonding module is loaded automatically, as seen in Fig. 1:

Check Network Bonding Module Loaded in Kernel

Check Network Bonding Module Loaded in Kernel
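
A quick way to confirm this from the command line is:

# lsmod | grep bonding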

In this article we will use 3 interfaces (enp0s3, enp0s8, and enp0s9) to create a bond, named conveniently bond0.

To create bond0, we will use nmtui, the text interface for controlling NetworkManager. When invoked without arguments from the command line, nmtui brings up a text interface that allows you to edit an existing connection, activate a connection, or set the system hostname.

Choose Edit connection –> Add –> Bond as illustrated in Fig. 2:

Create Network Bonding Channel

Create Network Bonding Channel

In the Edit Connection screen, add the slave interfaces (enp0s3, enp0s8, and enp0s9 in our case) and give them a descriptive (Profile) name (for example, NIC #1, NIC #2, and NIC #3, respectively).

In addition, you will need to set a name and device for the bond (TecmintBond and bond0 in Fig. 3, respectively) and an IP address for bond0, enter a gateway address, and the IPs of DNS servers.

Note that you do not need to enter the MAC address of each interface since nmtui will do that for you. You can leave all other settings as default. See Fig. 3 for more details.

Network Bonding Teaming Configuration

Network Bonding Teaming Configuration

When you’re done, go to the bottom of the screen and choose OK (see Fig. 4):

Configuration of bond0

Configuration of bond0

And you’re done. Now you can exit the text interface and return to the command line, where you will enable the newly created interface using ip command:

# ip link set dev bond0 up

After that, you can see that bond0 is UP and is assigned 192.168.0.200, as seen in Fig. 5:

# ip addr show bond0

Check Network Bond Interface Status

Check Network Bond Interface Status

Testing Network Bonding or Teaming in Linux

To verify that bond0 actually works, you can either ping its IP address from another machine, or what’s even better, watch the kernel interface table in real time (well, the refresh time in seconds is given by the -n option) to see how network traffic is distributed between the three network interfaces, as shown in Fig. 6.

The -d option is used to highlight changes when they occur:

# watch -d -n1 netstat -i

Check Kernel Interface Table

Check Kernel Interface Table

It is important to note that there are several bonding modes, each with its distinguishing characteristics. They are documented in section 4.5 of the Red Hat Enterprise Linux 7 Network Administration guide. Depending on your needs, you will choose one or the other.

In our current setup, we chose the Round-robin mode (see Fig. 3), which ensures packets are transmitted beginning with the first slave in sequential order, ending with the last slave, and starting with the first again.

The Round-robin alternative is also called mode 0, and provides load balancing and fault tolerance. To change the bonding mode, you can use nmtui as explained before (see also Fig. 7):

Changing Bonding Mode Using nmtui

Changing Bonding Mode Using nmtui

If we change it to Active Backup, we will be prompted to choose a slave that will be the only active interface at any given time. If that card fails, one of the remaining slaves will take its place and become active.

Let’s choose enp0s3 to be the primary slave, bring bond0 down and up again, restart the network, and display the kernel interface table (see Fig. 8).

Note how data transfers (TX-OK and RX-OK) are now being made over enp0s3 only:

# ip link set dev bond0 down
# ip link set dev bond0 up
# systemctl restart network

Bond Acting in Active Backup Mode

Bond Acting in Active Backup Mode

Alternatively, you can view the bond as the kernel sees it (see Fig. 9):

# cat /proc/net/bonding/bond0

Check Network Bond as Kernel

Check Network Bond as Kernel

Summary

In this chapter we have discussed how to set up and configure bonding in Red Hat Enterprise Linux 7 (also works on CentOS 7 and Fedora 22+) in order to increase bandwidth along with load balancing and redundancy for data transfers.

As you take the time to explore other bonding modes, you will come to master the concepts and practice related with this topic of the certification.

If you have questions about this article, or suggestions to share with the rest of the community, feel free to let us know using the comment form below.

Create Centralized Secure Storage using iSCSI Target / Initiator on RHEL/CentOS 7 – Part 12

iSCSI is a block-level protocol for managing storage devices over TCP/IP networks, especially over long distances. An iSCSI target is a remote hard disk presented by a remote iSCSI server (or target). On the other hand, the iSCSI client is called the initiator, and it accesses the storage that is shared by the target machine.

The following machines have been used in this article:

Server (Target):

Operating System – Red Hat Enterprise Linux 7
iSCSI Target IP – 192.168.0.29
Ports Used : TCP 860, 3260

Client (Initiator):

Operating System – Red Hat Enterprise Linux 7
iSCSI Initiator IP – 192.168.0.30
Ports Used : TCP 3260

Step 1: Installing Packages on iSCSI Target

To install the packages needed for the target (we will deal with the client later), do:

# yum install targetcli -y

When the installation completes, we will start and enable the service as follows:

# systemctl start target
# systemctl enable target

Finally, we need to allow the service in firewalld:

# firewall-cmd --add-service=iscsi-target
# firewall-cmd --add-service=iscsi-target --permanent

And last but not least, we must not forget to allow the iSCSI target discovery:

# firewall-cmd --add-port=860/tcp
# firewall-cmd --add-port=860/tcp --permanent
# firewall-cmd --reload

Step 2: Defining LUNs in Target Server

Before proceeding to define LUNs in the Target, we need to create two logical volumes as explained in Part 6 of the RHCSA series (“Configuring system storage”).

This time we will name them vol_projects and vol_backups and place them inside a volume group called vg00, as shown in Fig. 1. Feel free to choose the space allocated to each LV:

Two Logical Volumes Named vol_projects and vol_backups

Fig 1: Two Logical Volumes Named vol_projects and vol_backups
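
As a quick reminder, and assuming the vg00 volume group already exists, the two logical volumes could be created with something like the following (the 10 GB size is only an example):

# lvcreate -L 10G -n vol_projects vg00
# lvcreate -L 10G -n vol_backups vg00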

After creating the LVs, we are ready to define the LUNs in the Target in order to make them available for the client machine.

As shown in Fig. 2, we will open a targetcli shell and issue the following commands, which will create two block backstores (local storage resources that represent the LUN the initiator will actually use) and an iSCSI Qualified Name (IQN), a method of addressing the target server.

Please refer to Page 32 of RFC 3720 for more details on the structure of the IQN. In particular, the text after the colon character (:tgt1) specifies the name of the target, while the text before (server:) indicates the hostname of the target inside the domain.

# targetcli
# cd backstores
# cd block
# create server.backups /dev/vg00/vol_backups
# create server.projects /dev/vg00/vol_projects
# cd /iscsi
# create iqn.2016-02.com.tecmint.server:tgt1

Define LUNs in Target Server

Fig 2: Define LUNs in Target Server

With the above step, a new TPG (Target Portal Group) was created along with the default portal (a pair consisting of an IP address and a port which is the way initiators can reach the target) listening on port 3260 of all IP addresses.

If you want to bind your portal to a specific IP (the Target’s main IP, for example), delete the default portal and create a new one as follows (otherwise, skip the following targetcli commands. Note that for simplicity we have skipped them as well):

# cd /iscsi/iqn.2016-02.com.tecmint.server:tgt1/tpg1/portals
# delete 0.0.0.0 3260
# create 192.168.0.29 3260

Now we are ready to proceed with the creation of LUNs. Note that we are using the backstores we previously created (server.backups and server.projects). This process is illustrated in Fig. 3:

# cd iqn.2016-02.com.tecmint.server:tgt1/tpg1/luns
# create /backstores/block/server.backups
# create /backstores/block/server.projects

Create LUNs in iSCSI Target Server

Fig 3: Create LUNs in iSCSI Target Server

The last part in the Target configuration consists of creating an Access Control List to restrict access on a per-initiator basis. Since our client machine is named “client”, we will append that text to the IQN. Refer to Fig. 4 for details:

# cd ../acls
# create iqn.2016-02.com.tecmint.server:client

Create Access Control List for Initiator

Fig 4: Create Access Control List for Initiator

At this point we can use the targetcli shell to show all configured resources, as we can see in Fig. 5:

# targetcli
# cd /
# ls

Use targetcli to Check Configured Resources

Fig 5: Use targetcli to Check Configured Resources

To quit the targetcli shell, simply type exit and press Enter. The configuration will be saved automatically to /etc/target/saveconfig.json.

As you can see in Fig. 5 above, we have a portal listening on port 3260 of all IP addresses as expected. We can verify that using netstat command (see Fig. 6):

# netstat -npltu | grep 3260

Verify iSCSI Target Server Port Listening

Fig 6: Verify iSCSI Target Server Port Listening

This concludes the Target configuration. Feel free to restart the system and verify that all settings survive a reboot. If not, make sure to open the necessary ports in the firewall configuration and to start the target service on boot. We are now ready to set up the Initiator and to connect to the client.

Step 3: Setting up the Client Initiator

In the client we will need to install the iscsi-initiator-utils package, which provides the daemon for the iSCSI protocol (iscsid) as well as iscsiadm, the administration utility:

# yum update && yum install iscsi-initiator-utils

Once the installation completes, open /etc/iscsi/initiatorname.iscsi and replace the default initiator name (commented in Fig. 7) with the name that was previously set in the ACL on the server (iqn.2016-02.com.tecmint.server:client).
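
After the edit, /etc/iscsi/initiatorname.iscsi should contain a single line similar to this:

InitiatorName=iqn.2016-02.com.tecmint.server:client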

Then save the file and run iscsiadm in discovery mode pointing to the target. If successful, this command will return the target information as shown in Fig. 7:

# iscsiadm -m discovery -t st -p 192.168.0.29

Setting Up Client Initiator

Fig 7: Setting Up Client Initiator

The next step consists in restarting and enabling the iscsid service:

# systemctl start iscsid
# systemctl enable iscsid

and contacting the target in node mode. This should result in kernel-level messages which, when captured through dmesg, show the device identification that the remote LUNs have been given in the local system (sde and sdf in Fig. 8):

# iscsiadm -m node -T iqn.2016-02.com.tecmint.server:tgt1 -p 192.168.0.29 -l
# dmesg | tail

Connecting to iSCSI Target Server in Node Mode

Fig 8: Connecting to iSCSI Target Server in Node Mode

From this point on, you can create partitions, or even LVs (and filesystems on top of them) as you would do with any other storage device. For simplicity, we will create a primary partition on each disk that will occupy its entire available space, and format it with ext4.
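
For example, assuming the LUNs showed up as /dev/sde and /dev/sdf as in Fig. 8, the partitioning and formatting could be done along these lines:

# parted -s /dev/sde mklabel msdos mkpart primary 1MiB 100%
# parted -s /dev/sdf mklabel msdos mkpart primary 1MiB 100%
# mkfs.ext4 /dev/sde1
# mkfs.ext4 /dev/sdf1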

Finally, let’s mount /dev/sde1 and /dev/sdf1 on /projects and /backups, respectively (note that these directories must be created first):

# mount /dev/sde1 /projects
# mount /dev/sdf1 /backups

Additionally, you can add two entries in /etc/fstab in order for both filesystems to be mounted automatically at boot using each filesystem’s UUID as returned by blkid.

Note that the _netdev mount option must be used in order to defer the mounting of these filesystems until the network service has been started:

Find Filesystem UUID

Fig 9: Find Filesystem UUID
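
The resulting /etc/fstab entries would look similar to the lines below, where the UUID values are placeholders to be replaced with the actual output of blkid:

UUID=your-sde1-uuid-here    /projects   ext4    defaults,_netdev    0 0
UUID=your-sdf1-uuid-here    /backups    ext4    defaults,_netdev    0 0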

You can now use these devices as you would with any other storage media.

Summary

In this article we have covered how to set up and configure an iSCSI Target and an Initiator in RHEL/CentOS 7 distributions. Although the first task is not part of the required competencies of the EX300 (RHCE) exam, it is needed in order to implement the second topic.

Don’t hesitate to let us know if you have any questions or comments about this article – feel free to drop us a line using the comment form below.

If you are looking to set up an iSCSI Target and Client Initiator on RHEL/CentOS 6, follow this guide: Setting Up Centralized iSCSI Storage with Client Initiator.

Setting Up “NTP (Network Time Protocol) Server” in RHEL/CentOS 7

Network Time Protocol (NTP) is a protocol that runs over UDP port 123 at the transport layer and allows computers to synchronize time over networks. As time passes, computers’ internal clocks tend to drift, which can lead to inconsistent time issues, especially in server and client log files, or if you want to replicate server resources or databases.

NTP Server Install in CentOS

NTP Server Installation in CentOS and RHEL 7

Requirements:

  1. CentOS 7 Installation Procedure
  2. RHEL 7 Installation Procedure

Additional Requirements:

  1. Register and Enable RHEL 7 Subscription for Updates
  2. Configure Static IP Address on CentOS/RHEL 7
  3. Disable and Remove Unwanted Services in CentOS/RHEL 7

This tutorial will demonstrate how you can install and configure an NTP server on CentOS/RHEL 7 and automatically synchronize time with the geographically closest peers available for your server’s location, using the NTP Public Pool Time Servers list.

Step 1: Install and configure NTP daemon

1. The NTP server package is provided by default in the official CentOS/RHEL 7 repositories and can be installed by issuing the following command.

# yum install ntp

Install NTP in CentOS

Install NTP Server

2. After the package is installed, go to the official NTP Public Pool Time Servers site, choose the continent where your server is physically located, then search for your country, and a list of NTP servers will appear.

NTP Pool Server

NTP Pool Server

3. Then open the NTP daemon’s main configuration file (/etc/ntp.conf) for editing, comment out the default list of public servers from the pool.ntp.org project, and replace it with the list provided for your country, as in the screenshot below.

Configure NTP Server in CentOS

Configure NTP Server
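As a hedged illustration (using the Romanian pool servers, which also appear later in this guide with ntpdate), the edited server section of /etc/ntp.conf would look something like this:

# Comment out the default CentOS/RHEL pool servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Servers suggested for your country (Romania in this example)
server 0.ro.pool.ntp.org iburst
server 1.ro.pool.ntp.org iburst
server 2.ro.pool.ntp.org iburst
server 3.ro.pool.ntp.org iburst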

4. Next, you need to allow clients from your networks to synchronize time with this server. To accomplish this, add the following line to the NTP configuration file, where the restrict statement controls which network is allowed to query and sync time (replace the network address and netmask accordingly).

restrict 192.168.1.0 netmask 255.255.255.0 nomodify notrap

The nomodify and notrap statements mean that your clients are not allowed to configure the server or to be used as peers for time synchronization.

5. If you need additional information for troubleshooting in case there are problems with your NTP daemon, add a logfile statement, which will record all NTP server issues in a dedicated log file.

logfile /var/log/ntp.log

Enable NTP Logs in CentOS

Enable NTP Logs

6. After you have edited the file with all the configuration explained above, save and close the ntp.conf file. Your final configuration should look like the screenshot below.

NTP Server Configuration in CentOS

NTP Server Configuration

Step 2: Add Firewall Rules and Start NTP Daemon

7. The NTP service uses UDP port 123 at the OSI transport layer (layer 4). NTP is designed particularly to resist the effects of variable latency (jitter). To open this port on RHEL/CentOS 7, run the following commands against the Firewalld service.

# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload

Open NTP Port in Firewall

Open NTP Port in Firewall

8. After you have opened firewall port 123, start the NTP server and make sure it is enabled to start at boot. Use the following commands to manage the service.

# systemctl start ntpd
# systemctl enable ntpd
# systemctl status ntpd

Start NTP Service

Start NTP Service

Step 3: Verify Server Time Sync

9. After the NTP daemon has been started, wait a few minutes for the server to synchronize time with its pool of servers, then run the following commands to verify the NTP peer synchronization status and your system time.

# ntpq -p
# date -R

Verify NTP Server Time

Verify NTP Time Sync

10. If you want to check time against a pool of your choice, use the ntpdate command followed by the server address or addresses, as in the following command-line example (the -q option queries the servers without setting the clock; drop it to actually synchronize).

# ntpdate -q  0.ro.pool.ntp.org  1.ro.pool.ntp.org

Synchronize NTP Time

Synchronize NTP Time

Step 4: Setup Windows NTP Client

11. If your Windows machine is not joined to a domain, you can configure Windows to synchronize time with your NTP server by going to Time on the right side of the Taskbar -> Change Date and Time Settings -> Internet Time tab -> Change Settings -> check Synchronize with an Internet time server -> enter your server’s IP or FQDN in the Server field -> Update now -> OK.

Synchronize Windows Time with NTP

Synchronize Windows Time with NTP
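Alternatively, if you prefer the command line on Windows, the built-in w32tm utility can typically achieve the same result from an elevated Command Prompt (192.168.1.20 below is a placeholder for your NTP server’s IP or FQDN and is not taken from the original setup):

w32tm /config /manualpeerlist:"192.168.1.20" /syncfromflags:manual /update
w32tm /resync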

That’s all! Setting up a local NTP server on your network ensures that all your servers and clients keep the same time even in case of an Internet connectivity failure, and that they all stay synchronized with each other.
