How to Install Countly Analytics in CentOS and Debian Based Systems

Countly is a feature-rich, open source, highly extensible real-time mobile and web analytics, push notification and crash reporting software powering more than 2.5k websites and 12k mobile applications.

It works in a client/server model; the server gathers data from mobile devices and other Internet-connected devices, while the client (mobile, web or desktop SDK) displays this information in a format which analyzes application usage and end-user behavior.

Watch a 1 minute video introduction to Countly.

Countly Analytics Features:

  • Supports centralized management.
  • Powerful dashboard user interface (supports multiple, custom and API dashboards).
  • Provides user, application and permission management functionalities.
  • Offers multiple application support.
  • Supports read and write APIs.
  • Supports a variety of plugins.
  • Offers analytics features for mobile, web and desktop.
  • Supports crash reporting for iOS and Android and error reporting for JavaScript.
  • Supports rich and interactive push notifications for iOS and Android.
  • Supports custom email reporting.

Requirements

Countly can be easily installed via its installation script on a freshly installed CentOS, RHEL, Debian or Ubuntu system with no services listening on port 80 or 443.

  1. Installation of CentOS 7 Minimal
  2. Installation of RHEL 7 Minimal
  3. Installation of Debian 9 Minimal

In this article, we will guide you on how to install and manage Countly Analytics from the command line in CentOS and Debian based systems.

Step 1: Install Countly Server

1. Luckily, there is an installation script prepared for you which will install all the dependencies as well as Countly server on your system.

Simply download the script using the wget command and run it thereafter as follows.

# wget -qO- http://c.ly/install | bash

Important: Disable SELinux on CentOS or RHEL if it’s enabled. Countly will not work on a server where SELinux is enabled.

Installation takes about 6-8 minutes. Once it completes, open the URL in a web browser to create your admin account and log in to your dashboard.

http://localhost 
OR
http://SERVER_IP

Create Countly Admin Account

2. You will land in the interface below where you can add an App to your account to start collecting data. To populate an app with random/demo data, check the option “Demo data”.

Countly Add App

3. Once the app has been populated, you will see an overview of the test app as shown. To manage applications, users, plugins, etc., click on the Management menu item.

Countly App Analytics

Step 2: Manage Countly From Linux Terminal

4. Countly ships with several commands to manage the process. You can perform most tasks via the Countly user interface, but the countly command, which can be run with the following syntax, does the job for command-line users.

$ sudo countly version  	#prints Countly version
$ sudo countly start    	#starts Countly
$ sudo countly stop     	#stops Countly
$ sudo countly restart  	#restarts Countly
$ sudo countly status   	#shows process status
$ sudo countly test     	#runs the Countly test set
$ sudo countly dir      	#prints the Countly installation path

Step 3: Backup and Restore Countly

5. To configure automatic backups for Countly, you can run the countly backup command manually or assign a cron job that runs every day or week (see the example cron entry below). This cron job should ideally back up Countly data to a directory of your choice.

The following command backs up the Countly database, configuration and user files (e.g. app images, user images, certificates, etc.).

$ sudo countly backup /var/backups/countly
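For instance, a daily entry in root's crontab (sudo crontab -e) could look like the following minimal sketch; the 2:00 am schedule, the backup directory and the assumption that the binary resolves to /usr/bin/countly (verify with which countly) are all yours to adjust.

# run a full Countly backup every day at 2:00 am (binary path assumed)
0 2 * * * /usr/bin/countly backup /var/backups/countly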

Additionally, you can back up the files or the database separately by executing:

$ sudo countly backupdb /var/backups/countly
$ sudo countly backupfiles /var/backups/countly

6. To restore Countly from a backup, issue the command below (specifying the backup directory).

$ sudo countly restore /var/backups/countly

Likewise, restore only the files or the database separately as follows.

$ sudo countly restorefiles /var/backups/countly
$ sudo countly restoredb /var/backups/countly

Step 4: Upgrade Countly Server

7. To initiate the upgrade process, run the command below. It runs npm to install any new dependencies and then runs grunt dist-all to minify all files and create production builds from them for faster loading.

Lastly, it restarts Countly's Node.js process so that the changes from the two previous steps take effect.

$ sudo countly upgrade 	
$ countly usage 

For more information visit official site: https://github.com/countly/countly-server

In this article, we guided you on how to install and manage Countly Analytics server from the command line in CentOS and Debian based systems. As usual, send us your queries or thoughts concerning this article via the response form below.

Source

5 Ways to Find a ‘Binary Command’ Description and Location on File System

With the thousands of commands/programs available in Linux systems, knowing the type and purpose of a given command, as well as its location (absolute path) on the system, can be a bit of a challenge for newbies.

Knowing a few details about commands/programs not only helps a Linux user master the numerous commands, but also enables the user to understand which operations to use them for, either from the command line or in a script.

Therefore, in this article we will explain to you five useful commands for showing a short description and the location of a given command.

To discover new commands on your system, look into all the directories in your PATH environment variable. These directories store all the installed commands/programs on the system.

Once you find an interesting command name, before you proceed to read more about it (probably in its man page), try to gather some basic information about it as follows.

Assume you have echoed the value of PATH, moved into the directory /usr/local/bin and noticed a new command called fswatch (which monitors file modification changes):

$ echo $PATH
$ cd /usr/local/bin

Find New Commands in Linux

Now let's find out the description and location of the fswatch command using the following different ways in Linux.

1. whatis Command

whatis is used to display one-line manual page descriptions of the command name (such as fswatch in the command below) you enter as an argument.

If the description is too long, some parts are trimmed off by default; use the -l flag to show the complete description.

$ whatis fswatch
$ whatis -l fswatch

Linux whatis Command Example

2. apropos Command

apropos searches the manual page names and descriptions for the keyword (treated as a regex, here the command name) provided.

The -l option enables showing of the complete description.

$ apropos fswatch 
$ apropos -l fswatch

Linux apropos Command Example

By default, apropos may show output for all matched lines, as in the example below. To match only the exact keyword, use the -e switch:

$ apropos fmt
$ apropos -e fmt

Linux apropos Command Show by Keyword

3. type Command

type tells you the full pathname of a given command. Additionally, if the command name entered is not a program that exists as a separate disk file, type also tells you the command classification:

  1. Shell built-in command or
  2. Shell keyword or reserved word or
  3. An alias
$ type fswatch 

Linux type Command Example

When the command is an alias for another command, type shows the command executed when the alias is run. Use the alias command to view all aliases created on your system:

$ alias
$ type l
$ type ll

Show All Aliases in Linux

4. which Command

which helps to locate a command; it prints the absolute path of the command as shown below:

$ which fswatch 

Find Linux Command Location

Some binaries may be stored in more than one directory listed in your PATH; use the -a flag to show all matching pathnames, as in the example below.
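For instance, sticking with the running example (output depends on what is installed on your system):

$ which -a fswatch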

5. whereis Command

The whereis command locates the binary, source and manual page files for the command name provided, as follows:

$ whereis fswatch
$ whereis mkdir 
$ whereis rm
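whereis also accepts options to narrow the search; for example, -b restricts it to binaries and -m to manual pages (a quick illustration):

$ whereis -b fswatch
$ whereis -m fswatch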

Linux whereis Command Example

Although the commands above may be vital for finding quick info about a command/program, opening and reading through its manual page always provides the full documentation, including a list of other related programs:

$ man fswatch

In this article, we reviewed five simple commands used to display short manual page descriptions and location of a command. You can make a contribution to this post or ask a question via the feedback section below.

Source

How to Install Snipe-IT (IT Asset Management) on CentOS and Ubuntu

Snipe-IT is a free and open source, cross-platform, feature-rich IT asset management system built using a PHP framework called Laravel. It is web-based software, which enables IT administrators in medium to large enterprises to track physical assets, software licenses, accessories and consumables in a single place.

Check out a live, up-to-date version of Snipe-IT Asset Management Tool: https://snipeitapp.com/demo

Snipe-IT Features:

  1. It is cross-platform – works on Linux, Windows and Mac OS X.
  2. It is mobile-friendly for easy asset updates.
  3. Easily Integrates with Active Directory and LDAP.
  4. Slack notification integration for checkin/checkout.
  5. Supports one-click (or cron) backups and automated backups.
  6. Supports optional two-factor authentication with Google authenticator.
  7. Supports generation of custom reports.
  8. Supports custom status labels.
  9. Supports bulk user actions and user role management for different levels of access.
  10. Supports several languages for easy localization and so much more.

In this article, I will explain how to install an IT asset management system called Snipe-IT using a LAMP (Linux, Apache, MySQL & PHP) stack on CentOS and Debian based systems.

Step 1: Install LAMP Stack

1. First update the system (that is, update the list of packages that need to be upgraded and add new packages that have entered the repositories enabled on the system).

$ sudo apt update        [On Debian/Ubuntu]
$ sudo yum update        [On CentOS/RHEL] 

2. Once the system has been updated, install the LAMP (Linux, Apache, MySQL & PHP) stack with all the needed PHP modules as shown.

Install LAMP on Debian/Ubuntu

$ sudo apt install apache2 apache2-utils libapache2-mod-php mariadb-server mariadb-client php php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt 

Install LAMP on CentOS/RHEL

3. Snipe-IT requires a PHP version greater than 5.5.9, and since PHP 5.5 has reached end of life, to get PHP 5.6 you need to enable the Remi repository as shown.

$ sudo rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
$ sudo yum -y install yum-utils
$ sudo yum-config-manager --enable remi-php56

4. Next, install PHP 5.6 on CentOS 7 with the required modules needed by Snipe-IT.

$ sudo yum install httpd mariadb mariadb-server php php-openssl php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt

5. After the LAMP stack installation completes, start the web server and enable it to start on the next system boot with the following commands.

$ sudo systemctl start apache2        [On Debian/Ubuntu]
$ sudo systemctl enable apache2       [On Debian/Ubuntu]
$ sudo systemctl start httpd          [On CentOS/RHEL]
$ sudo systemctl enable httpd         [On CentOS/RHEL]

6. Next, to verify the Apache and PHP installation and their current configuration from a web browser, create an info.php file in the Apache DocumentRoot (/var/www/html) using the following command.

$ echo "<?php  phpinfo(); ?>" | sudo tee /var/www/html/info.php

Now open a web browser and navigate to the following URLs to verify the Apache and PHP configuration.

http://SERVER_IP/
http://SERVER_IP/info.php 
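Once you have verified the setup, consider removing the info.php file again, since it exposes details about your server:

$ sudo rm /var/www/html/info.php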

7. Next, you need to secure and harden your MySQL installation using the following command.

$ sudo mysql_secure_installation     

You will be asked to set a strong root password for MariaDB and to answer Y to all of the other questions asked (they are self-explanatory).

8. Finally, start the MySQL server and enable it to start at the next system boot.

$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
OR
$ sudo systemctl start mysql
$ sudo systemctl enable mysql

Step 2: Create Snipe-IT Database on MySQL

9. Now log in to the MariaDB shell and create a database for Snipe-IT, a database user and set a suitable password for the user as follows.

$ mysql -u root -p

Provide the password for the MariaDB root user.

MariaDB [(none)]> CREATE DATABASE snipeit_db;
MariaDB [(none)]> CREATE USER 'tecmint'@'localhost' IDENTIFIED BY 't&cmint@190root';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON snipeit_db.* TO 'tecmint'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit

Step 3: Install Composer – PHP Manager

10. Now you need to install Composer – a dependency manager for PHP, with the commands below.

$ sudo curl -sS https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/local/bin/composer
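You can quickly confirm that Composer is available on your PATH:

$ composer --version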

Step 4: Install Snipe-IT Asset Management

11. First install Git to fetch and clone the latest version of Snipe-IT into the Apache web root directory.

$ sudo apt -y install git      [On Debian/Ubuntu]
$ sudo yum -y install git      [On CentOS/RHEL]

$ cd  /var/www/
$ sudo git clone https://github.com/snipe/snipe-it.git

12. Now go into the snipe-it directory and rename the .env.example file to .env.

$ cd snipe-it
$ ls
$ sudo mv .env.example .env

Step 5: Configure Snipe-IT Asset Management

13. Next, configure the Snipe-IT environment; here you'll provide the database connection settings, among other things.

First open the .env file.

$ sudo vi .env

Then find and change the following variables according to the instructions given.

APP_TIMEZONE=Africa/Kampala                                   #Change it according to your country
APP_URL=http://10.42.0.1/setup                                #set your domain name or IP address
APP_KEY=base64:BrS7khCxSY7282C1uvoqiotUq1e8+TEt/IQqlh9V+6M=   #set your app key
DB_HOST=localhost                                             #set it to localhost
DB_DATABASE=snipeit_db                                        #set the database name
DB_USERNAME=tecmint                                           #set the database username
DB_PASSWORD=password                                          #set the database user password

Save and close the file.

14. Now you need to set the appropriate permissions on certain directories as follows.

$ sudo chmod -R 755 storage 
$ sudo chmod -R 755 public/uploads
$ sudo chown -R www-data:www-data storage public/uploads   [On Debian/Ubuntu]
$ sudo chown -R apache:apache storage public/uploads       [On CentOS/RHEL]

15. Next, install all the dependencies required by PHP using Composer dependency manager as follows.

$ sudo composer install --no-dev --prefer-source

16. Now you can generate the “APP_KEY” value with the following command (this will be set automatically in the .env file).

$ sudo php artisan key:generate

17. Now, you need to create a virtual host file on the web server for Snipe-IT.

$ sudo vi /etc/apache2/sites-available/snipeit.example.com.conf     [On Debian/Ubuntu]
$ sudo vi /etc/httpd/conf.d/snipeit.example.com.conf                [On CentOS/RHEL]

Then add the lines below to your Apache config file (use your own server IP address or hostname here).

<VirtualHost 10.42.0.1:80>
    ServerName snipeit.tecmint.lan
    DocumentRoot /var/www/snipe-it/public
    <Directory /var/www/snipe-it/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

Save and close the file.

18. On Debian/Ubuntu, you need to enable the virtual host, mod_rewrite and mcrypt using the following commands.

$ sudo a2ensite snipeit.example.com.conf
$ sudo a2enmod rewrite
$ sudo php5enmod mcrypt

19. Lastly, restart the Apache web server for the new changes to take effect.

$ sudo systemctl restart apache2       [On Debian/Ubuntu]
$ sudo systemctl restart httpd         [On CentOS/RHEL]

Step 6: Snipe-IT Web Installation

20. Now open your web browser and navigate to http://SERVER_IP to view the Snipe-IT web installation interface.

First you will see the Pre-Flight Check page below; click Next: Create Database Tables.

Snipe-IT Pre Flight Check

21. You will now see all the tables created, click Next: Create User.

Create Snipe-IT User

22. Here, provide all the admin user information and click Next: Save User.

Snipe-IT User Information

23. Finally open the login page using the URL http://SERVER_IP/login as shown below and login to view the Snipe-IT dashboard.

Snipe-IT Login

Snipe-IT Dashboard

Snipe-IT Homepage: https://snipeitapp.com/

In this article, we discussed how to set up Snipe-IT with the LAMP (Linux, Apache, MySQL, PHP) stack on CentOS and Debian based systems. If you run into any issues, do share them with us using our comment form below.

Source

5 Useful Tools to Remember Linux Commands Forever

There are thousands of tools, utilities, and programs that come pre-installed on a Linux system. You can run them from a terminal window or virtual console as commands via a shell such as Bash.

A command is typically the pathname (e.g. /usr/bin/top) or basename (e.g. top) of a program, including any arguments passed to it. However, there is a common misconception among Linux users that a command is the actual program or tool.

Read Also: A-Z Linux Commands – Overview with Examples

Remembering Linux commands and their usage is not easy, especially for new Linux users. In this article, we will share 5 command-line tools for remembering Linux commands.

1. Bash History

Bash records all unique commands executed by users on the system in a history file. Each user's bash history file is stored in their home directory (e.g. /home/tecmint/.bash_history for user tecmint). A user can only view his/her own history file content, while root can view the bash history files of all users on the system.

To view your bash history, use the history command as shown.

$ history  

View User History Command

To fetch a command from bash history, press the Up arrow key repeatedly to move back through the list of commands you ran previously. If you skip past the command you're looking for, use the Down arrow key to move forward again.
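You can also filter the history with grep and re-run a specific entry by its number; a quick illustration (the entry number 1015 is hypothetical, use a number from your own history output):

$ history | grep ssh       #list previous commands containing "ssh"
$ !1015                    #re-run history entry number 1015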

This bash feature is one of the many ways of easily remembering Linux commands. You can find more examples of the history command in these articles:

  1. The Power of Linux “History Command” in Bash Shell
  2. How to Clear BASH Command Line History in Linux

2. Friendly Interactive Shell (Fish)

Fish is a modern, powerful, user-friendly, feature-rich and interactive shell that is compatible with Bash and Zsh. It supports automatic suggestions of file names and commands based on the current directory and your history, which helps you easily remember commands.

In the following screenshot, the command "uname -r" is in the bash history. To recall it easily, type the letter "u" or "un" and fish will auto-suggest the complete command. If the suggested command is the one you wish to run, press the Right arrow key to accept it and run it.

Fish – Friendly Interactive Shell

Fish is a fully-fledged shell program with a wealth of features that help you remember Linux commands in a straightforward manner.
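If you want to try it, fish is packaged in most distributions; on Debian/Ubuntu-based systems the installation typically looks like this (the package name may differ elsewhere):

$ sudo apt-get install fish
$ fish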

3. Apropos Tool

Apropos searches for and displays the name and short description of a keyword, for instance a command name, as written in that command's man page.

Read Also: 5 Ways to Find a Linux Command Description and Location

If you do not know the exact name of a command, simply type a keyword (a regular expression) to search for it. For example, if you are searching for the description of the docker-commit command, you can type docker; apropos will search for and list all commands containing the string docker, along with their descriptions.

$ apropos docker

Find Linux Command Description

You can also get the description of only the exact keyword or command name you have provided, as shown.

$ apropos docker-commit
OR
$ apropos -a docker-commit

This is another useful way of remembering Linux commands; it can guide you on which command to use for a specific task, or remind you what a command is used for if you have forgotten. Read on, because the next tool is even more interesting.

4. Explain Shell Script

Explain Shell is a small Bash script that explains shell commands. It requires the curl program and a working internet connection. It displays a summary description of a command and, if the command includes a flag, a description of that flag as well.

To use it, first add the following code at the bottom of your $HOME/.bashrc file.

# explain.sh begins
explain () {
  if [ "$#" -eq 0 ]; then
    while read  -p "Command: " cmd; do
      curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
    done
    echo "Bye!"
  elif [ "$#" -eq 1 ]; then
    curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
  else
    echo "Usage"
    echo "explain                  interactive mode."
    echo "explain 'cmd -o | ...'   one quoted command to explain it."
  fi
}

Save and close the file, then source it or open a fresh terminal window.

$ source ~/.bashrc

Assuming you have forgotten what the command "apropos -a" does, you can use the explain function to help you remember it, as shown.

$ explain 'apropos -a'

Show Linux Command Manual

This script can explain any shell command effectively, thus helping you remember Linux commands. Unlike the explain shell script, the next tool takes a distinct approach: it shows usage examples of a command.

5. Cheat Program

Cheat is a simple, interactive command-line cheat-sheet program which shows use cases of a Linux command with a number of options and a short, understandable description of each. It is useful for Linux newbies and sysadmins.

To install and use it, check out our complete article about Cheat program and its usage with examples:

  1. Cheat – An Ultimate Command Line ‘Cheat-Sheet’ for Linux Beginners

That’s all! In this article, we have shared 5 command-line tools for remembering Linux commands. If you know any other tools for the same purpose that are missing in the list above, let us know via the feedback form below.

Source

How to Enable, Disable and Install Yum Plug-ins

YUM plug-ins are small programs that extend and improve the overall functionality of the package manager. A few of them are installed by default, while many are not. Yum always notifies you which plug-ins, if any, are loaded and active whenever you run any yum command.

In this short article, we will explain how to turn on or off and configure YUM package manager plug-ins in CentOS/RHEL distributions.

To see all active plug-ins, run a yum command on the terminal. From the output below, you can see that the fastestmirror plug-in is loaded.

# yum search nginx

Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
...

Enabling YUM Plug-ins

To enable yum plug-ins, ensure that the directive plugins=1 (1 meaning on) exists under the [main] section in the /etc/yum.conf file, as shown below.

# vi /etc/yum.conf
Yum Configuration File
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5

This is the general method of enabling yum plug-ins globally. As we will see later on, you can also enable or disable them individually in their respective configuration files.

Disabling YUM Plug-ins

To disable yum plug-ins, simply change the value above to 0 (meaning off), which disables all plug-ins globally.

plugins=0	

At this stage, it is useful to note that:

  • Since a few plug-ins (such as product-id and subscription-manager) offer fundamental yum functionalities, it is not recommended to turn off all plug-ins especially globally.
  • Secondly, disabling all plug-ins globally is only intended as a quick way out, for example when investigating a likely problem with yum.
  • Configurations for various plug-ins are located in /etc/yum/pluginconf.d/.
  • Disabling plug-ins globally in /etc/yum.conf overrides settings in individual configuration files.
  • And you can also disable a single or all yum plug-ins when running yum, as described later on.

Installing and Configuring Extra YUM Plug-ins

You can view a list of all yum plug-ins and their descriptions using this command.

# yum search yum-plugin

Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Loading mirror speeds from cached hostfile
 * base: mirror.sov.uk.goscomb.net
 * epel: www.mirrorservice.org
 * extras: mirror.sov.uk.goscomb.net
 * updates: mirror.sov.uk.goscomb.net
========================================================================= N/S matched: yum-plugin ==========================================================================
PackageKit-yum-plugin.x86_64 : Tell PackageKit to check for updates when yum exits
fusioninventory-agent-yum-plugin.noarch : Ask FusionInventory agent to send an inventory when yum exits
kabi-yum-plugins.noarch : The CentOS Linux kernel ABI yum plugin
yum-plugin-aliases.noarch : Yum plugin to enable aliases filters
yum-plugin-auto-update-debug-info.noarch : Yum plugin to enable automatic updates to installed debuginfo packages
yum-plugin-changelog.noarch : Yum plugin for viewing package changelogs before/after updating
yum-plugin-fastestmirror.noarch : Yum plugin which chooses fastest repository from a mirrorlist
yum-plugin-filter-data.noarch : Yum plugin to list filter based on package data
yum-plugin-fs-snapshot.noarch : Yum plugin to automatically snapshot your filesystems during updates
yum-plugin-keys.noarch : Yum plugin to deal with signing keys
yum-plugin-list-data.noarch : Yum plugin to list aggregate package data
yum-plugin-local.noarch : Yum plugin to automatically manage a local repo. of downloaded packages
yum-plugin-merge-conf.noarch : Yum plugin to merge configuration changes when installing packages
yum-plugin-ovl.noarch : Yum plugin to work around overlayfs issues
yum-plugin-post-transaction-actions.noarch : Yum plugin to run arbitrary commands when certain pkgs are acted on
yum-plugin-priorities.noarch : plugin to give priorities to packages from different repos
yum-plugin-protectbase.noarch : Yum plugin to protect packages from certain repositories.
yum-plugin-ps.noarch : Yum plugin to look at processes, with respect to packages
yum-plugin-remove-with-leaves.noarch : Yum plugin to remove dependencies which are no longer used because of a removal
yum-plugin-rpm-warm-cache.noarch : Yum plugin to access the rpmdb files early to warm up access to the db
yum-plugin-show-leaves.noarch : Yum plugin which shows newly installed leaf packages
yum-plugin-tmprepo.noarch : Yum plugin to add temporary repositories
yum-plugin-tsflags.noarch : Yum plugin to add tsflags by a commandline option
yum-plugin-upgrade-helper.noarch : Yum plugin to help upgrades to the next distribution version
yum-plugin-verify.noarch : Yum plugin to add verify command, and options
yum-plugin-versionlock.noarch : Yum plugin to lock specified packages from being updated

To install a plug-in, use the same method as for installing a package. For instance, we will install the changelog plug-in, which is used to display package changelogs before/after updating.

# yum install yum-plugin-changelog 

Once you have installed it, the changelog plug-in will be enabled by default; to confirm, take a look at its configuration file.

# vi /etc/yum/pluginconf.d/changelog.conf
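The file usually contains little more than an on/off switch along these lines (a sketch; your copy may carry extra options):

[main]
enabled=1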

Now you can view the changelog for a package (httpd in this case) like this.

# yum changelog httpd

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.linode.com
 * epel: mirror.freethought-internet.co.uk
 * extras: mirrors.linode.com
 * updates: mirrors.linode.com

Listing all changelogs

==================== Installed Packages ====================
httpd-2.4.6-45.el7.centos.4.x86_64       installed
* Wed Apr 12 17:30:00 2017 CentOS Sources <bugs@centos.org> - 2.4.6-45.el7.centos.4
- Remove index.html, add centos-noindex.tar.gz
- change vstring
- change symlink for poweredby.png
- update welcome.conf with proper aliases
...

Disable YUM Plug-ins in Command Line

As stated before, we can also turn off one or more plug-ins while running a yum command by using these two important options.

  • --noplugins – turns off all plug-ins
  • --disableplugin=plugin_name – disables a single plug-in

You can disable all plug-ins as in this yum command.

# yum search --noplugins yum-plugin

The next command disables the fastestmirror plug-in while installing the httpd package.

# yum install --disableplugin=fastestmirror httpd

Loaded plugins: changelog
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-45.el7.centos.4 will be updated
--> Processing Dependency: httpd = 2.4.6-45.el7.centos.4 for package: 1:mod_ssl-2.4.6-45.el7.centos.4.x86_64
---> Package httpd.x86_64 0:2.4.6-67.el7.centos.6 will be an update
...

That's it for now! You may also like to read the following YUM related articles.

  1. How to Use ‘Yum History’ to Find Out Installed or Removed Packages Info
  2. How to Fix Yum Error: Database Disk Image is Malformed

In this guide, we showed how to activate, configure or deactivate YUM package manager plug-ins in CentOS/RHEL 7. Use the comment form below to ask any question or share your views about this article.

Source

How to Backup/Restore MySQL/MariaDB and PostgreSQL Using ‘Automysqlbackup’ and ‘Autopostgresqlbackup’ Tools

If you are a database administrator (DBA) or are responsible for maintaining, backing up, and restoring databases, you know you can’t afford to lose data. The reason is simple: losing data not only means the loss of important information, but also may damage your business financially.

MySQL/MariaDB & PostgreSQL Backup/Restore

For that reason, you must always make sure that:

1. your databases are backed up on a periodic basis,
2. those backups are stored in a safe place, and
3. you perform restoration drills regularly.

This last activity should not be overlooked, as you don't want to run into a major issue without having practiced what needs to be done in such a situation.

In this tutorial we will introduce you to two nice utilities to back up MySQL / MariaDB and PostgreSQL databases, respectively: automysqlbackup and autopostgresqlbackup.

Since the latter is based on the former, we will focus our explanation on automysqlbackup and highlight differences with autopgsqlbackup, if any at all.

It is strongly recommended to store the backups in a network share mounted in the backup directory so that in the event of a system-wide crash, you will still be covered.


Installing MySQL / MariaDB / PostgreSQL Databases

1. This guide assumes you have a MySQL / MariaDB / PostgreSQL instance running. If not, please install the following packages:

Fedora-based distributions:

# yum update && yum install mariadb mariadb-server mariadb-libs postgresql postgresql-server postgresql-libs

Debian and derivatives:

# aptitude update && aptitude install mariadb-client mariadb-server mariadb-common postgresql-client postgresql postgresql-common

2. It also assumes you have a testing MySQL / MariaDB / PostgreSQL database that you can use (you are advised NOT to use either automysqlbackup or autopostgresqlbackup in a production environment until you have become acquainted with these tools).

Otherwise, create two sample databases and populate them with data before proceeding. In this article I will use the following databases and tables:

MySQL/MariaDB
CREATE DATABASE mariadb_db;
USE mariadb_db;
CREATE TABLE tecmint_tbl (UserID INT AUTO_INCREMENT PRIMARY KEY, 
UserName VARCHAR(50), 
IsActive BOOL);
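To have something to back up, you could also populate the table with a couple of rows, for example (sample values only; the PostgreSQL table below can be populated the same way):

INSERT INTO tecmint_tbl (UserName, IsActive) VALUES ('tecmint', 1), ('aaronkilik', 0);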

Create MySQL Database

PostgreSQL
CREATE DATABASE postgresql_db;
\c postgresql_db
CREATE TABLE tecmint_tbl (
UserID SERIAL PRIMARY KEY,
UserName VARCHAR(50),
IsActive BOOLEAN);

Create PostgreSQL Database

Installing automysqlbackup and autopgsqlbackup in CentOS 7 and Debian 8

3. In Debian 8, both tools are available in the repositories, so installing them is as simple as running:

# aptitude install automysqlbackup autopostgresqlbackup

Whereas in CentOS 7 you will need to download the installation scripts and run them. In the sections below we will focus exclusively on installing, configuring, and testing these tools on CentOS 7; for Debian 8, where they work almost out of the box, we will make the necessary clarifications later in this article.

Installing and configuring automysqlbackup in CentOS 7

4. Let us begin by creating a working directory inside /opt to download the installation script and run it:

# mkdir /opt/automysqlbackup
# cd /opt/automysqlbackup
# wget http://ufpr.dl.sourceforge.net/project/automysqlbackup/AutoMySQLBackup/AutoMySQLBackup%20VER%203.0/automysqlbackup-v3.0_rc6.tar.gz
# tar zxf automysqlbackup-v3.0_rc6.tar.gz
# ./install.sh

Installing AutoMySQLBackup in CentOS 7

5. The configuration file for automysqlbackup is located inside /etc/automysqlbackup under the name myserver.conf. Let's take a look at the most relevant configuration directives:

myserver.conf – Configure Automysqlbackup
# Username to access the MySQL server
CONFIG_mysql_dump_username='root'
# Password
CONFIG_mysql_dump_password='YourPasswordHere'
# Host name (or IP address) of MySQL server
CONFIG_mysql_dump_host='localhost'
# Backup directory
CONFIG_backup_dir='/var/backup/db/automysqlbackup'
# List of databases for Daily/Weekly Backup e.g. ( 'DB1' 'DB2' 'DB3' ... )
# set to (), i.e. empty, if you want to backup all databases
CONFIG_db_names=(AddYourDatabase Names Here)
# List of databases for Monthly Backups.
# set to (), i.e. empty, if you want to backup all databases
CONFIG_db_month_names=(AddYourDatabase Names Here)
# Which day do you want monthly backups? (01 to 31)
# If the chosen day is greater than the last day of the month, it will be done
# on the last day of the month.
# Set to 0 to disable monthly backups.
CONFIG_do_monthly="01"
# Which day do you want weekly backups? (1 to 7 where 1 is Monday)
# Set to 0 to disable weekly backups.
CONFIG_do_weekly="5"
# Set rotation of daily backups. VALUE*24hours
# If you want to keep only today's backups, you could choose 1, i.e. everything older than 24hours will be removed.
CONFIG_rotation_daily=6
# Set rotation for weekly backups. VALUE*24hours. A value of 35 means 5 weeks.
CONFIG_rotation_weekly=35
# Set rotation for monthly backups. VALUE*24hours. A value of 150 means 5 months.
CONFIG_rotation_monthly=150
# Include CREATE DATABASE statement in backup?
CONFIG_mysql_dump_create_database='no'
# Separate backup directory and file for each DB? (yes or no)
CONFIG_mysql_dump_use_separate_dirs='yes'
# Choose Compression type. (gzip or bzip2)
CONFIG_mysql_dump_compression='gzip'
# What would you like to be mailed to you?
# - log   : send only log file
# - files : send log file and sql files as attachments (see docs)
# - stdout : will simply output the log to the screen if run manually.
# - quiet : Only send logs if an error occurs to the MAILADDR.
CONFIG_mailcontent='quiet'
# Email Address to send mail to? (user@domain.com)
CONFIG_mail_address='root'
# Do you wish to encrypt your backups using openssl?
#CONFIG_encrypt='no'
# Choose a password to encrypt the backups.
#CONFIG_encrypt_password='password0123'
# Command to run before backups (uncomment to use)
#CONFIG_prebackup="/etc/mysql-backup-pre"
# Command run after backups (uncomment to use)
#CONFIG_postbackup="/etc/mysql-backup-post"

Once you have configured automysqlbackup as per your needs, you are strongly advised to check out the README file found in /etc/automysqlbackup/README.

MySQL Database Backup

6. When you’re ready, go ahead and run the program, passing the configuration file as argument:

# automysqlbackup /etc/automysqlbackup/myserver.conf

Configure Automysqlbackup on CentOS 7

A quick inspection of the daily directory will show that automysqlbackup has run successfully:

# pwd
# ls -lR daily

MySQL Daily Database Backup

Of course you can add a crontab entry to run automysqlbackup at a time of day that best suits your needs (1:30 am every day in the example below):

30 01 * * * /usr/local/bin/automysqlbackup /etc/automysqlbackup/myserver.conf

Restoring a MySQL Backup

7. Now let’s drop the mariadb_db database on purpose:
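At the MariaDB prompt, the drop shown in the screenshot boils down to:

MariaDB [(none)]> DROP DATABASE mariadb_db;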

Drop MariaDB Database

Let's create it again and restore the backup. At the MariaDB prompt, type:

CREATE DATABASE mariadb_db;
exit

Then locate the backup file:

# cd /var/backup/db/automysqlbackup/daily/mariadb_db
# ls

Locate MariaDB Database backup

And restore the backup:

# mysql -u root -p mariadb_db < daily_mariadb_db_2015-09-01_23h19m_Tuesday.sql
# mysql -u root -p
MariaDB [(none)]> USE mariadb_db; 
MariaDB [(none)]> SELECT * FROM tecmint_tbl;

Restore MariaDB Backup

Installing and configuring autopostgresqlbackup in CentOS 7

8. In order for autopostgresqlbackup to work flawlessly in CentOS 7, we will need to install some dependencies first:

# yum install mutt sendmail

Then let’s repeat the process as before:

# mkdir /opt/autopostgresqlbackup
# cd /opt/autopostgresqlbackup
# wget http://ufpr.dl.sourceforge.net/project/autopgsqlbackup/AutoPostgreSQLBackup/AutoPostgreSQLBackup-1.0/autopostgresqlbackup.sh.1.0
# mv autopostgresqlbackup.sh.1.0 /opt/autopostgresqlbackup/autopostgresqlbackup.sh

Let’s make the script executable and start / enable the service:

# chmod 755 autopostgresqlbackup.sh
# systemctl start postgresql
# systemctl enable postgresql

Finally, we will edit the value of the backup directory setting as follows:

autopostgresqlbackup.sh – Configure Autopostgresqlbackup
BACKUPDIR="/var/backup/db/autopostgresqlbackup"

After having gone through the configuration file of automysqlbackup, configuring this tool is very easy (that part of the task is left up to you).

9. In CentOS 7, as opposed to Debian 8, autopostgresqlbackup is best run as the postgres system user, so in order to do that you should either switch to that account or add a cron job to its crontab file:

# crontab -u postgres -e
30 01 * * * /opt/autopostgresqlbackup/autopostgresqlbackup.sh

The backup directory, by the way, needs to be created, and its permissions and group ownership must be set recursively to 0770 and postgres (again, this will NOT be necessary in Debian):

# mkdir /var/backup/db/autopostgresqlbackup
# chmod -R 0770 /var/backup/db/autopostgresqlbackup
# chgrp -R postgres /var/backup/db/autopostgresqlbackup

The result:

# cd /var/backup/db/autopostgresqlbackup
# pwd
# ls -lR daily

PostgreSQL Daily Database Backup

10. Now you can restore the files when needed (remember to do this as user postgres after recreating the empty database):

# gunzip -c postgresql_db_2015-09-02.Wednesday.sql.gz | psql postgresql_db

Considerations in Debian 8

As we mentioned earlier, not only is the installation of these tools on Debian more straightforward, but so is their respective configuration. You will find the configuration files in:

  1. Automysqlbackup: /etc/default/automysqlbackup
  2. Autopostgresqlbackup: /etc/default/autopostgresqlbackup

Summary

In this article we have explained how to install and use automysqlbackup and autopostgresqlbackup (learning how to use the first will help you master the second as well), two great database backup tools that can make your tasks as a DBA or system administrator / engineer much easier.

Please note that you can expand on this topic by setting up email notifications or sending backup files as attachments via email – not strictly required, but may come in handy sometimes.

As a final note, remember that the permissions of configuration files should be set to the minimum (0600 in most cases). We look forward to hearing what you think about this article. Feel free to drop us a note using the form below.

Source

Installing and Configuring X2Go Server and Client on Debian 8

Much of the power behind Linux comes from the command line and the ability for a system to be managed easily remotely. However, for most users from the Windows world or novice Linux administrators, there may be a preference to have access to the graphical user interface for remote management functionality.

Other users may simply have a desktop at home with graphical applications that need to be managed remotely as well. Whichever situation may be the case, there are some inherent security risks, such as the remote traffic not being encrypted, thus allowing malicious users to sniff the remote desktop session.

Install X2Go Server and Client in Debian

To solve this common issue with remote desktop systems, X2Go tunnels the remote desktop session through Secure Shell (SSH). While this is only one of the many benefits of X2Go, it is a very important one!

Features of X2Go

  1. Graphical remote desktop control.
  2. Tunneled through SSH.
  3. Sound support.
  4. File and printer sharing from client to server.
  5. Ability to access a single application rather than a whole desktop session.

Environment Setup

  1. This guide assumes a working Debian 8 (Jessie) setup with LXDE (other desktop environments are supported as well; please see this link).
  2. Another Linux client to install the X2Go client software (This guide uses Linux Mint 17.1 with the Cinnamon desktop environment).
  3. Working network connection with openssh-server already installed and working.
  4. Root access

Installation of X2Go Server and Client on Debian 8

This part of the process will require setting up the X2Go server as well as an X2Go client in order to have a remote desktop connection. The guide will start first with the server setup and then proceed to the client setup.

X2Go Server Installation

The server in this tutorial will be the Debian 8 system running LXDE. The start of the installation process is to add the X2Go Debian repository and obtain its GPG keys. The first step is to obtain the keys, which can easily be accomplished with apt.

# apt-key adv --recv-keys --keyserver keys.gnupg.net E1F958385BFE2B6E

Once the keys have been obtained, a repository file needs to be created for apt to look for the X2Go packages at a specific repository location. This can all be accomplished with one simple command that creates the needed apt list file and puts the appropriate entry into that file.

# echo "deb http://packages.x2go.org/debian jessie main" >> /etc/apt/sources.list.d/x2go.list
# apt-get update

The above commands will instruct apt to search this newly provided repository for packages and more specifically the X2Go packages. At this point, the system is ready to have the X2Go server installed using the apt meta-packager.

# apt-get install x2goserver

At this point the X2Go server should be installed and started. It is always a good idea to confirm that installed servers are running though.

# ps aux | grep x2go

Confirm X2Go Server Installed and Running

In the event that the system doesn’t automatically start X2Go, run the following command to attempt to start the service.

# service x2goserver start

At this point the basic server configuration should be done and the system should be waiting for connections from the X2Go client system.

X2Go Client Installation

The client installation is easier than the server installation. Most distributions already have the client in their provided repositories and this package can easily be installed with the apt meta-packager.

NOTE: Remember that this is done on a computer that is going to connect to the server setup in the previous paragraphs.

# apt-get install x2goclient

Assuming that apt doesn't return any issues, the X2Go client should be ready to go. Navigate to the X2Go executable via the client distribution's file explorer or launch the utility from the command line with the following command.

# x2goclient

X2Go Client Session

The above window is what you see when the X2Go client is first launched. Let's connect to the Debian server now!

In the Server field in the window on the right, enter the Debian system's IP address. The next box needs the user name of someone who can SSH into the Debian system.

The next thing to change is the Session type at the bottom. Since the Debian server is using LXDE, it is a good idea to select LXDE from the drop-down.

Again, not all desktop environments are supported at the moment; please reference the link at the top of this guide to see which desktop environments are supported or whether any workarounds are needed.

Once the above information has been entered, click the "Ok" button at the bottom of the window to finish setting up the session profile. The next step is to click and activate the newly created session. To do this, simply click on the session just created on the right in the X2Go client window.

Connect Remote Debian Desktop

Once this session is selected, it will prompt for the remote user's credentials. Again, these credentials are those of the user on the Debian server!

Remote Debian Desktop Login

Once the correct password is provided, the system will display the remote system's graphical desktop in a scalable window on the client system!

Debian Remote Desktop Access

Hopefully at this point, your X2Go system is working like the above systems and you are enjoying a secure remote desktop connection to a Debian server!

Best of luck with this new (and more secure) remote desktop solution for a Debian Linux system! Please feel free to share any comments or questions below and we’d be happy to assist.

Source

FreeFileSync – Compare and Synchronize Files in Ubuntu

FreeFileSync is a free, open source and cross platform folder comparison and synchronization software, which helps you synchronize files and folders on Linux, Windows and Mac OS.

It is portable and can also be installed locally on a system; it is feature-rich and is intended to save time in setting up and executing backup operations, while offering an attractive graphical interface as well.

FreeFileSync Features

Below are its key features:

  1. It can synchronize network shares and local disks.
  2. It can synchronize MTP devices (Android, iPhone, tablet, digital camera).
  3. It can also synchronize via SFTP (SSH File Transfer Protocol).
  4. It can identify moved and renamed files and folders.
  5. Displays disk space usage with directory trees.
  6. Supports copying locked files (Volume Shadow Copy Service).
  7. Identifies conflicts and propagates deletions.
  8. Supports comparison of files by content.
  9. It can be configured to handle Symbolic Links.
  10. Supports automation of sync as a batch job.
  11. Enables processing of multiple folder pairs.
  12. Supports in-depth and detailed error reporting.
  13. Supports copying of NTFS extended attributes (compressed, encrypted, sparse).
  14. Also supports copying of NTFS security permissions and NTFS Alternate Data Streams.
  15. Supports long file paths with more than 260 characters.
  16. Offers fail-safe file copying to prevent data corruption.
  17. Allows expanding of environment variables such as %UserProfile%.
  18. Supports accessing of variable drive letters by volume name (USB sticks).
  19. Supports managing of versions of deleted/updated files.
  20. Prevents disk space issues via an optimal sync sequence.
  21. Supports full Unicode.
  22. Offers a highly optimized run time performance.
  23. Supports filters to include and exclude files plus lots more.

How To Install FreeFileSync in Ubuntu Linux

We will add the official FreeFileSync PPA, which is available for Ubuntu 14.04 and Ubuntu 15.10 only, then update the system repository list and install it like so:

-------------- On Ubuntu 14.04 and 15.10 -------------- 
$ sudo apt-add-repository ppa:freefilesync/ffs
$ sudo apt-get update
$ sudo apt-get install freefilesync

On Ubuntu 16.04 and newer versions, go to the FreeFileSync download page and get the appropriate package file for Ubuntu and Debian Linux.

Next, move into the Downloads folder and extract the FreeFileSync_*.tar.gz archive into the /opt directory as follows:

$ cd Downloads/
$ sudo tar xvf FreeFileSync_*.tar.gz -C /opt/
$ cd /opt/
$ ls
$ sudo unzip FreeFileSync/Resources.zip -d /opt/FreeFileSync/Resources/

Now we will create an application launcher (.desktop file) using Gnome Panel. To view examples of .desktop files on your system, list the contents of the directory /usr/share/applications:

$ ls /usr/share/applications

In case you do not have Gnome Panel installed, type the command below to install it:

$ sudo apt-get install --no-install-recommends gnome-panel

Next, run the command below to create the application launcher:

$ sudo gnome-desktop-item-edit /usr/share/applications/ --create-new

And define the values below:

Type: 	   Application 
Name: 	   FreeFileSync
Command:   /opt/FreeFileSync/FreeFileSync		
Comment:   Folder Comparison and Synchronization

To add an icon for the launcher, simply click on the spring icon and select /opt/FreeFileSync/Resources/FreeFileSync.png.

When you have set all the above, click OK to create it.

Create Desktop Launcher
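Alternatively, if you prefer not to use gnome-panel, you can write the launcher file by hand; a minimal sketch using the same paths as above, saved for example as /usr/share/applications/freefilesync.desktop:

[Desktop Entry]
Type=Application
Name=FreeFileSync
Comment=Folder Comparison and Synchronization
Exec=/opt/FreeFileSync/FreeFileSync
Icon=/opt/FreeFileSync/Resources/FreeFileSync.png
Terminal=false
Categories=Utility;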

If you don't want to create a desktop launcher at all, you can start FreeFileSync from its installation directory itself.

$ cd /opt/FreeFileSync
$ ./FreeFileSync

How to Use FreeFileSync in Ubuntu

In Ubuntu, search for FreeFileSync in the Unity Dash, whereas in Linux Mint, search for it in the System Menu, and click on the FreeFileSync icon to open it.

FreeFileSync

Compare Two Folders Using FreeFileSync

In the example below, we’ll use:

Source Folder:	/home/aaronkilik/bin
Destination Folder:	/media/aaronkilik/J_CPRA_X86F/scripts

To compare the file time and size of the two folders (the default setting), simply click on the Compare button.

Compare Two Folders in Linux

Press F6 to change what is compared by default in the two folders (file time and size, content, or file size) from the interface below. Note that the meaning of each option you select is explained there as well.

File Comparison Settings

Synchronize Two Folders Using FreeFileSync

You can start by comparing the two folders, then click on the Synchronize button to start the synchronization process; click Start in the dialog box that appears thereafter:

Source Folder: /home/aaronkilik/Desktop/tecmint-files
Destination Folder: /media/aaronkilik/Data/Tecmint

Compare and Synchronize Two Folders

Start File Synchronization

File Synchronization Completed

To set the default synchronization option (two way, mirror, update or custom) from the following interface, press F8. The meaning of each option is explained there.

File Synchronization Settings

For more information, visit FreeFileSync homepage at http://www.freefilesync.org/

That's all! In this article, we showed you how to install FreeFileSync in Ubuntu and its derivatives such as Linux Mint, Kubuntu and many more. Drop your comments via the feedback section below.

Source

How to Setup Two-Factor Authentication (Google Authenticator) for SSH Logins

By default, SSH already uses secure data communication between remote machines, but if you want to add an extra security layer to your SSH connections, you can add a Google Authenticator (two-factor authentication) module that allows you to enter a random one-time password (TOTP) verification code while connecting to SSH servers. You'll have to enter the verification code from your smartphone or PC when you connect.

The Google Authenticator is an open-source module that includes implementations of one-time passcode (TOTP) verification tokens developed by Google. It supports several mobile platforms, as well as PAM (Pluggable Authentication Module). These one-time passcodes are generated using open standards created by OATH (the Initiative for Open Authentication).

SSH Two Factor Authentication

In this article I will show you how to set up and configure SSH for two-factor authentication under Red Hat, CentOS, Fedora, Ubuntu, Linux Mint and Debian.

Installing Google Authenticator Module

Open the machine on which you want to set up two-factor authentication and install the following PAM libraries along with the development libraries that are needed for the PAM module to work correctly with the Google Authenticator module.

On Red Hat, CentOS and Fedora systems, install the 'pam-devel' package.

# yum install pam-devel make gcc-c++ wget

On Ubuntu, Linux Mint and Debian systems, install the 'libpam0g-dev' package.

# apt-get install libpam0g-dev make gcc-c++ wget

Download and extract the Google Authenticator module under the home directory (this assumes you are already logged in to root's home directory).

# cd /root
# wget https://google-authenticator.googlecode.com/files/libpam-google-authenticator-1.0-source.tar.bz2
# tar -xvf libpam-google-authenticator-1.0-source.tar.bz2

Type the following commands to compile and install Google authenticator module on the system.

# cd libpam-google-authenticator-1.0
# make
# make install
# google-authenticator

Once you run the 'google-authenticator' command, it will prompt you with a series of questions. Simply type "y" (yes) as the answer in most situations. If something goes wrong, you can run the 'google-authenticator' command again to reset the settings.

  1. Do you want authentication tokens to be time-based (y/n) y

After this question, you will get your ‘secret key‘ and ‘emergency codes‘. Write down these details somewhere, we will need the ‘secret key‘ later on to setup Google Authenticator app.

[root@tecmint libpam-google-authenticator-1.0]# google-authenticator

Do you want authentication tokens to be time-based (y/n) y
https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/root@tecmint.com%3Fsecret%3DXEKITDTYCBA2TLPL
Your new secret key is: XEKITDTYCBA2TLPL
Your verification code is 461618
Your emergency scratch codes are:
  65083399
  10733609
  47588351
  71111643
  92017550

Next, follow the setup wizard and in most cases type answer as “y” (yes) as shown below.

Do you want me to update your "/root/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Configuring SSH to use Google Authenticator Module

Open the PAM configuration file ‘/etc/pam.d/sshd‘ and add the following line to the top of the file.

auth       required     pam_google_authenticator.so

Next, open the SSH configuration file '/etc/ssh/sshd_config' and scroll to find the line that says:

ChallengeResponseAuthentication no

Change it to “yes“. So, it becomes like this.

ChallengeResponseAuthentication yes

Finally, restart the SSH service for the new changes to take effect.

# /etc/init.d/sshd restart
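
On newer systemd-based releases (for example CentOS/RHEL 7 or Debian 8 and later), the init script may not be present; in that case restart the service with systemctl instead:

# systemctl restart sshd        [On RHEL/CentOS/Fedora]
# systemctl restart ssh         [On Debian/Ubuntu]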

Configuring Google Authenticator App

Launch the Google Authenticator app on your smartphone. Press Menu and choose “Setup an account“. If you don’t have this app, you can download and install the Google Authenticator app on your Android/iPhone/Blackberry device.

Google Authenticator Setup Account

Press “Enter key provided”.

Enter Google Authenticator Secret Key

Add your account ‘Name‘ and enter the ‘secret key‘ generated earlier.

Google Authenticator Account Name and Secret Key

It will generate a one-time password (verification code) that changes every 30 seconds on your phone.

Google Authenticator One Time Password

Now try to log in via SSH; you will be prompted for the Google Authenticator code (verification code) and your password whenever you attempt to log in via SSH. You have only 30 seconds to enter the verification code; if you miss it, a new verification code will be generated.

login as: tecmint
Access denied
Using keyboard-interactive authentication.
Verification code:
Using keyboard-interactive authentication.
Password:
Last login: Tue Apr 23 13:58:29 2013 from 172.16.25.125
[root@tecmint ~]#

If you don’t have a smartphone, you can also use a Firefox add-on called GAuth Authenticator to do two-factor authentication.

Important: Two-factor authentication works with password-based SSH logins. If you are using a private/public key SSH session, it will ignore two-factor authentication and log you in directly.
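
If you do want key-based logins to also require the verification code, newer OpenSSH releases (6.2 and later) let you chain authentication methods in ‘/etc/ssh/sshd_config‘. A minimal sketch, assuming PAM and keyboard-interactive authentication are enabled as configured above:

AuthenticationMethods publickey,keyboard-interactive

With this directive, a client must first present a valid key and then complete the keyboard-interactive (PAM) step; restart the SSH service after adding it.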

Source

Useful ‘host’ Command Examples for Querying DNS Lookups

The host command is a minimal and easy-to-use CLI utility for performing DNS lookups, which translate domain names to IP addresses and vice versa. It can also be used to list and verify various types of DNS records such as NS and MX, test and validate ISP DNS servers and Internet connectivity, check spam and blacklisting records, and detect and troubleshoot DNS server issues, among other things.

In this article, we will learn how to use the host command with a few useful examples in Linux to perform DNS lookups. In previous articles, we showed the 8 most used Nslookup commands for testing and troubleshooting DNS servers and for querying specific DNS resource records (RR).

We also explained 10 Linux dig (Domain Information Groper) commands for querying DNS information; dig works much like the Nslookup tool. The host utility works in a similar way and comes preinstalled on most, if not all, mainstream Linux distros.

With that said, let’s look at the host command examples below.

Find the Domain IP Address

This is the simplest host command you can run: just provide a domain name such as google.com to get the associated IP addresses.

$ host google.com

google.com has address 216.58.201.46
google.com has IPv6 address 2a00:1450:4009:80b::200e
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
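
Since host also translates IP addresses back to domain names, you can pass an IP address instead of a name to look up its PTR (reverse) record, for example:

$ host 216.58.201.46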

Find Domain Name Servers

To find out the domain name servers, use the -t option with the ns record type.

$ host -t ns google.com

google.com name server ns1.google.com.
google.com name server ns2.google.com.
google.com name server ns3.google.com.
google.com name server ns4.google.com.

Find Domain CNAME Record

To find out the domain CNAME record, run:

$ host -t cname mail.google.com

mail.google.com is an alias for googlemail.l.google.com.

Find Domain MX Record

To find out the MX records for a domain, run:

$ host -t mx google.com

google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.

Find Domain TXT Record

To find out the TXT records for a domain, run:

$ host -t txt google.com

google.com descriptive text "v=spf1 include:_spf.google.com ~all"

Find Domain SOA Record

You can make host attempt to display the SOA record for a specified zone from all of that zone’s listed authoritative name servers by using the -C flag.

$ host -C google.com

Nameserver 216.239.38.10:
	google.com has SOA record ns1.google.com. dns-admin.google.com. 156142728 900 900 1800 60
Nameserver 216.239.32.10:
	google.com has SOA record ns3.google.com. dns-admin.google.com. 156142728 900 900 1800 60
Nameserver 216.239.34.10:
	google.com has SOA record ns4.google.com. dns-admin.google.com. 156142728 900 900 1800 60
Nameserver 216.239.36.10:
	google.com has SOA record ns2.google.com. dns-admin.google.com. 156142728 900 900 1800 60
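
If you only need the SOA record from your default resolver, rather than from every authoritative server, the -t option shown earlier works here as well:

$ host -t soa google.com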

Query Particular Name Server

To query a particular domain name server, specify it after the domain name:

$ host google.com ns4.google.com

Using domain server:
Name: ns4.google.com
Address: 216.239.38.10#53
Aliases: 

google.com has address 172.217.19.46
google.com has IPv6 address 2a00:1450:4005:808::200e
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.

Find All Information of Domain Records and Zones

To make a query of type ANY, use the -a (all) option, which is equivalent to setting -v and -t ANY.

$ host -a google.com

Trying "google.com"
;; ->>HEADER<

Get Domain TTL Information

To find out domain TTL information, run a verbose query for the record type.

$ host -v -t a google.com

Trying "google.com"
;; ->>HEADER<

Use Either IPv4 or IPv6

The -4 or -6 option forces host to use only IPv4 or only IPv6 query transport, respectively.

$ host -4 google.com
OR
$ host -6 google.com

Perform Non-Recursive Queries

The -r option performs non-recursive queries; note that setting this option clears the RD (recursion desired) bit in the queries that host makes.

$ host -r google.com

google.com has address 216.58.201.46
google.com has IPv6 address 2a00:1450:4009:80b::200e
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.

Set UDP Retries for a Lookup

By default, the number of UDP retries is 1; to change it, use the -R flag followed by the number of retries.

$ host -R 5 google.com

google.com has address 216.58.201.46
google.com has IPv6 address 2a00:1450:4009:80b::200e
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.

Set Query Time Wait for Reply

Using the -W switch, you can instruct host to wait for a reply for the specified time in seconds, while the -w flag makes host wait forever for a reply:

$ host -T -W 10 google.com

google.com has address 216.58.201.46
google.com has IPv6 address 2a00:1450:4009:80b::200e
google.com mail is handled by 10 aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
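
Because its output is line-oriented, host is also handy in shell scripts. The sketch below (assuming the domain returns plain A records) prints only the IPv4 addresses:

$ host -t a google.com | awk '/has address/ {print $NF}'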

That’s it! In this article, we learned how to use the host command with a few useful examples in Linux. Use the feedback form below to share any thoughts with us concerning this guide.

Source
