8 Best Open Source “Disk Cloning/Backup” Software for Linux Servers

Disk cloning is the process of copying the data on one hard disk to another. You could do this with a simple copy and paste, but you would miss hidden files and folders as well as files that are in use; that is why you need cloning software to do the job. You may also want a cloning tool to save a backup image of your files and folders.

Linux Disk Cloning Tools


Basically, a cloning tool reads all the data on a disk and writes it into a single .img file that you can then copy onto another hard drive. Here are the 8 best open source cloning tools for the job.
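
That block-level copy can be sketched with plain dd, which many of the tools below build on. In this sketch a regular file stands in for the disk; in real use you would point if= at an unmounted device such as /dev/sdX (the file names here are made up for the demo):

```shell
# A 4 MiB file stands in for a small disk (no real device is touched).
dd if=/dev/urandom of=fake-disk.bin bs=1M count=4 status=none

# "Clone" the whole device into a single image file, block for block.
dd if=fake-disk.bin of=disk.img bs=1M status=none

# Restoring is the same copy in the other direction.
dd if=disk.img of=restored-disk.bin bs=1M status=none

# Checksums confirm all three are bit-identical.
sha256sum fake-disk.bin disk.img restored-disk.bin
```

Unlike the dedicated tools below, raw dd also copies free space and does no compression, which is exactly the inefficiency they exist to avoid.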

1. Clonezilla

Clonezilla is a live CD based on Ubuntu and Debian for cloning an entire hard drive or taking a backup. Licensed under GPL 3, it is similar to Norton Ghost on Windows, but more capable.

Features

  1. Support for many filesystems, including ext2, ext3, ext4, btrfs, and xfs.
  2. Support for BIOS and UEFI.
  3. Support for MBR and GPT partition tables.
  4. Ability to reinstall GRUB 1 and 2 on any attached hard drive.
  5. Runs on low-end machines (only 200 MB of RAM is needed).
  6. Many other features.

Clonezilla for Linux


Suggested Read: How to Clone or Backup Linux Disk Using Clonezilla

2. Redo Backup

Redo Backup is another live CD tool that makes cloning your drives easy. It is a free and open source live system licensed under GPL 3. Its features, as listed on the project website, are:

  1. Easy GUI boots from CD in less than a minute.
  2. No installation required; runs from a CD-ROM or a USB device.
  3. Saves and restores Linux and Windows systems.
  4. Automatically locates local network shares.
  5. Access files even without logging in.
  6. Recover deleted files, documents, media files quickly.
  7. Internet access with a Chromium browser to download drivers.
  8. Small in size: only a 250 MB live CD.

Redo Backup for Linux


  1. Install Redo Backup to Clone/Backup Linux Systems

3. Mondo Rescue

Unlike other cloning software, Mondo does not convert your hard drives into an .img file; it produces an .iso image instead. You can also create a custom live CD with “mindi”, a companion tool developed by Mondo Rescue, and clone your data from that live CD.

It supports most Linux distributions as well as FreeBSD, and it is licensed under the GPL. You can install Mondo Rescue using the following link.

MondoRescue for Linux


  1. Install Mondo Rescue to Clone/Backup Linux Systems

4. Partimage

Partimage is an open-source backup tool. It runs under Linux and is available in the package manager of most Linux distributions. If you do not have a Linux system installed, you can use “SystemRescueCd”, a live CD that includes Partimage by default, to do the cloning you want.

Partimage clones hard drives very quickly, but it does not support the ext4 or btrfs filesystems. You can still use it to clone other filesystems such as ext3 and NTFS.

Partimage for Linux


Suggested Read: How to Backup or Clone Linux Partitions Using ‘cat’ Command

5. FSArchiver

FSArchiver is a continuation of Partimage and another good tool for cloning hard disks. It supports cloning both ext4 and NTFS partitions. Here is a list of its features:

Features

  1. Support for basic file attributes such as owner and permissions.
  2. Support for extended attributes, such as those used by SELinux.
  3. Support for the basic filesystem attributes (label, UUID, block size) of all Linux filesystems.
  4. Support for Windows NTFS partitions and the Ext filesystems of Linux and Unix-like systems.
  5. Support for checksums, which lets you detect data corruption.
  6. Ability to restore a corrupted archive by simply skipping the corrupted file.
  7. Ability to store more than one filesystem in an archive.
  8. Ability to compress the archive in many formats, such as lzo, gzip, bzip2, and lzma/xz.
  9. Ability to split big files into smaller pieces.

You can download FSArchiver and install it on your system, or download SystemRescueCd, which also contains FSArchiver.
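
The split feature (item 9) is handy when an archive must fit on fixed-size media. FSArchiver does this internally, but the mechanism is easy to picture with coreutils; a rough sketch with made-up file names:

```shell
# A ~5 MiB file stands in for an FSArchiver archive (hypothetical name).
dd if=/dev/urandom of=backup.fsa bs=1M count=5 status=none

# Cut it into 2 MiB volumes: backup.fsa.part.aa, .ab, .ac
split -b 2M backup.fsa backup.fsa.part.

# Concatenating the volumes in order restores the original byte for byte.
cat backup.fsa.part.* > rejoined.fsa
cmp backup.fsa rejoined.fsa && echo "volumes rejoin cleanly"
```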

FSArchiver for Linux


6. Partclone

Partclone is a free tool for cloning and restoring partitions. Written in C and first released in 2007, it supports many filesystems, including ext2, ext3, ext4, xfs, nilfs, reiserfs, reiser4, hfs+, and btrfs, and it is very simple to use.

Licensed under the GPL, it is also used as a cloning engine inside Clonezilla, and you can download it as a standalone package.

Partclone for Linux


7. G4L

G4L is a free live CD system for cloning hard disks easily. Its main feature is that you can compress the filesystem and send it via FTP, CIFS, SSHFS, or NFS to any location you want. It has supported GPT partitions since version 0.41, is licensed under the BSD license, and is available to download for free.

G4L for Linux


Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

8. doClone

doClone is a free software project developed to clone Linux system partitions easily. Written in C++, it supports up to 12 different filesystems, can perform Grub bootloader restoration, and can transfer the clone image to other computers over a LAN. It also supports live cloning, meaning you can create a clone of the system even while it is up and running.

doClone for Linux


There are many other tools for cloning your Linux hard disks. Have you used any of the cloning tools above to back up your drives? Which one works best for you? Also, let us know if you use another tool that is not listed here.

Source

rdiff-backup – A Remote Incremental Backup Tool for Linux

rdiff-backup is a powerful and easy-to-use Python script for local/remote incremental backup, which works on any POSIX operating system such as Linux, Mac OS X or Cygwin. It brings together the remarkable features of a mirror and an incremental backup.

Significantly, it preserves subdirectories, device files, hard links, and critical file attributes such as permissions, uid/gid ownership, modification times, extended attributes, ACLs, and resource forks. It can work in a bandwidth-efficient mode over a pipe, similar to the popular rsync backup tool.

rdiff-backup backs up one directory to another, possibly over a network using SSH, meaning the data transfer is encrypted and thus secure. The target directory (on the remote system) ends up as an exact copy of the source directory; however, extra reverse diffs are stored in a special subdirectory of the target directory, making it possible to recover files lost some time ago.

Dependencies

To use rdiff-backup in Linux, you’ll need the following packages installed on your system:

  • Python v2.2 or later
  • librsync v0.9.7 or later
  • pylibacl and pyxattr Python modules are optional, but necessary for POSIX access control list (ACL) and extended attribute support respectively.
  • rdiff-backup-statistics requires Python v2.4 or later.

How to Install rdiff-backup in Linux

Important: If you are operating over a network, you will have to install rdiff-backup on both systems, preferably the exact same version on each.

The script is available in the official repositories of the mainstream Linux distributions; simply run the command below to install rdiff-backup along with its dependencies:

On Debian/Ubuntu

$ sudo apt-get update
$ sudo apt-get install librsync-dev rdiff-backup

On CentOS/RHEL 7

# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
# rpm -ivh epel-release-7-9.noarch.rpm
# yum install librsync rdiff-backup

On CentOS/RHEL 6

# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm
# yum install librsync rdiff-backup

On Fedora

# yum install librsync rdiff-backup
# dnf install librsync rdiff-backup [Fedora 22+]

How to Use rdiff-backup in Linux

As I mentioned before, rdiff-backup uses SSH to connect to remote machines on your network, and the default authentication in SSH is the username/password method, which normally requires human interaction.

However, to automate tasks such as automatic backups with scripts and beyond, you will need to configure SSH Passwordless Login Using SSH keys, because SSH keys increase the trust between the two Linux servers, enabling easy file synchronization or transfer.

Once you have setup SSH Passwordless Login, you can start using the script with the following examples.

Backup Files to Different Partition

The example below will back up the /etc directory into a Backup directory on another partition:

$ sudo rdiff-backup /etc /media/aaronkilik/Data/Backup/mint_etc.backup

Backup Files to Different Partition


To exclude a particular directory as well as its subdirectories, you can use the --exclude option as follows:

$ sudo rdiff-backup --exclude /etc/cockpit --exclude /etc/bluetooth /etc /media/aaronkilik/Data/Backup/mint_etc.backup

We can include all device files, fifo files, socket files, and symbolic links with the --include-special-files option as below:

$ sudo rdiff-backup --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup

There are two other important flags for file selection: --max-file-size size, which excludes files larger than the given size in bytes, and --min-file-size size, which excludes files smaller than the given size in bytes:

$ sudo rdiff-backup --max-file-size 5M --include-special-files --exclude /etc/cockpit /etc /media/aaronkilik/Data/Backup/mint_etc.backup
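
rdiff-backup applies those size limits itself during file selection, but you can preview which files a cutoff would drop with find. A sketch using a throwaway directory (the file names are invented for the demo; this is an analogue, not what rdiff-backup runs internally):

```shell
# Build a tiny test tree with one small and one large file.
mkdir -p sizedemo
dd if=/dev/zero of=sizedemo/small.conf bs=1K count=4  status=none
dd if=/dev/zero of=sizedemo/huge.iso   bs=1M count=10 status=none

# List files over 5 MiB -- the ones a --max-file-size 5M run would exclude.
find sizedemo -type f -size +5M
```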

Backup Remote Files on Local Linux Server

For the purpose of this section, we’ll use:

Remote Server (tecmint)	        : 192.168.56.102 
Local Backup Server (backup) 	: 192.168.56.10

As stated before, you must install the same version of rdiff-backup on both machines. Now check the version on each as follows:

$ rdiff-backup -V

Check rdiff Version on Servers


On the backup server, create a directory which will store the backup files like so:

# mkdir -p /backups

Now, from the backup server, run the following commands to back up the directories /var/log/ and /root from the remote Linux server 192.168.56.102 into /backups:

# rdiff-backup root@192.168.56.102::/var/log/ /backups/192.168.56.102_logs.backup
# rdiff-backup root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup

The screenshot below shows the root files on the remote server 192.168.56.102 and the backed-up files on the backup server 192.168.56.10:

Backup Remote Directory on Local Server


Take note of the rdiff-backup-data directory created in the backup directory as seen in the screenshot, it contains vital data concerning the backup process and incremental files.

rdiff-backup - Backup Process Files


Now, on the server 192.168.56.102, additional files have been added to the root directory as shown below:

Verify Backup Directory


Let’s run the backup command once more to pick up the changed data. We can use the -v[0-9] option (where the number specifies the verbosity level; the default is 3) to control verbosity:

# rdiff-backup -v4 root@192.168.56.102::/root/ /backups/192.168.56.102_rootfiles.backup 

Incremental Backup with Summary


And to list the number and date of partial incremental backups contained in the /backups/192.168.56.102_rootfiles.backup directory, we can run:

# rdiff-backup -l /backups/192.168.56.102_rootfiles.backup/

Automating rdiff-backup Backups Using Cron

We can print summary statistics after a successful backup with the --print-statistics option. However, even if we don’t set this option, the info will still be available from the session statistics file. Read more about this option in the STATISTICS section of the man page.

And the --remote-schema flag enables us to specify an alternative method of connecting to a remote computer.

Now, let’s start by creating a backup.sh script on the backup server 192.168.56.10 as follows:

# cd ~/bin
# vi backup.sh

Add the following lines to the script file.

#!/bin/bash

#This is a rdiff-backup utility backup script

#Backup command
rdiff-backup --print-statistics --remote-schema 'ssh -C %s "sudo /usr/bin/rdiff-backup --server --restrict-read-only  /"'  root@192.168.56.102::/var/log  /backups/192.168.56.102_logs.back

#Checking rdiff-backup command success/error
status=$?
if [ $status != 0 ]; then
        #append error message in ~/backup.log file
        echo "rdiff-backup exit Code: $status - Command Unsuccessful" >>~/backup.log;
        exit 1;
fi

#Remove incremental backup files older than one month
rdiff-backup --force --remove-older-than 1M /backups/192.168.56.102_logs.back

Save the file and exit, then run the following command to add the script to the crontab on the backup server 192.168.56.10:

# crontab -e

Add this line to run your backup script daily at midnight:

0   0  *  *  * /root/bin/backup.sh > /dev/null 2>&1

Save the crontab and close it. We have now successfully automated the backup process; ensure that it is working as expected.
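
The --remove-older-than 1M step in the script prunes increments by age. The cutoff logic is the same age test you can run with find on ordinary files; this is an illustration only (rdiff-backup must always do the pruning itself, since it only deletes its own increment data):

```shell
# Two stand-in increment files; one is backdated 45 days (GNU touch -d).
mkdir -p incrdemo
touch incrdemo/fresh.diff
touch -d "45 days ago" incrdemo/stale.diff

# Print entries older than ~30 days -- what a 1M cutoff would select.
find incrdemo -type f -mtime +30
```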

Read through the rdiff-backup man page for additional info, exhaustive usage options and examples:

# man rdiff-backup

rdiff-backup Homepage: http://www.nongnu.org/rdiff-backup/

That’s it for now! In this tutorial, we showed you how to install and basically use rdiff-backup, an easy-to-use Python script for local/remote incremental backup in Linux. Do share your thoughts with us via the feedback section below.

Source

Rsnapshot (Rsync Based) – A Local/Remote File System Backup Utility for Linux

rsnapshot is an open source local/remote filesystem backup utility written in Perl that leverages the power of rsync and SSH to create scheduled, incremental backups of Linux/Unix filesystems, while taking up only the space of one full backup plus differences. It can keep those backups on a local drive, a different hard drive, an external USB stick, an NFS-mounted drive, or over the network on another machine via SSH.

Install rsnapshot backup in Linux


This article will demonstrate how to install, set up, and use rsnapshot to create incremental hourly, daily, weekly, and monthly local backups, as well as remote backups. To perform all the steps in this article, you must be the root user.

Step 1: Installing Rsnapshot Backup in Linux

Installation of rsnapshot using yum and apt differs slightly between Red Hat- and Debian-based distributions.

On RHEL/CentOS

First you will have to install and enable a third-party repository called EPEL. Please follow the link below to install and enable it on your RHEL/CentOS systems. Fedora users do not require any special repository configuration.

  1. Install and Enable EPEL Repository in RHEL/CentOS 6/5/4

Once you get things setup, install rsnapshot from the command line as shown.

# yum install rsnapshot

On Debian/Ubuntu/Linux Mint

By default, rsnapshot is included in Ubuntu’s repositories, so you can install it using the apt-get command as shown.

# apt-get install rsnapshot

Step 2: Setting up SSH Password-less Login

To back up remote Linux servers, your rsnapshot backup server must be able to connect through SSH without a password. To accomplish this, you will need to create SSH public and private keys to authenticate against the rsnapshot server. Please follow the link below to generate a public and private key pair on your rsnapshot backup server.

  1. Create SSH Passwordless Login Using SSH Keygen

Step 3: Configuring Rsnapshot

Now you will need to edit the rsnapshot configuration file and add some parameters. Open the rsnapshot.conf file with the vi or nano editor.

# vi /etc/rsnapshot.conf

Next create a backup directory, where you want to store all your backups. In my case my backup directory location is “/data/backup/”. Search for and edit the following parameter to set the backup location.

snapshot_root			 /data/backup/

Also uncomment the “cmd_ssh” line to allow taking remote backups over SSH. To uncomment it, remove the “#” in front of the following line so that rsnapshot can securely transfer your data to the backup server.

cmd_ssh			/usr/bin/ssh

Next, you need to decide how many old backups you would like to keep, because rsnapshot has no idea how often you want to take snapshots. You need to specify how much data to save: add the intervals to keep and how many of each.

Well, the default settings are good enough, but still I would like you to enable “monthly” interval so that you could also have longer term backups in place. Please edit this section to look similar to below settings.

#########################################
#           BACKUP INTERVALS            #
# Must be unique and in ascending order #
# i.e. hourly, daily, weekly, etc.      #
#########################################

interval        hourly  6
interval        daily   7
interval        weekly  4
interval        monthly 3

One more thing you need to edit is the “ssh_args” variable. If you have changed the default SSH port (22) to something else, you need to specify the port number of your remote server here.

ssh_args		-p 7851

Finally, add the local and remote backup directories that you want to back up.

Backup Local Directories

If you have decided to back up your directories locally to the same machine, the backup entries would look like this. For example, I am backing up my /tecmint and /etc directories.

backup		/tecmint/		localhost/
backup		/etc/			localhost/

Backup Remote Directories

If you would like to back up a remote server’s directories, you need to tell rsnapshot where the server is and which directories you want to back up. Here I am backing up my remote server’s “/home” directory into the “/data/backup” directory on the rsnapshot server.

backup		 root@example.com:/home/ 		/data/backup/

Read Also:

  1. How to Backup/Sync Directories Using Rsync (Remote Sync) Tool
  2. How to Transfer Files/Folders Using SCP Command

Exclude Files and Directories

Here, I am going to exclude everything and then specifically define only what I want backed up. To do this, you need to create an exclude file.

# vi /data/backup/tecmint.exclude

First, list the directories that you want backed up, then add ( - * ) to exclude everything else. This will back up only what you listed in the file. My exclude file looks similar to the one below.

+ /boot
+ /data
+ /tecmint
+ /etc
+ /home
+ /opt
+ /root
+ /usr
- /usr/*
- /var/cache
+ /var
- /*
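
What makes this list work is that rsync evaluates filter rules top to bottom and the first match wins, which is why “- /var/cache” must appear before “+ /var”. The toy function below mimics that ordering with plain prefix matching (real rsync pattern matching is far richer; this only demonstrates the first-match principle, over a shortened rule list):

```shell
# Ordered rules, as in the exclude file above (trailing "- /*" drops the rest).
rules='+ /boot
- /var/cache
+ /var
- /*'

# decide <path>: print the verdict of the first matching rule.
decide() {
  local path=$1 sign pattern
  while read -r sign pattern; do
    case $path in
      $pattern|$pattern/*)
        [ "$sign" = "+" ] && echo include || echo exclude
        return ;;
    esac
  done <<< "$rules"
  echo include   # rsync's default when nothing matches
}

decide /boot        # include
decide /var/cache   # exclude (matched before "+ /var")
decide /var/log     # include (matched "+ /var")
decide /srv         # exclude (fell through to "- /*")
```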

Using the exclude file option can be very tricky due to rsync’s recursion rules, so my example above may not be exactly what you are looking for. Next, add the exclude file to the rsnapshot.conf file.

exclude_file    /data/backup/tecmint.exclude

Finally, you are almost finished with the initial configuration. Save the “/etc/rsnapshot.conf” configuration file before moving further. There are many options to explore, but here is my sample configuration file.

config_version  1.2
snapshot_root   /data/backup/
cmd_cp  /bin/cp
cmd_rm  /bin/rm
cmd_rsync       /usr/bin/rsync
cmd_ssh /usr/bin/ssh
cmd_logger      /usr/bin/logger
cmd_du  /usr/bin/du
interval        hourly  6
interval        daily   7
interval        weekly  4
interval        monthly 3
ssh_args	-p 25000
verbose 	2
loglevel        4
logfile /var/log/rsnapshot/
exclude_file    /data/backup/tecmint.exclude
rsync_long_args --delete        --numeric-ids   --delete-excluded
lockfile        /var/run/rsnapshot.pid
backup		/tecmint/		localhost/
backup		/etc/			localhost/
backup		root@example.com:/home/ 		/data/backup/

All the above options and argument explanations are as follows:

  1. config_version 1.2 = Configuration file version
  2. snapshot_root = Backup Destination to store snapshots
  3. cmd_cp = Path to copy command
  4. cmd_rm = Path to remove command
  5. cmd_rsync = Path to rsync
  6. cmd_ssh = Path to SSH
  7. cmd_logger = Path to shell command interface to syslog
  8. cmd_du = Path to disk usage command
  9. interval hourly = How many hourly backups to keep.
  10. interval daily = How many daily backups to keep.
  11. interval weekly = How many weekly backups to keep.
  12. interval monthly = How many monthly backups to keep.
  13. ssh_args = Optional SSH arguments, such as a different port (-p )
  14. verbose = Self-explanatory
  15. loglevel = Self-explanatory
  16. logfile = Path to logfile
  17. exclude_file = Path to the exclude file (will be explained in more detail)
  18. rsync_long_args = Long arguments to pass to rsync
  19. lockfile = Self-explanatory
  20. backup = Full path to what to be backed up followed by relative path of placement.

Step 4: Verify Rsnapshot Configuration

Once you are done with all the configuration, it is time to verify that everything works as expected. Run the following command to verify that your configuration has the correct syntax.

# rsnapshot configtest

Syntax OK

If everything is configured correctly, you will receive a “Syntax OK” message. If you get any error messages, you need to correct them before running rsnapshot.

Next, do a test run of one of the snapshots to make sure it generates the correct results. We use the “hourly” interval with the -t (test) argument. The command below displays a verbose list of the things it would do, without actually doing them.

# rsnapshot -t hourly
Sample Output
echo 2028 > /var/run/rsnapshot.pid 
mkdir -m 0700 -p /data/backup/ 
mkdir -m 0755 -p /data/backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /home \
    /backup/hourly.0/localhost/ 
mkdir -m 0755 -p /backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc \
    /backup/hourly.0/localhost/ 
mkdir -m 0755 -p /data/backup/hourly.0/ 
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local /data/backup/hourly.0/localhost/ 
touch /data/backup/hourly.0/

Note: The above command tells rsnapshot to create an “hourly” backup. It prints out the commands it will perform when we actually execute it.

Step 5: Running Rsnapshot Manually

After verifying the results, you can remove the “-t” option to run the command for real.

# rsnapshot hourly

The above command runs the backup script with all the configuration we added in the rsnapshot.conf file, creates a “backup” directory, and then builds the directory structure that organizes our files under it. After running the above command, you can verify the results by going to the backup directory and listing the directory structure with the ls -l command as shown.

# cd /data/backup
# ls -l

total 4
drwxr-xr-x 3 root root 4096 Oct 28 09:11 hourly.0

Step 6: Automating the Process

To automate the process, you need to schedule rsnapshot to run at certain intervals from cron. By default, rsnapshot comes with a cron file at “/etc/cron.d/rsnapshot”; if it does not exist, create one and add the following lines to it.

By default the rules are commented out, so you need to remove the “#” from in front of the scheduling lines to enable these values.

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

0     */4    * * *    root    /usr/bin/rsnapshot hourly
30     3     * * *    root    /usr/bin/rsnapshot daily
0      3     * * 1    root    /usr/bin/rsnapshot weekly
30     2     1 * *    root    /usr/bin/rsnapshot monthly

Let me explain exactly what the above cron rules do:

  1. Runs every 4 hours and creates an hourly directory under the /backup directory.
  2. Runs daily at 3:30 am and creates a daily directory under the /backup directory.
  3. Runs weekly, every Monday at 3:00 am, and creates a weekly directory under the /backup directory.
  4. Runs monthly, on the 1st at 2:30 am, and creates a monthly directory under the /backup directory.
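
Combined with the interval counts set in rsnapshot.conf earlier (6 hourly, 7 daily, 4 weekly, 3 monthly), a little arithmetic shows the history window each tier covers before the next tier takes over:

```shell
# 6 snapshots taken every 4 hours cover a full day.
echo "hourly tier:  $((6 * 4)) hours"
# 7 dailies cover a week, 4 weeklies about a month, 3 monthlies a quarter.
echo "daily tier:   7 days"
echo "weekly tier:  $((4 * 7)) days"
echo "monthly tier: $((3 * 30)) days (approx.)"
```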

To better understand how cron rules work, I suggest you read our article on the subject.

  1. 11 Cron Scheduling Examples

Step 7: Rsnapshot Reports

Rsnapshot provides a nifty little reporting Perl script that sends you an email alert with all the details of what occurred during your data backup. To set up this script, copy it somewhere under “/usr/local/bin” and make it executable.

# cp /usr/share/doc/rsnapshot-1.3.1/utils/rsnapreport.pl /usr/local/bin
# chmod +x /usr/local/bin/rsnapreport.pl

Next, add the “--stats” parameter to the rsync long arguments section in your “rsnapshot.conf” file.

vi /etc/rsnapshot.conf
rsync_long_args --stats	--delete        --numeric-ids   --delete-excluded

Now edit the crontab rules that were added earlier to pipe the output through the rsnapreport.pl script, which mails the reports to the specified email address.

# This is a sample cron file for rsnapshot.
# The values used correspond to the examples in /etc/rsnapshot.conf.
# There you can also set the backup points and many other things.
#
# To activate this cron file you have to uncomment the lines below.
# Feel free to adapt it to your needs.

0     */4    * * *    root    /usr/bin/rsnapshot hourly 2>&1 | /usr/local/bin/rsnapreport.pl | mail -s "Hourly Backup" yourname@email.com
30     3     * * *    root    /usr/bin/rsnapshot daily 2>&1 | /usr/local/bin/rsnapreport.pl | mail -s "Daily Backup" yourname@email.com
0      3     * * 1    root    /usr/bin/rsnapshot weekly 2>&1 | /usr/local/bin/rsnapreport.pl | mail -s "Weekly Backup" yourname@email.com
30     2     1 * *    root    /usr/bin/rsnapshot monthly 2>&1 | /usr/local/bin/rsnapreport.pl | mail -s "Monthly Backup" yourname@email.com

Once you’ve added above entries correctly, you will get a report to your e-mail address similar to below.

SOURCE           TOTAL FILES	FILES TRANS	TOTAL MB    MB TRANS   LIST GEN TIME  FILE XFER TIME
--------------------------------------------------------------------------------------------------------
localhost/          185734	   11853   	 2889.45    6179.18    40.661 seconds  0.000 seconds

Reference Links

  1. rsnapshot homepage

That’s it for now, if any problems occur during installation do drop me a comment. Till then stay tuned to TecMint for more interesting articles on the Open source world.

Source

How to Install TeamSpeak Server in CentOS 7

TeamSpeak is a popular, cross-platform VoIP and text chat application for internal business communication, education and training (lectures), online gaming, and connecting with friends and family. Its primary focus is delivering a solution that is simple to use, with strong security standards, superb voice quality, and low system and bandwidth utilization. It uses a client-server architecture and is capable of handling thousands of simultaneous users.

How it Works

Deploy your own TeamSpeak Server on a Linux VPS and share your TeamSpeak Server address with teammates, friends and family or anyone you want to communicate with. Using the free desktop TeamSpeak Client, they connect to your TeamSpeak Server and start talking. It’s that easy!

You can get a 2GB RAM VPS from Linode for $10, but it’s unmanaged. If you want a managed VPS, then use our new BlueHost promotion offer; you will get up to 40% off on hosting with one free domain for life. If you get a managed VPS, they will probably install TeamSpeak Server for you.

Key Features

  • It is easy to use and highly customizable.
  • Has a decentralized infrastructure and is highly scalable.
  • Supports high security standards.
  • Offers remarkable voice quality.
  • Allows for low system resource and bandwidth usage.
  • Supports powerful file transfer.
  • Also supports a robust permission system.
  • Supports stunning 3D sound effects.
  • Allows for mobile connectivity and lots more.

Requirements

  1. CentOS 7 Server with Minimal System Installation
  2. CentOS 7 Server with Static IP Address

In this tutorial, we will explain how to install TeamSpeak Server on your CentOS 7 instance and a desktop TeamSpeak Client on a Linux machine.

Installing TeamSpeak Server in CentOS 7

1. First, start by updating your CentOS 7 server packages and then install the needed dependencies for the installation process using the following commands.

# yum update
# yum install vim wget perl tar net-tools bzip2

2. Next, you need to create a user for the TeamSpeak server process to ensure that the TeamSpeak server runs in user mode, detached from other processes.

# useradd teamspeak
# passwd teamspeak

3. Now go to the TeamSpeak server download page and grab the most recent version (i.e. 3.2.0) using the following wget command, then extract the tarball and copy all of the files to the unprivileged user’s home directory as shown.

# wget -c http://dl.4players.de/ts/releases/3.2.0/teamspeak3-server_linux_amd64-3.2.0.tar.bz2
# tar -xvf teamspeak3-server_linux_amd64-3.2.0.tar.bz2
# mv teamspeak3-server_linux_amd64 teamspeak3
# cp -R teamspeak3 /home/teamspeak/
# chown -R teamspeak:teamspeak /home/teamspeak/teamspeak3/

4. Once everything is in place, switch to the teamspeak user and start the TeamSpeak server manually using the following commands.

# su - teamspeak
$ cd teamspeak3/
$ ./ts3server_startscript.sh start

TeamSpeak Starting


5. To manage the TeamSpeak server under systemd, you need to create a teamspeak service unit file.

$ su -
# vi  /lib/systemd/system/teamspeak.service

Add the following configuration in the unit file.

[Unit]
Description=Team Speak 3 Server
After=network.target

[Service]
WorkingDirectory=/home/teamspeak/teamspeak3/
User=teamspeak
Group=teamspeak
Type=forking
ExecStart=/home/teamspeak/teamspeak3/ts3server_startscript.sh start inifile=ts3server.ini
ExecStop=/home/teamspeak/teamspeak3/ts3server_startscript.sh stop
PIDFile=/home/teamspeak/teamspeak3/ts3server.pid
RestartSec=15
Restart=always

[Install]
WantedBy=multi-user.target

Save and close the file. Then start the teamspeak service and enable it to start automatically at system boot as follows.

# systemctl start teamspeak
# systemctl enable teamspeak
# systemctl status teamspeak

Start TeamSpeak Server


6. When you start the TeamSpeak server for the first time, it generates an administrator token/key which you will use to connect to the server from a TeamSpeak client. You can view the log file to get the key.

# cat /home/teamspeak/logs/ts3server_2017-08-09__22_51_25.819181_1.log

TeamSpeak Server Token


7. Next, TeamSpeak listens on a number of ports: 9987 UDP (TeamSpeak Voice service), 10011 TCP (TeamSpeak ServerQuery) and 30033 TCP (TeamSpeak FileTransfer).

Therefore modify your firewall rules to open these ports as follows.

# firewall-cmd --zone=public --add-port=9987/udp --permanent
# firewall-cmd --zone=public --add-port=10011/tcp --permanent
# firewall-cmd --zone=public --add-port=30033/tcp --permanent
# firewall-cmd --reload
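
After reloading the firewall, you can probe the TCP ports from another machine. One quick sketch uses bash’s /dev/tcp pseudo-device (the address below is a placeholder for your own server; nc -z, or ss -tulpn on the server itself, work just as well):

```shell
# port_open <host> <port>: succeed if a TCP connection is accepted.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Hypothetical server address -- substitute your TeamSpeak host.
if port_open 192.168.0.10 10011; then
  echo "ServerQuery port reachable"
else
  echo "ServerQuery port closed or filtered"
fi
```

Note that the 9987 UDP voice port cannot be checked this way, since UDP is connectionless; this trick only covers the TCP ports.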

Installing TeamSpeak Client in Ubuntu 18.04

8. Log in to your Ubuntu desktop machine (you can use any Linux OS), go to the TeamSpeak client download page, grab the most recent version (i.e. 3.1.9) using the following wget command, and install it as shown.

$ wget http://dl.4players.de/ts/releases/3.1.9/TeamSpeak3-Client-linux_amd64-3.1.9.run
$ chmod 755 TeamSpeak3-Client-linux_amd64-3.1.9.run
$ ./TeamSpeak3-Client-linux_amd64-3.1.9.run
$ cd TeamSpeak3-Client-linux_amd64
$ ./ts3client_runscript.sh

TeamSpeak Client on Ubuntu


9. To access the server query admin account, use the loginname and password that were created when you first started the server. You will also be asked to provide the ServerAdmin key; once you have entered it, you will see the message below, meaning you now have administrative rights on the TeamSpeak server you just installed.

Privilege Key successfully used.

For more information, check out the TeamSpeak homepage: https://www.teamspeak.com/en/

In this article, we have explained how to install a TeamSpeak server on CentOS 7 and a client on Ubuntu desktop. If you have any questions or thoughts to share, use the feedback form below to reach us.


Getting Started with MySQL Clusters as a Service

MySQLcluster.me now offers MySQL clusters and MariaDB clusters as a service, based on Galera replication technology.

In this article we will go through the main features of MySQL and MariaDB clusters as a service.

MySQL Clusters as a Service


What is a MySQL Cluster?

If you have ever wondered how you can increase the reliability and scalability of your MySQL database, you might have found that one way to do so is through a MySQL cluster based on Galera Cluster technology.

This technology allows you to have a complete copy of the MySQL database synchronized across many servers in one or several datacenters. This lets you achieve high database availability – which means that if 1 or more of your database servers crash then you will still have a fully operational database on another server.

It is important to note that the minimum number of servers in a MySQL cluster is 3, because when one server recovers from a crash it needs to copy data from one of the remaining two servers, making one of them a “donor”. So in case of crash recovery you must have at least two online servers from which the crashed server can recover the data.
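One way to see why three nodes is the practical minimum is quorum arithmetic (a sketch of the general rule, not the Galera implementation itself): a cluster stays writable only while the surviving partition contains more than half of the nodes.

```shell
# With n nodes, more than n/2 must remain online for the cluster to
# keep quorum; integer division gives the smallest such majority.
n=3
quorum=$(( n / 2 + 1 ))
echo "With ${n} nodes, at least ${quorum} must stay online"
```

With two nodes, losing either one drops the survivor below a majority, which is another reason clusters start at three.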

Also, a MariaDB cluster is essentially the same thing as a MySQL cluster, just based on MariaDB, a newer and more optimized fork of MySQL.

MySQL Clusters Galera Replications


What is a MySQL Cluster and MariaDB Cluster as a Service?

MySQL clusters as a service offer a great way to achieve both goals, high availability and hassle-free management, at the same time.

First, you get High Database Availability with a high probability of 100% Uptime in case of any datacenter issues.

Secondly, outsourcing the tedious tasks associated with managing a MySQL cluster lets you focus on your business instead of spending time on cluster management.

In fact, managing a cluster on your own may require you to perform the following tasks:

  1. Provision and set up the cluster – it may take an experienced database administrator a few hours to fully set up an operational cluster.
  2. Monitor the cluster – one of your techs must keep an eye on the cluster 24×7 because many issues can happen – cluster desynchronization, a server crash, a disk filling up, etc.
  3. Optimize and resize the cluster – this can be a huge pain if you have a large database and need to resize the cluster. This task needs to be handled with extra care.
  4. Backups management – you need to back up your cluster data to avoid losing it if your cluster fails.
  5. Issue resolution – you need an experienced engineer who can dedicate a lot of effort to optimizing and solving issues with your cluster.

Instead, you can save a lot of time and money by going with a MySQL Cluster as a Service offered by MySQLcluster.me team.

So what’s included in the MySQL Cluster as a Service offered by MySQLcluster.me?

Apart from high database availability with an almost guaranteed uptime of 100%, you get the ability to:

  1. Resize the MySQL Cluster at any time – you can increase or decrease cluster resources to adjust for the spikes in your traffic (RAM, CPU, Disk).
  2. Optimized Disks and Database Performance – disks can achieve a rate of 100,000 IOPS which is crucial for database operation.
  3. Datacenter Choice – you can decide in which datacenter you would like to host the cluster. Currently supported – Digital Ocean, Amazon AWS, RackSpace, Google Compute Engine.
  4. 24×7 Cluster Support – if anything happens to your cluster our team will always assist you and even provide you advice on your cluster architecture.
  5. Cluster Backups – our team sets up backups for you so that your cluster is automatically backed up on a daily basis to a secure location.
  6. Cluster Monitoring – our team sets up automatic monitoring so in case of any issue our team starts working on your cluster even if you are away from your desk.

There are a lot of advantages to having your own MySQL cluster, but it must be managed with care and experience.

Speak to MySQL Cluster team to find the best suitable package for you.


How to Install Countly Analytics in CentOS and Debian Based Systems

Countly is a feature-rich, open source, highly-extensible real-time mobile & web analytics, push notifications and crash reporting software powering more than 2.5k web sites and 12k mobile applications.

It works in a client/server model; the server gathers data from mobile devices and other Internet-connected devices, while the client (mobile, web or desktop SDK) displays this information in a format which analyzes application usage and end-user behavior.

Watch a 1 minute video introduction to Countly.

Countly Analytics Features:

  • Support for centralized management.
  • Powerful dashboard user interface (supports multiple, custom and API dashboards).
  • Provides user, application and permission management functionalities.
  • Offers multiple application support.
  • Support for reading/writing APIs.
  • Supports a variety of plugins.
  • Offers analytics features for mobile, web and desktop.
  • Supports crash reporting for iOS and Android and error reporting for JavaScript.
  • Support for rich and interactive push notifications for iOS and Android.
  • Also supports custom email reporting.

Requirements

Countly can be easily installed via an installation script on freshly installed CentOS, RHEL, Debian and Ubuntu systems with no services listening on ports 80 or 443.

  1. Installation of CentOS 7 Minimal
  2. Installation of RHEL 7 Minimal
  3. Installation of Debian 9 Minimal

In this article, we will guide you on how to install and manage Countly Analytics from the command line in CentOS and Debian based systems.

Step 1: Install Countly Server

1. Luckily, there is an installation script prepared for you which will install all the dependencies as well as Countly server on your system.

Simply download the script using the wget command and run it thereafter as follows.

# wget -qO- http://c.ly/install | bash

Important: Disable SELinux on CentOS or RHEL if it’s enabled. Countly will not work on a server where SELinux is enabled.

Installation will take between 6 and 8 minutes; once complete, open the URL in a web browser to create your admin account and log in to your dashboard.

http://localhost 
OR
http://SERVER_IP

Create Countly Admin Account


2. You will land on the interface below, where you can add an app to your account to start collecting data. To populate an app with random/demo data, check the “Demo data” option.

Countly Add App


3. Once the app has been populated, you will get the overview of the test app as shown. To manage applications, users, plugins, etc., click on the Management menu item.

Countly App Analytics


Step 2: Manage Countly From Linux Terminal

4. Countly ships with several commands to manage the process. You can execute most tasks via the Countly user interface, but the countly command, which can be run with the following syntax, does the job for command-line geeks.

$ sudo countly version		#prints Countly version
$ sudo countly start  		#starts Countly 
$ sudo countly stop	  	#stops Countly 
$ sudo countly restart  	#restarts Countly 
$ sudo countly status  	        #used to view process status
$ sudo countly test 		#runs countly test set 
$ sudo countly dir 		#prints Countly installation path

Step 3: Backup and Restore Countly

5. To configure automatic backups for Countly, you can run the countly backup command or create a cron job that runs every day or week. The cron job should back up Countly data to a directory of your choice.

The following command backs up the Countly database, configuration and user files (e.g. app images, user images, certificates, etc.).

$ sudo countly backup /var/backups/countly

Additionally, you can back up files or the database separately by executing:

$ sudo countly backupdb /var/backups/countly
$ sudo countly backupfiles /var/backups/countly
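The daily cron job mentioned in step 5 can be sketched as a cron drop-in file rather than a hand-edited crontab. The countly binary path, backup directory and schedule below are assumptions; adjust them for your system, and write the file to /etc/cron.d/ on the server (here it is written under /tmp for illustration):

```shell
# Write a cron entry that runs a full Countly backup daily at 01:30.
# On a real server, use: /etc/cron.d/countly-backup
cronfile=/tmp/countly-backup.cron
echo '30 1 * * * root /usr/bin/countly backup /var/backups/countly' > "$cronfile"

# Show the entry that was written.
cat "$cronfile"
```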

6. To restore Countly from backup, issue the command below (specify the backup directory).

$ sudo countly restore /var/backups/countly

Likewise, restore only the files or the database separately as follows.

$ sudo countly restorefiles /var/backups/countly
$ sudo countly restoredb /var/backups/countly

Step 4: Upgrade Countly Server

7. To initiate an upgrade, run the command below. It will run npm to install any new dependencies, run grunt dist-all to minify all files and create production files from them for faster loading, and lastly restart Countly’s Node.js process to apply the file changes made during the two previous steps.

$ sudo countly upgrade 	
$ countly usage 

For more information visit official site: https://github.com/countly/countly-server

In this article, we guided you on how to install and manage Countly Analytics server from the command line in CentOS and Debian based systems. As usual, send us your queries or thoughts concerning this article via the response form below.


5 Ways to Find a ‘Binary Command’ Description and Location on File System

With the thousands of commands/programs available on Linux systems, knowing the type and purpose of a given command, as well as its location (absolute path) on the system, can be a little challenging for newbies.

Knowing a few details about commands/programs not only helps a Linux user master the numerous commands, but also enables the user to understand what operations to use them for, either from the command line or a script.

Therefore, in this article we will explain to you five useful commands for showing a short description and the location of a given command.

To discover new commands on your system, look into the directories in your PATH environment variable. These directories store the installed commands/programs on the system.
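A quick way to see those directories is to split PATH on its colon separators, printing one directory per line:

```shell
# PATH is a colon-separated list of directories; print one per line.
path_dirs=$(echo "$PATH" | tr ':' '\n')
echo "$path_dirs"
```

You can then run ls on any of the listed directories to browse the commands it holds.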

Once you find an interesting command name, before you proceed to read more about it (probably in the man page), try to gather some quick information about it as follows.

Assume you have echoed the value of PATH, moved into the directory /usr/local/bin, and noticed a new command called fswatch (which monitors file modification changes):

$ echo $PATH
$ cd /usr/local/bin

Find New Commands in Linux


Now let’s find out the description and location of the fswatch command using following different ways in Linux.

1. whatis Command

whatis displays a one-line manual page description of the command name (such as fswatch in the command below) that you enter as an argument.

If the description is too long, some parts are trimmed off by default; use the -l flag to show the complete description.

$ whatis fswatch
$ whatis -l fswatch

Linux whatis Command Example


2. apropos Command

apropos searches manual page names and descriptions for the keyword provided (treated as a regex; here, the command name).

The -l option shows the complete description.

$ apropos fswatch 
$ apropos -l fswatch

Linux apropos Command Example


By default, apropos may output all matching lines, as in the example below. To match only the exact keyword, use the -e switch:

$ apropos fmt
$ apropos -e fmt

Linux apropos Command Show by Keyword


3. type Command

type tells you the full pathname of a given command; additionally, if the command name entered is not a program that exists as a separate file on disk, type tells you its classification:

  1. Shell built-in command or
  2. Shell keyword or reserved word or
  3. An alias

$ type fswatch 

Linux type Command Example


When the command is an alias for another command, type shows the command executed when the alias is run. Use the alias command to view all aliases created on your system:

$ alias
$ type l
$ type ll

Show All Aliases in Linux


4. which Command

which helps to locate a command; it prints the absolute command path as below:

$ which fswatch 

Find Linux Command Location


Some binaries exist in more than one directory on the PATH; use the -a flag to show all matching pathnames.
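To see the -a flag in action without touching system directories, you can stage the same (hypothetical) tool name in two temporary directories and put both on PATH:

```shell
# Create two PATH directories holding the same command name.
mkdir -p /tmp/pathdemo/a /tmp/pathdemo/b
printf '#!/bin/sh\necho hello\n' > /tmp/pathdemo/a/mytool
cp /tmp/pathdemo/a/mytool /tmp/pathdemo/b/mytool
chmod +x /tmp/pathdemo/a/mytool /tmp/pathdemo/b/mytool

# `which -a` lists every match on PATH, not just the first.
matches=$(PATH="/tmp/pathdemo/a:/tmp/pathdemo/b:$PATH" which -a mytool)
echo "$matches"
```

Without -a, only /tmp/pathdemo/a/mytool (the first match) would be printed.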

5. whereis Command

The whereis command locates the binary, source, and manual page files for the command name provided, as follows:

$ whereis fswatch
$ whereis mkdir 
$ whereis rm

Linux whereis Command Example


Although the commands above may be vital for finding quick info about a command/program, opening and reading through its manual page always provides the full documentation, including a list of other related programs:

$ man fswatch

In this article, we reviewed five simple commands used to display short manual page descriptions and location of a command. You can make a contribution to this post or ask a question via the feedback section below.


How to Install Snipe-IT (IT Asset Management) on CentOS and Ubuntu

Snipe-IT is a free and open source, cross-platform, feature-rich IT asset management system built using a PHP framework called Laravel. It is web-based software, which enables IT administrators in medium to large enterprises to track physical assets, software licenses, accessories and consumables in a single place.

Check out a live, up-to-date version of Snipe-IT Asset Management Tool: https://snipeitapp.com/demo

Snipe-IT Features:

  1. It is a cross-platform – works on Linux, Windows and Mac OS X.
  2. It is mobile-friendly for easy asset updates.
  3. Easily Integrates with Active Directory and LDAP.
  4. Slack notification integration for checkin/checkout.
  5. Supports one-click (or cron) backups and automated backups.
  6. Supports optional two-factor authentication with Google authenticator.
  7. Supports generation of custom reports.
  8. Supports custom status labels.
  9. Supports bulk user actions and user role management for different levels of access.
  10. Supports several languages for easy localization and so much more.

In this article, I will explain how to install an IT asset management system called Snipe-IT using a LAMP (Linux, Apache, MySQL & PHP) stack on CentOS and Debian based systems.

Step 1: Install LAMP Stack

1. First update the system (i.e. refresh the list of packages that need to be upgraded and add new packages that have entered the repositories enabled on the system).

$ sudo apt update        [On Debian/Ubuntu]
$ sudo yum update        [On CentOS/RHEL] 

2. Once the system has been updated, you can install the LAMP (Linux, Apache, MySQL & PHP) stack with all the needed PHP modules as shown.

Install LAMP on Debian/Ubuntu

$ sudo apt install apache2 apache2-utils libapache2-mod-php mariadb-server mariadb-client php php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt 

Install LAMP on CentOS/RHEL

3. Snipe-IT requires PHP greater than 5.5.9, and PHP 5.5 has reached end of life, so to get PHP 5.6 you need to enable the Remi repository as shown.

$ sudo rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
$ sudo yum -y install yum-utils
$ sudo yum-config-manager --enable remi-php56

4. Next, install PHP 5.6 on CentOS 7 with the required modules needed by Snipe-IT.

$ sudo yum install httpd mariadb mariadb-server php php-openssl php-pdo php-mbstring php-tokenizer php-curl php-mysql php-ldap php-zip php-fileinfo php-gd php-dom php-mcrypt

5. After the LAMP stack installation completes, start the web server for the time being, and enable it to start on the next system boot with the following commands.

$ sudo systemctl start apache2       [On Debian/Ubuntu]
$ sudo systemctl enable apache2      [On Debian/Ubuntu]
$ sudo systemctl start httpd         [On CentOS/RHEL]
$ sudo systemctl enable httpd        [On CentOS/RHEL]

6. Next, to verify the Apache and PHP installation and their current configuration from a web browser, create an info.php file in the Apache DocumentRoot (/var/www/html) using the following command.

$ echo "<?php phpinfo(); ?>" | sudo tee /var/www/html/info.php

Now open a web browser and navigate to the following URLs to verify the Apache and PHP configuration.

http://SERVER_IP/
http://SERVER_IP/info.php 

7. Next, you need to secure and harden your MySQL installation using the following command.

$ sudo mysql_secure_installation     

You will be asked to set a strong root password for MariaDB; answer Y to all of the other questions (they are self-explanatory).

8. Finally start MySQL server and enable it to start at the next system boot.

$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
OR
$ sudo systemctl start mysql
$ sudo systemctl enable mysql

Step 2: Create Snipe-IT Database on MySQL

9. Now log in to the MariaDB shell and create a database for Snipe-IT, a database user and set a suitable password for the user as follows.

$ mysql -u root -p

Provide the password for the MariaDB root user.

MariaDB [(none)]> CREATE DATABASE snipeit_db;
MariaDB [(none)]> CREATE USER 'tecmint'@'localhost' IDENTIFIED BY 't&cmint@190root';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON snipeit_db.* TO 'tecmint'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit

Step 3: Install Composer – PHP Manager

10. Now you need to install Composer – a dependency manager for PHP, with the commands below.

$ sudo curl -sS https://getcomposer.org/installer | php
$ sudo mv composer.phar /usr/local/bin/composer

Step 4: Install Snipe-IT Asset Management

11. First, install Git to fetch and clone the latest version of Snipe-IT under the Apache web root directory.

$ sudo apt -y install git      [On Debian/Ubuntu]
$ sudo yum -y install git      [On CentOS/RHEL]

$ cd  /var/www/
$ sudo git clone https://github.com/snipe/snipe-it.git

12. Now go into the snipe-it directory and rename the .env.example file to .env.

$ cd snipe-it
$ ls
$ sudo mv .env.example .env

Step 5: Configure Snipe-IT Asset Management

13. Next, configure the Snipe-IT environment; here you’ll provide the database connection settings among other things.

First open the .env file.

$ sudo vi .env

Then find and change the following variables according to the instructions given.

APP_TIMEZONE=Africa/Kampala                                   #Change it according to your country
APP_URL=http://10.42.0.1/setup                                #set your domain name or IP address
APP_KEY=base64:BrS7khCxSY7282C1uvoqiotUq1e8+TEt/IQqlh9V+6M=   #set your app key
DB_HOST=localhost                                             #set it to localhost
DB_DATABASE=snipeit_db                                        #set the database name
DB_USERNAME=tecmint                                           #set the database username
DB_PASSWORD=password                                          #set the database user password

Save and close the file.

14. Now you need to set the appropriate permissions on certain directories as follows.

$ sudo chmod -R 755 storage 
$ sudo chmod -R 755 public/uploads
$ sudo chown -R www-data:www-data storage public/uploads   [On Debian/Ubuntu]
$ sudo chown -R apache:apache storage public/uploads       [On CentOS/RHEL]

15. Next, install all the dependencies required by PHP using Composer dependency manager as follows.

$ sudo composer install --no-dev --prefer-source

16. Now you can generate the “APP_KEY” value with the following command (this will be set automatically in the .env file).

$ sudo php artisan key:generate

17. Now, you need to create a virtual host file on the web server for Snipe-IT.

$ sudo vi /etc/apache2/sites-available/snipeit.example.com.conf     [On Debian/Ubuntu]
$ sudo vi /etc/httpd/conf.d/snipeit.example.com.conf                [On CentOS/RHEL]

Then add the configuration below to your Apache config file (use your server IP address here).

<VirtualHost 10.42.0.1:80>
    ServerName snipeit.tecmint.lan
    DocumentRoot /var/www/snipe-it/public
    <Directory /var/www/snipe-it/public>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

Save and close the file.

18. On Debian/Ubuntu, you need to enable the virtual host, mod_rewrite and mcrypt using the following commands.

$ sudo a2ensite snipeit.example.com.conf
$ sudo a2enmod rewrite
$ sudo php5enmod mcrypt

19. Lastly, restart Apache web server to take new changes into effect.

$ sudo systemctl restart apache2       [On Debian/Ubuntu]
$ sudo systemctl restart httpd         [On CentOS/RHEL]

Step 6: Snipe-IT Web Installation

20. Now open your web browser and enter the URL: http://SERVER_IP to view the Snipe-IT web installation interface.

First you will see the Pre-Flight Check page below, click Next: Create Database Tables.

Snipe-IT Pre Flight Check


21. You will now see all the tables created, click Next: Create User.

Create Snipe-IT User


22. Here, provide all the admin user information and click Next: Save User.

Snipe-IT User Information


23. Finally open the login page using the URL http://SERVER_IP/login as shown below and login to view the Snipe-IT dashboard.

Snipe-IT Login


Snipe-IT Dashboard


Snipe-IT Homepage: https://snipeitapp.com/

In this article, we discussed how to set up Snipe-IT with the LAMP (Linux, Apache, MySQL & PHP) stack on CentOS and Debian based systems. If you run into any issues, share them with us using our comment form below.


5 Useful Tools to Remember Linux Commands Forever

There are thousands of tools, utilities, and programs that come pre-installed on a Linux system. You can run them from a terminal window or virtual console as commands via a shell such as Bash.

A command is typically the pathname (e.g. /usr/bin/top) or basename (e.g. top) of a program, including any arguments passed to it. However, there is a common misconception among Linux users that a command is the actual program or tool.
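The two forms can be seen side by side: command -v resolves a basename to the full pathname the shell would execute, and basename goes the other way.

```shell
# Resolve the basename `ls` to the full pathname the shell would run,
# then strip the directory part back off.
full_path=$(command -v ls)
echo "$full_path"
basename "$full_path"
```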

Read Also: A-Z Linux Commands – Overview with Examples

Remembering Linux commands and their usage is not easy, especially for new Linux users. In this article, we will share 5 command-line tools for remembering Linux commands.

1. Bash History

Bash records all unique commands executed by users on the system in a history file. Each user’s bash history file is stored in their home directory (e.g. /home/tecmint/.bash_history for user tecmint). A user can only view his/her own history file content, while root can view the bash history file of every user on a Linux system.

To view your bash history, use the history command as shown.

$ history  

View User History Command


To fetch a command from bash history, press the Up arrow key repeatedly to move back through the commands you have run previously. If you skip past the command you’re looking for, use the Down arrow key to move forward again.

This bash feature is one of the many ways of easily remembering Linux commands. You can find more examples of the history command in these articles:

  1. The Power of Linux “History Command” in Bash Shell
  2. How to Clear BASH Command Line History in Linux

2. Friendly Interactive Shell (Fish)

Fish is a modern, powerful, user-friendly, feature-rich and interactive shell which is compatible with Bash and Zsh. It supports automatic suggestions of file names and commands from the current directory and history respectively, which helps you easily remember commands.

In the following screenshot, the command “uname -r” is in the shell history; to recall it, type the letter “u” or “un” and fish will auto-suggest the complete command. If the auto-suggested command is the one you wish to run, press the Right arrow key to accept it.

Fish - Friendly Interactive Shell


Fish is a fully-fledged shell program with a wealth of features for you to remember Linux commands in a straightforward manner.

3. Apropos Tool

Apropos searches and displays the name and short description of a keyword, for instance a command name, as written in the man page of that command.

Read Also: 5 Ways to Find a Linux Command Description and Location

For example, if you are searching for a description of the docker-commit command, you can type docker; apropos will search for and list all commands containing the string docker, along with their descriptions.

$ apropos docker

Find Linux Command Description


You can get the description of the exact keyword or command name you have provided as shown.

$ apropos docker-commit
OR
$ apropos -a docker-commit

This is another useful way of remembering Linux commands, to guide you on what command to use for a specific task or if you have forgotten what a command is used for. Read on, because the next tool is even more interesting.

4. Explain Shell Script

Explain Shell is a small Bash script that explains shell commands. It requires the curl program and a working internet connection. It displays a command description summary and, if the command includes a flag, a description of that flag as well.

To use it, first you need to add the following code at the bottom of your $HOME/.bashrc file.

# explain.sh begins
explain () {
  if [ "$#" -eq 0 ]; then
    while read  -p "Command: " cmd; do
      curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$cmd"
    done
    echo "Bye!"
  elif [ "$#" -eq 1 ]; then
    curl -Gs "https://www.mankier.com/api/explain/?cols="$(tput cols) --data-urlencode "q=$1"
  else
    echo "Usage"
    echo "explain                  interactive mode."
    echo "explain 'cmd -o | ...'   one quoted command to explain it."
  fi
}

Save and close the file, then source it or open a fresh terminal window.

$ source .bashrc

Assuming you have forgotten what the command “apropos -a” does, you can use the explain command to help you remember it, as shown.

$ explain 'apropos -a'

Show Linux Command Manual


This script can explain any shell command effectively, thus helping you remember Linux commands. Unlike the explain shell script, the next tool takes a distinct approach: it shows usage examples of a command.

5. Cheat Program

Cheat is a simple, interactive command-line cheat-sheet program which shows use cases of a Linux command, with a number of options and a short, understandable description of each. It is useful for Linux newbies and sysadmins.

To install and use it, check out our complete article about Cheat program and its usage with examples:

  1. Cheat – An Ultimate Command Line ‘Cheat-Sheet’ for Linux Beginners

That’s all! In this article, we have shared 5 command-line tools for remembering Linux commands. If you know any other tools for the same purpose that are missing in the list above, let us know via the feedback form below.


How to Enable, Disable and Install Yum Plug-ins

YUM plug-ins are small programs that extend and improve the overall functionality of the package manager. A few of them are installed by default, while many are not. Yum always notifies you which plug-ins, if any, are loaded and active whenever you run a yum command.

In this short article, we will explain how to turn on or off and configure YUM package manager plug-ins in CentOS/RHEL distributions.

To see all active plug-ins, run a yum command on the terminal. From the output below, you can see that the fastestmirror plug-in is loaded.

# yum search nginx

Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Determining fastest mirrors
...

Enabling YUM Plug-ins

To enable yum plug-ins, ensure that the directive plugins=1 (1 meaning on) exists under the [main] section in the /etc/yum.conf file, as shown below.

# vi /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5

This is a general method of enabling yum plug-ins globally. As we will see later on, you can enable them individually in their respective configuration files.

Disabling YUM Plug-ins

To disable yum plug-ins, simply change the value above to 0 (meaning off), which disables all plug-ins globally.

plugins=0	

At this stage, it is useful to note that:

  • Since a few plug-ins (such as product-id and subscription-manager) offer fundamental yum functionality, it is not recommended to turn off all plug-ins, especially globally.
  • Secondly, disabling all plug-ins globally is provided only as an easy way out, for example when investigating a likely problem with yum.
  • Configurations for various plug-ins are located in /etc/yum/pluginconf.d/.
  • Disabling plug-ins globally in /etc/yum.conf overrides settings in individual configuration files.
  • And you can also disable a single or all yum plug-ins when running yum, as described later on.
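Per the note above, each plug-in has its own file under /etc/yum/pluginconf.d/. A sketch of the typical edit, using a sample copy under /tmp so nothing on the system is touched (fastestmirror is an example name; exact options vary by plug-in):

```shell
# Create a sample per-plug-in config file with the usual [main] section.
conf=/tmp/fastestmirror.conf
printf '[main]\nenabled=1\nverbose=0\n' > "$conf"

# Flip enabled=1 to enabled=0, the same edit you would make in
# /etc/yum/pluginconf.d/<plugin>.conf to disable one plug-in.
sed -i 's/^enabled=1/enabled=0/' "$conf"
grep '^enabled=' "$conf"
```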

Installing and Configuring Extra YUM Plug-ins

You can view a list of all yum plug-ins and their descriptions using this command.

# yum search yum-plugin

Loaded plugins: fastestmirror
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Loading mirror speeds from cached hostfile
 * base: mirror.sov.uk.goscomb.net
 * epel: www.mirrorservice.org
 * extras: mirror.sov.uk.goscomb.net
 * updates: mirror.sov.uk.goscomb.net
========================================================================= N/S matched: yum-plugin ==========================================================================
PackageKit-yum-plugin.x86_64 : Tell PackageKit to check for updates when yum exits
fusioninventory-agent-yum-plugin.noarch : Ask FusionInventory agent to send an inventory when yum exits
kabi-yum-plugins.noarch : The CentOS Linux kernel ABI yum plugin
yum-plugin-aliases.noarch : Yum plugin to enable aliases filters
yum-plugin-auto-update-debug-info.noarch : Yum plugin to enable automatic updates to installed debuginfo packages
yum-plugin-changelog.noarch : Yum plugin for viewing package changelogs before/after updating
yum-plugin-fastestmirror.noarch : Yum plugin which chooses fastest repository from a mirrorlist
yum-plugin-filter-data.noarch : Yum plugin to list filter based on package data
yum-plugin-fs-snapshot.noarch : Yum plugin to automatically snapshot your filesystems during updates
yum-plugin-keys.noarch : Yum plugin to deal with signing keys
yum-plugin-list-data.noarch : Yum plugin to list aggregate package data
yum-plugin-local.noarch : Yum plugin to automatically manage a local repo. of downloaded packages
yum-plugin-merge-conf.noarch : Yum plugin to merge configuration changes when installing packages
yum-plugin-ovl.noarch : Yum plugin to work around overlayfs issues
yum-plugin-post-transaction-actions.noarch : Yum plugin to run arbitrary commands when certain pkgs are acted on
yum-plugin-priorities.noarch : plugin to give priorities to packages from different repos
yum-plugin-protectbase.noarch : Yum plugin to protect packages from certain repositories.
yum-plugin-ps.noarch : Yum plugin to look at processes, with respect to packages
yum-plugin-remove-with-leaves.noarch : Yum plugin to remove dependencies which are no longer used because of a removal
yum-plugin-rpm-warm-cache.noarch : Yum plugin to access the rpmdb files early to warm up access to the db
yum-plugin-show-leaves.noarch : Yum plugin which shows newly installed leaf packages
yum-plugin-tmprepo.noarch : Yum plugin to add temporary repositories
yum-plugin-tsflags.noarch : Yum plugin to add tsflags by a commandline option
yum-plugin-upgrade-helper.noarch : Yum plugin to help upgrades to the next distribution version
yum-plugin-verify.noarch : Yum plugin to add verify command, and options
yum-plugin-versionlock.noarch : Yum plugin to lock specified packages from being updated

To install a plug-in, use the same method as for installing a package. For instance, let's install the changelog plug-in, which displays package changelogs before/after updating.

# yum install yum-plugin-changelog 

Once installed, the changelog plug-in is enabled by default; to confirm, take a look at its configuration file.

# vi /etc/yum/pluginconf.d/changelog.conf
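
On most systems the file simply enables the plug-in; a typical changelog.conf looks like this (the exact contents may vary by distribution):

```ini
[main]
enabled=1
```

A value of `enabled=1` means the plug-in is active; changing it to `0` disables the plug-in without uninstalling its package.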

Now you can view the changelog for a package (httpd in this case) like this.

# yum changelog httpd

Loaded plugins: changelog, fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.linode.com
 * epel: mirror.freethought-internet.co.uk
 * extras: mirrors.linode.com
 * updates: mirrors.linode.com

Listing all changelogs

==================== Installed Packages ====================
httpd-2.4.6-45.el7.centos.4.x86_64       installed
* Wed Apr 12 17:30:00 2017 CentOS Sources <bugs@centos.org> - 2.4.6-45.el7.centos.4
- Remove index.html, add centos-noindex.tar.gz
- change vstring
- change symlink for poweredby.png
- update welcome.conf with proper aliases
...

Disable YUM Plug-ins on the Command Line

As stated before, we can also turn off one or more plug-ins while running a yum command by using these two important options.

  • --noplugins – turns off all plug-ins
  • --disableplugin=plugin_name – disables a single plug-in

You can disable all plug-ins as shown in this yum command.

# yum search --noplugins yum-plugin

The next command disables the fastestmirror plug-in while installing the httpd package.

# yum install --disableplugin=fastestmirror httpd

Loaded plugins: changelog
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-45.el7.centos.4 will be updated
--> Processing Dependency: httpd = 2.4.6-45.el7.centos.4 for package: 1:mod_ssl-2.4.6-45.el7.centos.4.x86_64
---> Package httpd.x86_64 0:2.4.6-67.el7.centos.6 will be an update
...
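
If you want a plug-in disabled permanently rather than per command, you can set `enabled=0` in its configuration file under /etc/yum/pluginconf.d/ (for example, fastestmirror.conf for the fastestmirror plug-in):

```ini
[main]
enabled=0
```

After this change, the plug-in stays off for every yum command until you set the value back to `1`.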

That’s it for now! You may also like to read the following YUM-related articles.

  1. How to Use ‘Yum History’ to Find Out Installed or Removed Packages Info
  2. How to Fix Yum Error: Database Disk Image is Malformed

In this guide, we showed how to activate, configure or deactivate YUM package manager plug-ins in CentOS/RHEL 7. Use the comment form below to ask any question or share your views about this article.
