Using Shell Scripting to Automate Linux System Maintenance Tasks

Some time ago I read that one of the distinguishing characteristics of an effective system administrator / engineer is laziness. It seemed a little contradictory at first but the author then proceeded to explain why:

RHCE Series: Automate Linux System Maintenance Tasks – Part 4

if a sysadmin spends most of his time solving issues and doing repetitive tasks, you can suspect he or she is not doing things quite right. In other words, an effective system administrator / engineer should develop a plan to perform repetitive tasks with as little action on his / her part as possible, and should foresee problems by using,

for example, the tools reviewed in Part 3 – Monitor System Activity Reports Using Linux Toolsets of this series. Thus, although he or she may not seem to be doing much, it’s because most of his / her responsibilities have been taken care of with the help of shell scripting, which is what we’re going to talk about in this tutorial.

What is a shell script?

In a few words, a shell script is nothing more and nothing less than a program that is executed step by step by a shell, which is another program that provides an interface layer between the Linux kernel and the end user.

By default, the shell used for user accounts in RHEL 7 is bash (/bin/bash). If you want a detailed description and some historical background, you can refer to this Wikipedia article.
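To confirm which shell your own account uses, you can print the SHELL environment variable, which on a default RHEL 7 account shows /bin/bash:

$ echo $SHELL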

To find out more about the enormous set of features provided by this shell, you may want to check out its man page, which can be downloaded in PDF format at (Bash Commands). Other than that, it is assumed that you are familiar with Linux commands (if not, I strongly advise you to go through the A Guide from Newbies to SysAdmin article on Tecmint.com before proceeding). Now let's get started.

Writing a script to display system information

For our convenience, let’s create a directory to store our shell scripts:

# mkdir scripts
# cd scripts

And open a new text file named system_info.sh with your preferred text editor. We will begin by inserting a few comments at the top and some commands afterwards:

#!/bin/bash

# Sample script written for Part 4 of the RHCE series
# This script will return the following set of system information:
# -Hostname information:
echo -e "\e[31;43m***** HOSTNAME INFORMATION *****\e[0m"
hostnamectl
echo ""
# -File system disk space usage:
echo -e "\e[31;43m***** FILE SYSTEM DISK SPACE USAGE *****\e[0m"
df -h
echo ""
# -Free and used memory in the system:
echo -e "\e[31;43m***** FREE AND USED MEMORY *****\e[0m"
free
echo ""
# -System uptime and load:
echo -e "\e[31;43m***** SYSTEM UPTIME AND LOAD *****\e[0m"
uptime
echo ""
# -Logged-in users:
echo -e "\e[31;43m***** CURRENTLY LOGGED-IN USERS *****\e[0m"
who
echo ""
# -Top 5 processes as far as memory usage is concerned
echo -e "\e[31;43m***** TOP 5 MEMORY-CONSUMING PROCESSES *****\e[0m"
ps -eo %mem,%cpu,comm --sort=-%mem | head -n 6
echo ""
echo -e "\e[1;32mDone.\e[0m"

Next, give the script execute permissions:

# chmod +x system_info.sh

and run it:

# ./system_info.sh

Note that the headers of each section are shown in color for better visualization:

Server Monitoring Shell Script

That functionality is provided by this command:

echo -e "\e[COLOR1;COLOR2m<YOUR TEXT HERE>\e[0m"

Where COLOR1 and COLOR2 are the foreground and background colors, respectively (more info and options are explained in this entry from the Arch Linux Wiki) and <YOUR TEXT HERE> is the string that you want to show in color.
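For example, to reproduce the red-on-yellow headers used in the script above (31 is the red foreground, 43 the yellow background):

echo -e "\e[31;43m***** WARNING *****\e[0m"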

Automating Tasks

The tasks that you may need to automate vary from case to case. Thus, we cannot possibly cover all of the possible scenarios in a single article, but we will present three classic tasks that can be automated using shell scripting:

1) update the local file database, 2) find (and alternatively delete) files with 777 permissions, and 3) alert when filesystem usage surpasses a defined limit.

Let’s create a file named auto_tasks.sh in our scripts directory with the following content:

#!/bin/bash

# Sample script to automate tasks:
# -Update local file database:
echo -e "\e[4;32mUPDATING LOCAL FILE DATABASE\e[0m"
updatedb
if [ $? -eq 0 ]; then
        echo "The local file database was updated correctly."
else
        echo "The local file database was not updated correctly."
fi
echo ""

# -Find and / or delete files with 777 permissions.
echo -e "\e[4;32mLOOKING FOR FILES WITH 777 PERMISSIONS\e[0m"
# Enable either option (comment out the other line), but not both.
# Option 1: Delete files without prompting for confirmation. Assumes GNU version of find.
#find . -type f -perm 0777 -delete
# Option 2: Ask for confirmation before deleting files. More portable across systems.
find . -type f -perm 0777 -exec rm -i {} +
echo ""
# -Alert when file system usage surpasses a defined limit 
echo -e "\e[4;32mCHECKING FILE SYSTEM USAGE\e[0m"
THRESHOLD=30
while read line; do
        # This variable stores the file system path as a string
        FILESYSTEM=$(echo $line | awk '{print $1}')
        # This variable stores the use percentage (XX%)
        PERCENTAGE=$(echo $line | awk '{print $5}')
        # Use percentage without the % sign.
        USAGE=${PERCENTAGE%?}
        if [ $USAGE -gt $THRESHOLD ]; then
                echo "The remaining available space in $FILESYSTEM is critically low. Used: $PERCENTAGE"
        fi
done < <(df -h --total | grep -vi filesystem)

Please note that there is a space between the two < signs in the last line of the script.
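As an aside, <(command) is process substitution: feeding the loop with done < <(...) rather than piping df into while keeps the loop in the current shell, so variables set inside it remain visible afterwards. A minimal sketch of the difference:

count=0
while read line; do
        count=$((count+1))
done < <(df -h | grep -vi filesystem)
# count keeps its value here; with "df -h | while ..." it would be lost in a subshell.
echo "Checked $count filesystems."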

Shell Script to Find 777 Permissions

Using Cron

To take efficiency one step further, you will not want to sit in front of your computer and run those scripts manually. Rather, you will use cron to schedule those tasks to run on a periodic basis, and either send the results to a predefined list of recipients via email or save them to a file that can be viewed using a web browser.

The following script (filesystem_usage.sh) will run the well-known df -h command, format the output into an HTML table and save it in the report.html file:

#!/bin/bash
# Sample script to demonstrate the creation of an HTML report using shell scripting
# Web directory
WEB_DIR=/var/www/html
# A little CSS and table layout to make the report look a little nicer
echo "<HTML>
<HEAD>
<style>
.titulo{font-size: 1em; color: white; background:#0863CE; padding: 0.1em 0.2em;}
table
{
border-collapse:collapse;
}
table, td, th
{
border:1px solid black;
}
</style>
<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />
</HEAD>
<BODY>" > $WEB_DIR/report.html
# View hostname and insert it at the top of the html body
HOST=$(hostname)
echo "Filesystem usage for host <strong>$HOST</strong><br>
Last updated: <strong>$(date)</strong><br><br>
<table border='1'>
<tr><th class='titulo'>Filesystem</th>
<th class='titulo'>Size</th>
<th class='titulo'>Use %</th>
</tr>" >> $WEB_DIR/report.html
# Read the output of df -h line by line
while read line; do
echo "<tr><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $1}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $2}' >> $WEB_DIR/report.html
echo "</td><td align='center'>" >> $WEB_DIR/report.html
echo $line | awk '{print $5}' >> $WEB_DIR/report.html
echo "</td></tr>" >> $WEB_DIR/report.html
done < <(df -h | grep -vi filesystem)
echo "</table></BODY></HTML>" >> $WEB_DIR/report.html
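Assuming a web server such as Apache is already serving /var/www/html, you can quickly inspect the generated report from the command line:

# curl http://localhost/report.html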

In our RHEL 7 server (192.168.0.18), this looks as follows:

Server Monitoring Report

You can add to that report as much information as you want. To run the script every day at 1:30 pm, add the following crontab entry:

30 13 * * * /root/scripts/filesystem_usage.sh
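If you are wondering where that line goes, edit root's crontab with crontab -e and add it there. Since cron mails any output or errors a job produces, you can also set the MAILTO variable at the top of the crontab (the address below is only a placeholder):

# crontab -e
MAILTO=admin@example.com
30 13 * * * /root/scripts/filesystem_usage.sh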

Summary

You will most likely think of several other tasks that you want or need to automate; as you can see, using shell scripting will greatly simplify this effort. Feel free to let us know if you find this article helpful and don’t hesitate to add your own ideas or comments via the form below.

Source

Fabric – Automate Your Linux Administration Tasks and Application Deployments Over SSH

When it comes to managing remote machines and deployment of applications, there are several command line tools in existence, though many share a common problem: a lack of detailed documentation.

In this guide, we shall introduce Fabric and cover the steps to get started using it to improve the administration of groups of servers.

Automate Linux Administration Tasks Using Fabric

Fabric is a Python library and a powerful command line tool for performing system administration tasks such as executing SSH commands on multiple machines and application deployment.

Read Also: Use Shell Scripting to Automate Linux System Maintenance Tasks

Having a working knowledge of Python can be helpful when using Fabric, but may certainly not be necessary.

Reasons why you should choose fabric over other alternatives:

  1. Simplicity
  2. It is well-documented
  3. You don’t need to learn another language if you’re already a python guy.
  4. Easy to install and use.
  5. It is fast in its operations.
  6. It supports parallel remote execution.

How to Install Fabric Automation Tool in Linux

An important characteristic about fabric is that the remote machines which you need to administer only need to have the standard OpenSSH server installed. You only need certain requirements installed on the server from which you are administering the remote servers before you can get started.

Requirements:

  1. Python 2.5+ with the development headers
  2. Python-setuptools and pip (optional, but preferred)
  3. gcc

Fabric is easily installed using pip (highly recommended), but you may also prefer to use your default package manager (yum, dnf or apt-get) to install the fabric package, typically called fabric or python-fabric.

For RHEL/CentOS based distributions, you must have the EPEL repository installed and enabled on the system to install the fabric package.

# yum install fabric   [On RedHat based systems]  
# dnf install fabric   [On Fedora 22+ versions]

For Debian and its derivatives such as Ubuntu and Mint, users can simply use apt-get to install the fabric package as shown:

# apt-get install fabric

If you want to install the development version of fabric, you may use pip to grab the most recent master branch.

# yum install python-pip       [On RedHat based systems] 
# dnf install python-pip       [On Fedora 22+ versions]
# apt-get install python-pip   [On Debian based systems]

Once pip has been installed successfully, you may use pip to grab the latest version of fabric as shown:

# pip install fabric
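Once the installation completes, you can confirm that the fab tool is available on your PATH:

# fab --version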

How to Use Fabric to Automate Linux Administration Tasks

So let's get started on how you can use Fabric. During the installation process, a Python script called fab was added to a directory in your path. The fab script does all the work when using fabric.

Executing commands on the local Linux machine

By convention, you need to start by creating a Python file called fabfile.py using your favorite editor. Remember you can give this file a different name as you wish but you will need to specify the file path as follows:

# fab --fabfile /path/to/the/file.py

Fabric uses fabfile.py to execute tasks. The fabfile should be in the same directory where you run the Fabric tool.

Example 1: Let’s create a basic Hello World first.

# vi fabfile.py

Add these lines of code in the file.

def hello():
       print('Hello world, Tecmint community')

Save the file and run the command below.

# fab hello

Fabric Tool Usage

Let us now look at an example of a fabfile.py to execute the uptime command on the local machine.

Example 2: Open a new fabfile.py file as follows:

# vi fabfile.py

And paste the following lines of code in the file.

#!/usr/bin/env python
from fabric.api import local
def uptime():
  local('uptime')

Then save the file and run the following command:

# fab uptime

Fabric: Check System Uptime

Executing commands on remote Linux machines to automate tasks

The Fabric API uses a configuration dictionary (Python's equivalent of an associative array) known as env, which stores values that control what Fabric does.

env.hosts is a list of servers on which you want to run Fabric tasks. If your network is 192.168.0.0 and you wish to manage hosts 192.168.0.2 and 192.168.0.6 with your fabfile, you could configure env.hosts as follows:

#!/usr/bin/env python
from fabric.api import env
env.hosts = [ '192.168.0.2', '192.168.0.6' ]

The above lines of code only specify the hosts on which you will run Fabric tasks, but do nothing more. Therefore, you can define some tasks; Fabric provides a set of functions which you can use to interact with your remote machines.

Although there are many functions, the most commonly used are:

  1. run – which runs a shell command on a remote machine.
  2. local – which runs a command on the local machine.
  3. sudo – which runs a shell command on a remote machine, with root privileges.
  4. get – which downloads one or more files from a remote machine.
  5. put – which uploads one or more files to a remote machine.
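As a side note, you do not always have to hard-code env.hosts in the fabfile: Fabric also accepts a host list on the command line through the -H (or --hosts) option. For instance, assuming the uptime task from Example 2, the following would run it on both machines:

# fab -H 192.168.0.2,192.168.0.6 uptime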

Example 3: To echo a message on multiple machines create a fabfile.py such as the one below.

#!/usr/bin/env python
from fabric.api import env, run
env.hosts = ['192.168.0.2','192.168.0.6']
def echo():
      run("echo -n 'Hello, you are tuned to Tecmint ' ")

To execute the tasks, run the following command:

# fab echo

Fabric: Automate Linux Tasks on Remote Linux

Example 4: You can improve the fabfile.py which you created earlier (the one that executes the uptime command on the local machine) so that it runs the uptime command and also checks disk usage using the df command on multiple machines, as follows:

#!/usr/bin/env python
from fabric.api import env, run
env.hosts = ['192.168.0.2','192.168.0.6']
def uptime():
      run('uptime')
def disk_space():
     run('df -h')

Save the file and run the following command:

# fab uptime
# fab disk_space

Fabric: Automate Tasks on Multiple Linux Systems
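Note that fab can also run several tasks in a single invocation; with the fabfile above, the following runs uptime and then disk_space on each host in turn:

# fab uptime disk_space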

Automatically Deploy LAMP Stack on Remote Linux Server

Example 5: Let us look at an example to deploy LAMP (Linux, Apache, MySQL/MariaDB and PHP) on a remote Linux server.

We shall write a function that will allow LAMP to be installed remotely using root privileges.

For RHEL/CentOS and Fedora
#!/usr/bin/env python
from fabric.api import env, run
env.hosts = ['192.168.0.2','192.168.0.6']
def deploy_lamp():
  run("yum install -y httpd mariadb-server php php-mysql")

For Debian/Ubuntu and Linux Mint
#!/usr/bin/env python
from fabric.api import env, sudo
env.hosts = ['192.168.0.2','192.168.0.6']
def deploy_lamp():
  sudo("apt-get install -q apache2 mysql-server libapache2-mod-php5 php5-mysql")

Save the file and run the following command:

# fab deploy_lamp

Note: Due to large output, it’s not possible for us to create a screencast (animated gif) for this example.

Now you are able to automate Linux server management tasks using Fabric, with the help of the features and examples given above.

Some Useful Options to Use with Fabric

  1. You can run fab --help to view help information and a long list of available command line options.
  2. An important option is --fabfile=PATH that helps you to specify a different Python module file to import, other than fabfile.py.
  3. To specify a username to use when connecting to remote hosts, use the --user=USER option.
  4. To use a password for authentication and/or sudo, use the --password=PASSWORD option.
  5. To print detailed info about command NAME, use the --display=NAME option.
  6. To specify the output format for --list (choices: short, normal, nested), use the --list-format=FORMAT option.
  7. To print the list of possible commands and exit, include the --list option.
  8. You can specify the location of the config file to use with the --config=PATH option.
  9. To display colored error output, use --colorize-errors.
  10. To view the program's version number and exit, use the --version option.

Summary

Fabric is a powerful tool; it is well documented and provides easy usage for newbies. You can read the full documentation to get more understanding of it. If you have any information to add, or in case of any errors you encounter during installation and usage, you can leave a comment and we shall find ways to fix them.

Reference: Fabric documentation

Source

4 Good Open Source Log Monitoring and Management Tools for Linux

4 Linux Log Monitoring and Management Tools

When an operating system such as Linux is running, there are many events happening and processes that run in the background to enable efficient and reliable use of system resources. These events may happen in system software for example the init process or user applications such as Apache, MySQL, FTP and many more.

In order to understand the state of the system and of the different applications and how they are working, System Administrators have to keep reviewing logfiles on a daily basis in production environments.

You can imagine having to review logfiles from several system areas and applications; that is where logging systems come in handy. They help to monitor, review, analyze and even generate reports from different logfiles, as configured by a System Administrator.

  1. How to Monitor System Usages, Outages and Troubleshoot Linux Systems
  2. How to Manage Server Logs (Configure and Rotate) in Linux
  3. How to Monitor Linux Server Logs Real Time with Log.io Tool

In this article, we shall look at the top four most used open source log management systems in Linux today. The standard logging protocol in most if not all distributions today is syslog.

1. Graylog 2

This is a fully integrated open source log management system that enables System Administrators to collect, index, and analyze both structured and unstructured data from just about any available source system.

Graylog Linux Log Management Tool

This logging system is highly pluggable and enables centralized log management from many systems. It is integrated with external components such as MongoDB for metadata and Elasticsearch used to keep logfiles and enable text search.

Graylog 2 has the following features:

  1. Ready for enterprise level production
  2. Includes a dashboard and an alerting system
  3. Can work on data from any log source
  4. Enables real time log processing
  5. Enables parsing of unstructured data
  6. Extensible and highly customizable
  7. Offers an operational data hub

For more information view the Graylog 2 website.

2. Logcheck

Logcheck is an open source log management system that helps System Administrators automatically identify unknown problems and security violations in logfiles. It periodically sends messages about the analysis results to a configured e-mail address.

Logcheck Scans System Logs

Logcheck is designed to run as a cron job on an hourly basis and on every system reboot by default. Three different levels of logfile filtering are available in this logging system:

  1. Paranoid: is intended for high-security systems that are running as few services as possible.
  2. Server: this is the default filtering level for logcheck and its rules are defined for many different system daemons. The rules defined under paranoid level are also included under this level.
  3. Workstation: it is for sheltered systems and helps to filter most of the messages. It also includes rules defined under paranoid and server levels.
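On Debian-based systems, the filtering level is usually selected through the REPORTLEVEL variable in logcheck's configuration file (the path and variable name below are as commonly shipped; verify them on your distribution):

REPORTLEVEL="server" in /etc/logcheck/logcheck.conf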

Logcheck is also capable of sorting the messages to be reported into three possible layers: security events, system events and system attack alerts. A System Administrator can choose the level of detail to which system events are reported, depending on the filtering level, though this does not affect security events and system attack alerts.

Read more about it at the development team's logcheck website.

3. Logwatch

Logwatch is a Linux/Unix system logfile analyzer and reporter that can be easily customized; it also allows a System Administrator to add additional plugins and create custom scripts that serve specific logging needs.

Logwatch Linux Log Analyzer

What it does is review system logfiles for a given period of time and then generate a report based on the system areas from which you wish to collect information. One feature of this logging system is that it is easy to use for new System Administrators, and it also works on most Linux distributions available and many Unix systems.

Visit the project homepage of Logwatch

4. Logstash

Logstash is also an open source data collection and logging system available on Linux that is capable of real-time pipelining. It was originally designed for data collection, but its newer versions integrate several other capabilities, such as a wide range of input data formats, filtering, and output plugins and formats.

LogStash

It can effectively unify data from various log source systems and normalize the data into targets of a System Administrator's choice. Logstash also allows System Administrators to cleanse, compare and standardize all their logging data for advanced analytics, and to create visualization use cases as well.

Read more about it at Logstash website.

Summary

That is it for now, and remember that these are not all the available log management systems that you can use on Linux. We shall keep reviewing and updating the list in future articles. I hope you find this article useful, and you can let us know of other important logging tools or systems out there by leaving a comment.

Source

Install GIT to Create and Share Your Own Projects on GITHub Repository

If you have spent any amount of time recently in the Linux world, then chances are that you have heard of GIT. GIT is a distributed version control system that was created by Linus Torvalds, the mastermind of Linux itself. It was designed to be a superior version control system to those that were readily available, the two most common of these being CVS and Subversion (SVN).

Whereas CVS and SVN use the Client/Server model for their systems, GIT operates a little differently. Instead of downloading a project, making changes, and uploading it back to the server, GIT makes the local machine act as a server.

Install GitHub Repository

In other words, you download the project with everything: the source files, version changes, and individual file changes, right to the local machine, where you check in, check out, and perform all of the other version control activities. Once you are finished, you then merge the project back to the repository.

This model provides many advantages, the most obvious being that if you are disconnected from your central server for whatever reason, you still have access to your project.

In this tutorial, we are going to install GIT, create a repository, and upload that repository to GitHub. You will need to go to http://www.github.com and create an account and a repository if you wish to upload your project there.

How to Install GIT in Linux

On Debian/Ubuntu/Linux Mint, if it is not already installed, you can install it using apt-get command.

$ sudo apt-get install git

On Red Hat/CentOS/Fedora systems, you can install it using the yum command.

$ sudo yum install git

If you prefer to install and compile it from source, you can follow the commands below.

$ wget http://kernel.org/pub/software/scm/git/git-1.8.4.tar.bz2
$ tar xvjf git-1.8.4.tar.bz2
$ cd git-*
$ ./configure
$ make
$ make install

How to Create Git Project

Now that GIT is installed, let's set it up. In your home directory, there will be a file called "~/.gitconfig". This holds your global GIT configuration, such as your identity. Let's give it your name and your email:

$ git config --global user.name "Your Name"
$ git config --global user.email youremail@mailsite.com

Now we are going to create our first repository. You can make any directory a GIT repository. cd to one that has some source files and do the following:

$ cd /home/rk/python-web-scraper
$ git init

In that directory, a new hidden directory has been created called “.git“. This directory is where GIT stores all of its information about your project, and any changes that you make to it. If at any time you no longer wish for any directory to be a part of a GIT repository, you just delete this directory in the typical fashion:

$ rm -rf .git

Now that we have a repository created, we need to add some files to the project. You can add any type of file to your GIT project, but for now, let’s generate a “README.md” file that gives a little info about your project (also shows up in the README block at GitHub) and add some source files.

$ vi README.md

Enter in info about your project, save and exit.

$ git add README.md
$ git add *.py

With the two above commands, we have added the “README.md” file to your GIT project, and then we added all Python source (*.py) files in the current directory. Worth noting is that 99 times out of 100 when you are working on a GIT project, you are going to be adding all of the files in the directory. You can do so like this:

$ git add .
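Before committing, it is worth double-checking what has been staged:

$ git status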

Now we are ready to commit the project to a stage, meaning that this is a marker point in the project. You do this with the git commit "-m" command, where the "-m" option specifies a message you want to give it. Since this is our first commit of our project, we will enter "first commit" as our "-m" string.

$ git commit -m 'first commit'

How to Upload Project to GitHub Repository

We are now ready to push your project up to GitHub. You will need the login information that you made when you created your account. We are going to take this information and pass it to GIT so it knows where to go. Obviously, you’ll want to replace ‘user’ and ‘repo.git’ with the proper values.

$ git remote add origin git@github.com:user/repo.git
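To confirm that the remote was registered correctly before pushing, you can list the configured remotes:

$ git remote -v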

Now, it is time to push, i.e. copy from your repository to the remote repository. The git push command takes two arguments: the "remotename" and the "branchname". These two names are usually origin and master, respectively:

$ git push origin master

That's it! Now you can go to the https://github.com/username/repo link to see your own git project.

Source

11 Best Graphical Git Clients and Git Repository Viewers for Linux

Git is a free and open source distributed version control system for software development and several other version control tasks. It is designed to cope with everything from small to very large projects, with speed, efficiency and data integrity.

Linux users can manage Git primarily from the command line; however, there are several graphical user interface (GUI) Git clients that facilitate efficient and reliable usage of Git on a Linux desktop and offer most, if not all, of the command line operations.

Therefore, below is a list of some of the best Git front-ends with a GUI for Linux desktop users.

Suggested Read: Install GIT to Create and Share Your Own Projects on GITHub Repository

That said, let’s proceed to listing them.

1. GitKraken

GitKraken is a cross-platform, elegant and highly efficient Git client for Linux. It works on Unix-like systems such as Linux and Mac OS X, and on Windows as well. It's designed to boost a Git user's productivity through features such as:

  1. Visual interaction and hints
  2. 100% standalone
  3. Supports multiple profiles
  4. Supports single-click undo and redo functions
  5. Built-in merge tool
  6. A fast and intuitive search tool
  7. Easily adapts to a user’s workspace and also supports submodules and Gitflow
  8. Integrates with a user’s GitHub or Bitbucket account
  9. Keyboard shortcuts plus lots more.

GitKraken Git Client for Linux

Visit Homepage: https://www.gitkraken.com/

2. Git-cola

Git-cola is a powerful, configurable Git client for Linux that offers users a sleek GUI. It's written in Python and released under the GPL license.

The Git-cola interface comprises several collaborative tools that can be hidden and rearranged according to a user's wish. It also offers users many useful keyboard shortcuts.

Its additional features include:

  1. Multiple sub-commands
  2. Custom window settings
  3. Configurable and environment variables
  4. Language settings
  5. Supports custom GUI settings

Git-cola – Git Client for Linux

Visit Homepage: http://git-cola.github.io/

3. SmartGit

SmartGit is also a cross-platform, powerful, popular GUI Git client for Linux, Mac OS X and Windows. Referred to as Git for professionals, it enables users to master daily Git challenges and boosts their productivity through efficient workflows.

Users can utilize it with their own repos or other hosting providers. It ships with the following illustrious features:

  1. Supports Git pull requests and comments
  2. Supports SVN repositories
  3. Comes with Git-flow, SSH-client and file compare/merge tools
  4. Integrates strongly with GitHub, BitBucket and Atlassian Stash

SmartGit – Git Client for Linux

Visit Homepage: http://www.syntevo.com/smartgit/

4. Giggle

Giggle is a free GUI client for the Git content tracker that uses the GTK+ toolkit and only runs on Linux. It was developed as a result of a hackathon at Imendio in January 2007, and has now been integrated into the GNOME infrastructure. It's basically a Git viewer that allows users to browse their repository history.

Giggle – Git Client for Linux

Visit Homepage: https://wiki.gnome.org/giggle

5. Gitg

Gitg is a GNOME GUI front-end to view Git repositories. It comprises features such as GNOME Shell integration through the app menu, viewing recently used repositories, and browsing repository history.

It also offers a files view, staging area to compose commits, and commit staged changes, open repository, clone repository and user information.

Gitg – Client to View Git Repositories

Visit Homepage: https://wiki.gnome.org/Apps/Gitg

6. Git GUI

Git GUI is a cross-platform and portable Tcl/Tk based GUI front-end for Git that works on Linux, Windows and Mac OS X. It mainly focuses on commit generation, enabling users to make changes to their repository by generating new commits, amending existing ones, and building branches. Additionally, it also allows them to perform local merges, and fetch/push to remote repositories.

GitGui – Client for Git

Visit Homepage: https://www.kernel.org/pub/software/scm/git/docs/git-gui.html

7. Qgit

QGit is a simple, fast and straightforward yet powerful GUI Git client written in Qt/C++. It offers users a nice UI and allows them to browse revision history, and view patch content and changed files graphically, following distinct development branches.

A few of its features are listed below:

  1. Views revisions, diffs, file history, file annotations and archive trees
  2. Supports commit changes
  3. Enables users to apply or format patch series from selected commits
  4. Also supports drag and drop functions for commits between two QGit instances
  5. Associates commands sequences, scripts and anything executable to a custom action
  6. It implements a GUI for many common StGit commands such as push/pop and apply/format patches and many more

Qgit – Git Client for Linux

Visit Homepage: http://digilander.libero.it/mcostalba/

8. GitForce

GitForce is also an easy-to-use and intuitive GUI front-end for Git that runs on Linux and Windows, plus any OS with Mono support. It provides users some of the most common Git operations and it is powerful enough to be used exclusively without involving any other command line Git tool.

GitForce – Git Client for Linux

Visit Homepage: https://sites.google.com/site/gitforcetool/home

9. Egit

Egit is a Git plugin for the Eclipse IDE; it's an Eclipse Team provider for Git. The project is aimed at implementing Eclipse tooling on top of the JGit Java implementation of Git. EGit comprises features such as a repository explorer, new files, a commit window and a history view.

Egit – Git Plugin for Eclipse IDE

Visit Homepage: http://www.eclipse.org/egit/

10. GitEye

GitEye is a simple and intuitive GUI client for Git that integrates easily with planning, tracking, code reviewing and build tools such as TeamForge, GitHub, Jira, Bugzilla and lots more. It is flexible, with powerful visualization and history management features.

Visit Homepage: http://www.collab.net/products/giteye

11. GITK (Generalized Interface Toolkit)

GITK is a multi-layered GUI front-end for Git that enables users to work effectively with software in any situation. Its main aim is to vividly enrich the adaptivity of software; it runs on a multi-layered architecture where interface functionality is adequately separated from look and feel.

Importantly, GITK lets each user choose the kind and style of UI that fits his/her needs, depending on ability, preferences and current environment.

Visit Homepage: http://gitk.sourceforge.net/

Summary

In this post, we reviewed a few of the best known Git clients with a GUI for Linux; however, there could be one or two missing from the list above, so get back to us with any suggestions or feedback through the comment section below. You can as well tell us your best Git client with a GUI and why you prefer using it.

Source

10 Top Open Source Artificial Intelligence Tools for Linux

In this post, we shall cover a few of the top, open-source artificial intelligence (AI) tools for the Linux ecosystem. Currently, AI is one of the ever advancing fields in science and technology, with a major focus geared towards building software and hardware to solve everyday life challenges in areas such as health care, education, security, manufacturing, banking and so much more.

Suggested Read: 20 Free Open Source Softwares I Found in Year 2015

Below is a list of a number of platforms designed and developed for supporting AI, that you can utilize on Linux and possibly many other operating systems. Remember this list is not arranged in any specific order of interest.

1. Deep Learning For Java (Deeplearning4j)

Deeplearning4j is a commercial grade, open-source, plug and play, distributed deep-learning library for the Java and Scala programming languages. It's designed specifically for business-related applications, and is integrated with Hadoop and Spark on top of distributed CPUs and GPUs.

DL4J is released under the Apache 2.0 license and provides GPU support for scaling on AWS and is adapted for micro-service architecture.

Deeplearning4j – Deep Learning for Java

Visit Homepage: http://deeplearning4j.org/

2. Caffe – Deep Learning Framework

Caffe is a modular and expressive deep learning framework built with speed in mind. It is released under the BSD 2-Clause license, and it's already supporting several community projects in areas such as research, startup prototypes, and industrial applications in fields such as vision, speech and multimedia.

Caffe – Deep Learning Framework

Visit Homepage: http://caffe.berkeleyvision.org/

3. H2O – Distributed Machine Learning Framework

H2O is an open-source, fast, scalable and distributed machine learning framework, plus an assortment of algorithms equipped on the framework. It supports smarter applications such as deep learning, gradient boosting, random forests, generalized linear modeling (i.e. logistic regression, Elastic Net) and many more.

It is a business-oriented artificial intelligence tool for decision making from data; it enables users to draw insights from their data using faster and better predictive modeling.

H2O – Distributed Machine Learning Framework

Visit Homepage: http://www.h2o.ai/

4. MLlib – Machine Learning Library

MLlib is an open-source, easy-to-use and high performance machine learning library developed as part of Apache Spark. It is essentially easy to deploy and can run on existing Hadoop clusters and data.

Suggested Read: 12 Best Open Source Text Editors (GUI + CLI) I Found in 2015

MLlib also ships with a collection of algorithms for classification, regression, recommendation, clustering, survival analysis and so much more. Importantly, it can be used in the Python, Java, Scala and R programming languages.

MLlib – Machine Learning Library

Visit Homepage: https://spark.apache.org/mllib/

5. Apache Mahout

Mahout is an open-source framework designed for building scalable machine learning applications; it has the three prominent features listed below:

  1. Provides a simple and extensible programming environment
  2. Offers a variety of prepackaged algorithms for Scala + Apache Spark, H2O as well as Apache Flink
  3. Includes Samsara, a vector math experimentation environment with R-like syntax

Apache Mahout

Visit Homepage: http://mahout.apache.org/

6. Open Neural Networks Library (OpenNN)

OpenNN is also an open-source class library written in C++ for deep learning; it is used to implement neural networks. However, it is only optimal for experienced C++ programmers and persons with tremendous machine learning skills. It's characterized by a deep architecture and high performance.

OpenNN – Open Neural Networks Library

Visit Homepage: http://www.opennn.net/

7. Oryx 2

Oryx 2 is a continuation of the initial Oryx project. It's developed on Apache Spark and Apache Kafka as a re-architecting of the lambda architecture, dedicated towards achieving real-time machine learning.

It is a platform for application development, and ships with certain applications as well, for collaborative filtering, classification, regression and clustering purposes.

Oryx2 – Re-architecting Lambda Architecture

Visit Homepage: http://oryx.io/

8. OpenCyc

OpenCyc is an open-source portal to the largest and most comprehensive general knowledge base and commonsense reasoning engine in the world. It includes a large number of Cyc terms arranged in a precisely designed ontology for application in areas such as:

  1. Rich domain modeling
  2. Domain-specific expert systems
  3. Text understanding
  4. Semantic data integration as well as AI games plus many more.

OpenCyc

Visit Homepage: http://www.cyc.com/platform/opencyc/

9. Apache SystemML

SystemML is an open-source artificial intelligence platform for machine learning, ideal for big data. Its main features are that it runs on R and Python-like syntax, is focused on big data, and is designed specifically for high-level math. How it works is well explained on the homepage, including a video demonstration for clear illustration.

Suggested Read: 18 Best IDEs for C/C++ Programming or Source Code Editors on Linux

There are several ways to use it including Apache Spark, Apache Hadoop, Jupyter and Apache Zeppelin. Some of its notable use cases include automotives, airport traffic and social banking.

Apache SystemML – Machine Learning Platform

Visit Homepage: http://systemml.apache.org/

10. NuPIC

NuPIC is an open-source framework for machine learning that is based on Hierarchical Temporal Memory (HTM), a theory of the neocortex. The HTM program integrated in NuPIC is implemented for analyzing real-time streaming data, where it learns time-based patterns existing in data, predicts imminent values and reveals any irregularities.

Its notable features include:

  1. Continuous online learning
  2. Temporal and spatial patterns
  3. Real-time streaming data
  4. Prediction and modeling
  5. Powerful anomaly detection
  6. Hierarchical temporal memory

NuPIC Machine Intelligence

Visit Homepage: http://numenta.org/

With the rise and ever advancing research in AI, we are bound to witness more tools spring up to help make this area of technology a success especially for solving daily scientific challenges along with educational purposes.

Are you interested in AI? What is your say? Offer us your thoughts, suggestions or any productive feedback about the subject matter via the comment section below, and we shall be delighted to hear more from you.

Source

6 Best Email Clients for Linux Systems

Email is an old way of communication, yet it still remains one of the most basic and important methods of sharing information to date, but the way we access emails has changed over the years. Moving on from web applications, a lot of people now prefer to use email clients more than ever before.

6 Best Linux Email Clients

An email client is software that enables a user to manage their inbox, sending, receiving and organizing messages, simply from a desktop or a mobile phone.

Email clients have many advantages, and they have become more than just utilities for sending and receiving messages; they are now powerful information management tools.

Don’t Miss: 4 Best Terminal Email Clients For Linux

In this particular case, we shall focus on desktop email clients that allow you to manage your email messages from your Linux desktop without the hassle of having to sign in and out, as is the case with web email service providers.

There are several native email clients for Linux desktops but we shall look at some of the best that you can use.

1. Thunderbird Email Client

Thunderbird is an open source email client developed by Mozilla; it is also cross-platform and has some great attributes, offering users speed, privacy and the latest technologies for accessing email services.

Thunderbird Email Client for Linux

Thunderbird has been around for a long time, and though it is becoming less popular, it still remains one of the best email clients on Linux desktops.

It is feature-rich, with capabilities such as:

  1. Enables users to have personalized email addresses
  2. A one click address book
  3. An attachment reminder
  4. Multiple-channel chat
  5. Tabs and search
  6. Enables searching the web
  7. A quick filter toolbar
  8. Message archive
  9. Activity manager
  10. Large files management
  11. Security features such as phishing protection, no tracking
  12. Automated updates plus many more

Visit Homepage: https://www.mozilla.org/en-US/thunderbird/

2. Evolution Email Client

Evolution is not just an email client but an information management software that offers an integrated email client including calendar and address book functionality.

Evolution Email Client for Linux

It offers some of the basic email management functionalities plus advanced features including the following:

  1. Account management
  2. Changing mail window layout
  3. Deleting and undeleting messages
  4. Sorting and organizing mails
  5. Shortcut keys functionalities for reading mails
  6. Mail encryption and certificates
  7. Sending invitations by mail
  8. Autocompletion of email addresses
  9. Message forwarding
  10. Spell checking
  11. Working with email signatures
  12. Working offline plus many others

Visit Homepage: https://wiki.gnome.org/Apps/Evolution

3. KMail Email Client

It is the email component of Kontact, KDE’s unified personal information manager.

Kmail Email Client for Linux

KMail also has many of the same features as the other email clients we have looked at above, and these include:

  1. Supports standard mail protocols such as SMTP, IMAP and POP3
  2. Supports plain text and secure logins
  3. Reading and writing HTML mail
  4. Integration of international character set
  5. Integration with spam checkers such as Bogofilter, SpamAssassin plus many more
  6. Support for receiving and accepting invitations
  7. Powerful search and filter capabilities
  8. Spell checking
  9. Encrypted passwords saving in KWallet
  10. Backup support
  11. Fully integrated with other Kontact components plus many more

Visit Homepage: https://userbase.kde.org/KMail

4. Geary Email Client

Geary is a simple and easy-to-use email client built with a modern interface for the GNOME 3 desktop. If you are looking for a simple and efficient email client that offers the basic functionalities, then Geary can be a good choice for you.

Geary Email Client for Linux

It has the following features:

  1. Supports common email service providers such as Gmail, Yahoo! Mail, plus many popular IMAP servers
  2. Simple, modern and straight forward interface
  3. Quick account setup
  4. Mail organized by conversations
  5. Fast keyword searching
  6. Full-featured HTML mail composer
  7. Desktop notifications support

Visit Homepage: https://wiki.gnome.org/Apps/Geary

5. Sylpheed Email Client

Sylpheed is a simple, lightweight, easy-to-use, cross-platform email client that is featureful; it can run on Linux, Windows, Mac OS X and other Unix-like operating systems.

Sylpheed Email Client for Linux

It offers an intuitive user interface with keyboard-oriented use. It works well for new and power users alike, with the following features:

  1. Simple, beautiful and easy-to-use interface
  2. Lightweight operations
  3. Pluggable
  4. Well organized, easy-to-understand configuration
  5. Junk mail control
  6. Support for various protocols
  7. Powerful searching and filtering functionalities
  8. Flexible cooperation with external commands
  9. Security features such as GnuPG and SSL/TLS
  10. High-level Japanese processing and many more

Visit Homepage: http://sylpheed.sraoss.jp/en/

6. Claws Mail Email Client

Claws Mail is a user-friendly, lightweight and fast email client based on GTK+ that also includes news reader functionality. It has a graceful and sophisticated user interface, supports keyboard-oriented operation similar to other email clients, and works well for new and power users alike.

Claws Mail Email Client for Linux

It has abundant features including the following:

  1. Highly pluggable
  2. Supports multiple email accounts
  3. Support for message filtering
  4. Color labels
  5. Highly extensible
  6. An external editor
  7. Line-wrapping
  8. Clickable URLs
  9. User-defined headers
  10. MIME attachments
  11. Managing messages in MH format offering fast access and data security
  12. Import and export emails from and to other email clients plus many others

Visit Homepage: http://www.claws-mail.org/

Whether you need some basic features or advanced functionalities, the email clients above will work just fine for you. There are many others out there that we have not looked at here which you might be using; you can let us know of them via the comment section below.

Source

MTR – A Network Diagnostic Tool for Linux

MTR is a simple, cross-platform command-line network diagnostic tool that combines the functionality of the commonly used traceroute and ping programs into a single tool. In a similar fashion as traceroute, mtr prints information about the route that packets take from the host on which mtr is run to a user-specified destination host.

Read Also: How to Audit Network Performance, Security and Troubleshoot in Linux

However, mtr shows a greater wealth of information than traceroute: it determines the pathway to a remote machine while printing the response percentage as well as the response times of all network hops in the internet route between the local system and the remote machine.

How Does MTR Work?

Once you run mtr, it probes the network connection between the local system and a remote host that you have specified. It first establishes the address of each network hop (bridges, routers, gateways etc.) between the hosts; it then pings (sends a sequence of ICMP ECHO requests to) each one to determine the quality of the link to each machine.

During the course of this operation, mtr outputs some useful statistics about each machine – updated in real-time, by default.

This tool comes pre-installed on most Linux distributions and is fairly easy to use once you go through the 10 mtr command examples for network diagnostics in Linux, explained below.

If mtr is not installed, you can install it on your respective Linux distribution using your default package manager as shown.

$ sudo apt install mtr
$ sudo yum install mtr
$ sudo dnf install mtr

10 MTR Network Diagnostics Tool Usage Examples

1. The simplest example of using mtr is to provide the domain name or IP address of the remote machine as an argument, for example google.com or 216.58.223.78. This command will show you a traceroute report updated in real-time, until you exit the program (by pressing q or Ctrl + C).

$ mtr google.com
OR
$ mtr 216.58.223.78

Start: Thu Jun 28 12:10:13 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.9   0.7   1.3   0.0
  3.|-- 209.snat-111-91-120.hns.n 80.0%     5    7.1   7.1   7.1   7.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.9   2.9   1.9   4.4   1.1
  5.|-- 108.170.248.161            0.0%     5    2.9   3.5   2.0   4.3   0.7
  6.|-- 216.239.62.237             0.0%     5    3.0   6.2   2.9  18.3   6.7
  7.|-- bom05s12-in-f14.1e100.net  0.0%     5    2.1   2.4   2.0   3.8   0.5

2. You can force mtr to display numeric IP addresses instead of host names (typically FQDNs – Fully Qualified Domain Names), using the -n flag as shown.

$ mtr -n google.com

Start: Thu Jun 28 12:12:58 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.9   0.9   0.8   1.1   0.0
  3.|-- ???                       100.0     5    0.0   0.0   0.0   0.0   0.0
  4.|-- 72.14.194.226              0.0%     5    2.0   2.0   1.9   2.0   0.0
  5.|-- 108.170.248.161            0.0%     5    2.3   2.3   2.2   2.4   0.0
  6.|-- 216.239.62.237             0.0%     5    3.0   3.2   3.0   3.3   0.0
  7.|-- 172.217.160.174            0.0%     5    3.7   3.6   2.0   5.3   1.4

3. If you would like mtr to display both host names as well as numeric IP numbers use the -b flag as shown.

$ mtr -b google.com

Start: Thu Jun 28 12:14:36 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.8   0.6   1.0   0.0
  3.|-- 209.snat-111-91-120.hns.n  0.0%     5    1.4   1.6   1.3   2.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.8   2.1   1.8   2.6   0.0
  5.|-- 108.170.248.209            0.0%     5    2.0   1.9   1.8   2.0   0.0
  6.|-- 216.239.56.115             0.0%     5    2.4   2.7   2.4   2.9   0.0
  7.|-- bom07s15-in-f14.1e100.net  0.0%     5    3.7   2.2   1.7   3.7   0.9

4. To limit the number of pings to a specific value and exit mtr after those pings, use the -c flag. If you observe from the Snt column, once the specified number of pings is reached, the live update stops and the program exits.

$ mtr -c5 google.com

5. You can set it into report mode using the -r flag, a useful option for producing statistics concerning network quality. You can use this option together with the -c option to specify the number of pings. Since the statistics are printed to standard output, you can redirect them to a file for later analysis.

$ mtr -r -c 5 google.com >mtr-report

The -w flag enables wide report mode for a clearer output.

$ mtr -rw -c 5 google.com >mtr-report

6. You can also re-arrange the output fields the way you wish; this is made possible by the -o flag as shown (see the mtr man page for the meaning of the field labels).

$ mtr -o "LSDR NBAW JMXI" 216.58.223.78

MTR Fields and Order

7. The default interval between ICMP ECHO requests is one second; you can specify a different interval by changing the value using the -i flag as shown.

$ mtr -i 2 google.com

8. You can use TCP SYN packets or UDP datagrams instead of the default ICMP ECHO requests as shown.

$ mtr --tcp test.com
OR
$ mtr --udp test.com 

9. To specify the maximum number of hops (default is 30) to be probed between the local system and the remote machine, use the -m flag.

$ mtr -m 35 216.58.223.78

10. While probing network quality, you can set the packet size used in bytes using the -s flag like so.

$ mtr -r -s PACKETSIZE -c 5 google.com >mtr-report
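For instance, to probe with 1000-byte packets (the size here is purely illustrative):

$ mtr -r -s 1000 -c 5 google.com >mtr-report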

With these examples, you should be good to go with using mtr; see the man page for more usage options.

$ man mtr 

Also check out these useful guides about Linux network configurations and troubleshooting:

  1. 13 Linux Network Configuration and Troubleshooting Commands
  2. How to Block Ping ICMP Requests to Linux Systems

That’s it for now! MTR is a simple, easy-to-use and above all cross-platform network diagnostics tool. In this guide, we have explained 10 mtr command examples in Linux. If you have any questions, or thoughts to share with us, use the comment form below.

Source

How to Backup or Clone Linux Partitions Using ‘cat’ Command

A rough utilization of the Linux cat command is to make a full disk backup or a disk partition backup, or to clone a disk partition, by redirecting the command output to another hard disk or USB stick partition or a local image file, or by writing the output to a network socket.

Linux Filesystem Backup Using ‘cat’ Command

It is absolutely normal of you to think of why we should use cat over dd when the latter does the same job easily, which is quite right; however, I recently realized that cat is much faster than dd when it comes to speed and performance.

I do agree that dd provides even more options, and is also very useful in dealing with large backups such as tape drives (How to Clone Linux Partitions Using ‘dd’ Command), whereas cat includes fewer options. It's not necessarily a worthy dd replacement, but it still remains an option wherever applicable.

Suggested Read: How to Clone or Backup Linux Disk Using Clonezilla

Trust me, it gets the job done quite successfully in copying the content of a partition to a new unformatted partition. The only requirements would be to provide a valid hard disk partition at least the size of the existing data, and with no filesystem on it whatsoever.

In the below example the first partition on the first hard disk, which corresponds to the /boot partition i.e. /dev/sda1, is cloned onto the first partition of the second disk (i.e. /dev/sdb1) using the Linux redirection operator.

# cat /dev/sda1 > /dev/sdb1

Full Disk Partition Backup in Linux

After the command finishes, the cloned partition is mounted to /mnt and both mount points directories are listed to check if any files are missing.

# mount /dev/sdb1 /mnt
# ls /mnt
# ls /boot

Verify Cloned Partition Files
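As mentioned in the introduction, the same redirection can also target a local image file, or even a remote machine over the network via SSH; a quick sketch of both, with hypothetical paths and host:

# cat /dev/sda1 > /backups/sda1.img
# cat /dev/sda1 | ssh root@192.168.0.20 'cat > /backups/sda1.img'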

In order to extend the partition's filesystem to the maximum size, issue the following command with root privileges.

Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

$ sudo resize2fs /dev/sdb1

Resize or Extend Partition Size in Linux
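Note that resize2fs may refuse to grow an unmounted ext filesystem until it has been checked; in that case, run the check first:

$ sudo e2fsck -f /dev/sdb1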

The cat command is an excellent tool to manipulate text files in Linux, and some special multimedia files, but it should be avoided for binary data files or for concatenating shebang files. For all other options, don't hesitate to execute man cat from the console.

$ man cat

Surprisingly, there is another command called tac. Yes, I am talking about tac, which is a reversed version of the cat command (and also cat spelled backwards) that displays the lines of a file in reverse order. Want to know more about tac? Read How to Use Tac Command in Linux.

Source

Useful Commands to Create Commandline Chat Server and Remove Unwanted Packages in Linux

Here we are with the next part of Linux Command Line Tips and Tricks. If you missed our previous post on Linux Tricks you may find it here.

  1. 5 Linux Command Line Tricks

In this post we will be introducing 6 command line tips, namely: create a Linux command line chat using the Netcat command, perform addition of a column on the fly from the output of a command, remove orphan packages from Debian and CentOS, get the local and remote IP from the command line, get colored output in the terminal and decode various color codes, and last but not least, hash tags implementation in the Linux command line. Let's check them one by one.

6 Useful Commandline Tricks and Tips

1. Create Linux Commandline Chat Server

We have all been using chat services for a long time and are familiar with Google Chat, Hangouts, Facebook chat, WhatsApp, Hike and several other applications and integrated chat services. Did you know that the Linux nc command can turn your Linux box into a chat server with just one line?

What is the nc command in Linux and what does it do?

nc is the abbreviation of the Linux netcat command. The nc utility is often referred to as a Swiss Army knife based upon the number of its built-in capabilities. It is used as a debugging and investigation tool, for reading from and writing to network connections using TCP or UDP, and for DNS forward/reverse checking.

It is prominently used for port scanning, file transfers, backdoors and port listening. nc has the ability to use any unused local port and any local network source address.

Use the nc command (on the server with IP address 192.168.0.7) to create a command line messaging server instantly.

$ nc -l -vv -p 11119

Explanation of the above command switches:

  1. -v : verbose
  2. -vv : even more verbose
  3. -p : the local port number

You may replace 11119 with any other local port number.

Next, on the client machine (IP address: 192.168.0.15), run the following command to initiate a chat session with the machine where the messaging server is running.

$ nc 192.168.0.7 11119

Linux Commandline Chat with nc Command

Note: You can terminate the chat session by hitting the ctrl+c keys; also note that nc chat is a one-to-one service.
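
A quick caveat: command switches differ between netcat implementations. The listener above matches the traditional netcat; on distributions shipping the OpenBSD variant of nc, the port is given without -p, along these lines (a sketch using the same port):

$ nc -lv 11119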

2. How to Sum Values in a Column in Linux

Let's see how to sum the numerical values of a column, generated as the output of a command, on the fly in the terminal.

The output of the ‘ls -l‘ command.

$ ls -l

Sum Numerical Values

Notice that the second column is numerical and represents the number of hard links, and that the 5th column is numerical and represents the size of the file. Say we need to sum the values of the fifth column on the fly.

First, list the content of the 5th column without printing anything else. We will be using the ‘awk‘ command to do this; ‘$5‘ represents the 5th column.

$ ls -l | awk '{print $5}'

List Content Column

Now use awk again to print the sum of the 5th column by piping the output to it.

$ ls -l | awk '{print $5}' | awk '{total = total + $1}END{print total}'

Sum and Print Columns
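
As a side note, the two awk invocations can be collapsed into one: this variant accumulates the 5th field of every line and prints the total at the end.

$ ls -l | awk '{sum += $5} END {print sum}'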

3. How to Remove Orphan Packages in Linux

Orphan packages are those packages that are installed as a dependency of another package and no longer required when the original package is removed.

Say we installed a package gtprogram which depends on gtdependency. We can’t install gtprogram unless gtdependency is installed.

When we remove gtprogram, it won’t remove gtdependency by default. And if we don’t remove gtdependency, it will remain as an orphan package with no connection to any other package.

# yum autoremove                [On RedHat Systems]

Remove Orphan Packages in CentOS

# apt-get autoremove                [On Debian Systems]

Remove Orphan Packages in Debian

You should always remove orphan packages to keep the Linux box loaded with just the necessary stuff and nothing else.
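
If you prefer to review the candidates before anything is removed, apt-get supports a simulation mode (yum also lists the packages and asks for confirmation before removing them):

# apt-get -s autoremove                [On Debian Systems]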

4. How to Get Local and Public IP Address of Linux Server

To get your local IP address, run the one-liner script below.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

You must have ifconfig installed; if not, install the required package with apt or yum. Here we will be piping the output of ifconfig to the grep command to find the string “inet addr:”.

We know the ifconfig command is sufficient to output the local IP address, but it generates lots of other output as well, and our concern here is to print only the local IP address and nothing else.

# ifconfig | grep "inet addr:"

Check Local IP Address

Although the output is more focused now, we still need to filter out our local IP address only and nothing else. For this we will use awk to print the second column only, by piping it to the above command.

# ifconfig | grep "inet addr:" | awk '{print $2}'

Filter Only IP Address

It is clear from the above image that we have customised the output quite a bit, but it is still not what we want: the loopback address 127.0.0.1 is still in the result.

We can use the -v flag with grep, which prints only those lines that don’t match the provided pattern. Every machine has the same loopback address 127.0.0.1, so use grep -v to drop the lines containing this string, by piping it to the above output.

# ifconfig | grep "inet addr" | awk '{print $2}' | grep -v '127.0.0.1'

Print IP Address

We have almost generated the desired output; we just need to strip the string (addr:) from the beginning. We will use the cut command to print only the second column. Columns 1 and 2 are not separated by a tab but by a colon (:), so we need to pass that delimiter with -d, piping the above output into cut.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

Customized IP Address

Finally! The desired result has been generated.
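
On newer distributions where ifconfig is no longer installed by default, a similar result can be obtained with the ip command; a minimal sketch (the field positions assume the one-line-per-address output of ip -o):

$ ip -4 -o addr show | awk '{print $4}' | cut -d/ -f1 | grep -v '^127'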

5. How to Color Linux Terminal

You might have seen colored output in the terminal, and you may already know how to enable or disable it. If not, you may follow the steps below.

In Linux, every user has a '.bashrc' file; this file is used to configure your shell session, including colored output. Open and edit this file with the editor of your choice. Note that this file is hidden (a dot at the beginning of a file name means hidden).

$ vi /home/$USER/.bashrc

Make sure that the following lines are uncommented, i.e., that they don’t start with a #.

if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

User .bashrc File

Once done, save and exit. To make the changes take effect, log out and log in again (or run source ~/.bashrc).

Now you will see files and folders listed in various colors based upon the type of file. To decode the color codes, run the command below.

$ dircolors -p

Since the output is too long, let’s pipe it to the less command so that we get the output one screen at a time.

$ dircolors -p | less

Linux Color Output
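
The codes printed by dircolors end up in the LS_COLORS environment variable, so you can experiment by overriding it for a single command; a quick sketch that shows directories in bold red (color code 1;31):

$ LS_COLORS='di=1;31' ls --color=auto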

6. How to Hash Tag Linux Commands and Scripts

We use hash tags on Twitter, Facebook and Google Plus (and maybe some other places I have not noticed). These hash tags make it easy for others to search for a topic. Very few know that we can use hash tags in the Linux command line as well.

We already know that # in configuration files and in most programming languages marks a comment line, which is excluded from execution.

Run a command and then append a hash tag to it so that we can find it later. Say we have the long one-liner that was executed in point 4 above; let’s create a hash tag for it. We know ifconfig can be run by sudo or the root user, hence we are acting as root here.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d: #myip

The command above has been hash tagged with ‘myip‘. Now search for the hash tag with reverse-i-search (press ctrl+r) in the terminal and type ‘myip‘. You may execute it from there as well.

Create Command Hash Tags

You may create as many hash tags as you like for your commands and find them later using reverse-i-search.
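
Alternatively, you can grep your shell history for the tag directly:

$ history | grep '#myip'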

That’s all for now. We have been working hard to produce interesting and informative content for you. How do you think we are doing? Any suggestion is welcome; you may comment in the box below. Stay connected! Kudos.

