10 Top Open Source Artificial Intelligence Tools for Linux

In this post, we shall cover a few of the top open-source artificial intelligence (AI) tools for the Linux ecosystem. AI is currently one of the most rapidly advancing fields in science and technology, with a major focus on building software and hardware to solve everyday challenges in areas such as health care, education, security, manufacturing, banking and many more.

Suggested Read: 20 Free Open Source Softwares I Found in Year 2015

Below is a list of platforms designed and developed to support AI that you can utilize on Linux, and possibly on many other operating systems. Remember that this list is not arranged in any particular order.

1. Deep Learning For Java (Deeplearning4j)

Deeplearning4j is a commercial-grade, open-source, plug-and-play, distributed deep-learning library for the Java and Scala programming languages. It is designed specifically for business-related applications and integrates with Hadoop and Spark on top of distributed CPUs and GPUs.

DL4J is released under the Apache 2.0 license, provides GPU support for scaling on AWS, and is adapted for micro-service architectures.

Deeplearning4j – Deep Learning for Java

Visit Homepage: http://deeplearning4j.org/

2. Caffe – Deep Learning Framework

Caffe is a modular and expressive deep learning framework built with speed in mind. It is released under the BSD 2-Clause license and already supports several community projects in research, startup prototypes and industrial applications in fields such as vision, speech and multimedia.

Caffe – Deep Learning Framework

Visit Homepage: http://caffe.berkeleyvision.org/

3. H2O – Distributed Machine Learning Framework

H2O is an open-source, fast, scalable and distributed machine learning framework that ships with an assortment of algorithms. It supports smarter applications such as deep learning, gradient boosting, random forests, generalized linear modeling (i.e. logistic regression, Elastic Net) and many more.

It is a business-oriented artificial intelligence tool for making decisions from data; it enables users to draw insights from their data using faster and better predictive modeling.

H2O – Distributed Machine Learning Framework

Visit Homepage: http://www.h2o.ai/

4. MLlib – Machine Learning Library

MLlib is an open-source, easy-to-use and high-performance machine learning library developed as part of Apache Spark. It is easy to deploy and can run on existing Hadoop clusters and data.

Suggested Read: 12 Best Open Source Text Editors (GUI + CLI) I Found in 2015

MLlib also ships with a collection of algorithms for classification, regression, recommendation, clustering, survival analysis and much more. Importantly, it can be used from the Python, Java, Scala and R programming languages.

MLlib – Machine Learning Library

Visit Homepage: https://spark.apache.org/mllib/

5. Apache Mahout

Mahout is an open-source framework designed for building scalable machine learning applications. It has three prominent features, listed below:

  1. Provides a simple and extensible programming environment
  2. Offers a variety of prepackaged algorithms for Scala + Apache Spark, H2O as well as Apache Flink
  3. Includes Samsara, a vector math experimentation environment with R-like syntax

Apache Mahout

Visit Homepage: http://mahout.apache.org/

6. Open Neural Networks Library (OpenNN)

OpenNN is an open-source class library written in C++ for deep learning; it is used to implement neural networks. However, it is best suited to experienced C++ programmers and people with strong machine learning skills. It is characterized by a deep architecture and high performance.

OpenNN – Open Neural Networks Library

Visit Homepage: http://www.opennn.net/

7. Oryx 2

Oryx 2 is a continuation of the initial Oryx project. It is built on Apache Spark and Apache Kafka as a re-architecting of the lambda architecture, dedicated to achieving real-time machine learning.

It is a platform for application development and also ships with certain applications for collaborative filtering, classification, regression and clustering purposes.

Oryx2 – Re-architecting Lambda Architecture

Visit Homepage: http://oryx.io/

8. OpenCyc

OpenCyc is an open-source portal to the world’s largest and most comprehensive general knowledge base and commonsense reasoning engine. It includes a large number of Cyc terms arranged in a precisely designed ontology for application in areas such as:

  1. Rich domain modeling
  2. Domain-specific expert systems
  3. Text understanding
  4. Semantic data integration as well as AI games plus many more.

OpenCyc

Visit Homepage: http://www.cyc.com/platform/opencyc/

9. Apache SystemML

SystemML is an open-source artificial intelligence platform for machine learning, ideal for big data. Its main features are that it supports R- and Python-like syntax, focuses on big data, and is designed specifically for high-level math. How it works is well explained on the homepage, including a video demonstration for clear illustration.

Suggested Read: 18 Best IDEs for C/C++ Programming or Source Code Editors on Linux

There are several ways to use it including Apache Spark, Apache Hadoop, Jupyter and Apache Zeppelin. Some of its notable use cases include automotives, airport traffic and social banking.

Apache SystemML – Machine Learning Platform

Visit Homepage: http://systemml.apache.org/

10. NuPIC

NuPIC is an open-source framework for machine learning based on Hierarchical Temporal Memory (HTM), a theory of the neocortex. The HTM program integrated in NuPIC is implemented for analyzing real-time streaming data, where it learns time-based patterns existing in data, predicts imminent values and reveals any irregularities.

Its notable features include:

  1. Continuous online learning
  2. Temporal and spatial patterns
  3. Real-time streaming data
  4. Prediction and modeling
  5. Powerful anomaly detection
  6. Hierarchical temporal memory

NuPIC Machine Intelligence

Visit Homepage: http://numenta.org/

With the rise of and ever-advancing research in AI, we are bound to witness more tools spring up to help make this area of technology a success, especially for solving daily scientific challenges as well as for educational purposes.

Are you interested in AI? What is your say? Offer us your thoughts, suggestions or any productive feedback about the subject matter via the comment section below, and we shall be delighted to hear more from you.

Source

6 Best Email Clients for Linux Systems

Email is an old way of communication, yet it still remains one of the most basic and important methods of sharing information to date. The way we access email, however, has changed over the years: moving away from web applications, many people now prefer to use email clients more than ever before.

6 Best Linux Email Clients

An email client is a piece of software that enables a user to manage their inbox, sending, receiving and organizing messages, simply from a desktop or a mobile phone.

Email clients have many advantages; they have become more than just utilities for sending and receiving messages and are now powerful information management tools.

Don’t Miss: 4 Best Terminal Email Clients For Linux

In this particular case, we shall focus on desktop email clients that allow you to manage your email messages from your Linux desktop without the hassle of having to sign in and out, as is the case with web email service providers.

There are several native email clients for Linux desktops but we shall look at some of the best that you can use.

1. Thunderbird Email Client

Thunderbird is an open-source email client developed by Mozilla. It is cross-platform and has some great attributes, offering users speed, privacy and the latest technologies for accessing email services.

Thunderbird Email Client for Linux

Thunderbird has been around for a long time and, though it is becoming less popular, it still remains one of the best email clients on Linux desktops.

It is rich in features, such as:

  1. Enables users to have personalized email addresses
  2. A one click address book
  3. An attachment reminder
  4. Multiple-channel chat
  5. Tabs and search
  6. Enables searching the web
  7. A quick filter toolbar
  8. Message archive
  9. Activity manager
  10. Large files management
  11. Security features such as phishing protection, no tracking
  12. Automated updates plus many more

Visit Homepage: https://www.mozilla.org/en-US/thunderbird/

2. Evolution Email Client

Evolution is not just an email client but an information management tool that offers an integrated email client together with calendar and address book functionality.

Evolution Email Client for Linux

It offers some of the basic email management functionalities plus advanced features including the following:

  1. Account management
  2. Changing mail window layout
  3. Deleting and undeleting messages
  4. Sorting and organizing mails
  5. Shortcut keys functionalities for reading mails
  6. Mail encryption and certificates
  7. Sending invitations by mail
  8. Autocompletion of email addresses
  9. Message forwarding
  10. Spell checking
  11. Working with email signatures
  12. Working offline plus many others

Visit Homepage: https://wiki.gnome.org/Apps/Evolution

3. KMail Email Client

KMail is the email component of Kontact, KDE’s unified personal information manager.

KMail Email Client for Linux

KMail has many features like the other email clients we have looked at above, including:

  1. Supports standard mail protocols such as SMTP, IMAP and POP3
  2. Supports plain text and secure logins
  3. Reading and writing HTML mail
  4. Integration of international character set
  5. Integration with spam checkers such as Bogofilter, SpamAssassin plus many more
  6. Support for receiving and accepting invitations
  7. Powerful search and filter capabilities
  8. Spell checking
  9. Encrypted passwords saving in KWallet
  10. Backup support
  11. Fully integrated with other Kontact components plus many more

Visit Homepage: https://userbase.kde.org/KMail

4. Geary Email Client

Geary is a simple and easy-to-use email client built with a modern interface for the GNOME 3 desktop. If you are looking for a simple and efficient email client that offers the basic functionalities, then Geary can be a good choice for you.

Geary Email Client for Linux

It has the following features:

  1. Supports common email service providers such as Gmail, Yahoo! Mail, plus many popular IMAP servers
  2. Simple, modern and straight forward interface
  3. Quick account setup
  4. Mail organized by conversations
  5. Fast keyword searching
  6. Full-featured HTML mail composer
  7. Desktop notifications support

Visit Homepage: https://wiki.gnome.org/Apps/Geary

5. Sylpheed Email Client

Sylpheed is a simple, lightweight, easy-to-use, cross-platform and feature-rich email client; it can run on Linux, Windows, Mac OS X and other Unix-like operating systems.

Sylpheed Email Client for Linux

It offers an intuitive user interface with keyboard-oriented operation. It works well for both new and power users, with the following features:

  1. Simple, beautiful and easy-to-use interface
  2. Lightweight operations
  3. Pluggable
  4. Well organized, easy-to-understand configuration
  5. Junk mail control
  6. Support for various protocols
  7. Powerful searching and filtering functionalities
  8. Flexible cooperation with external commands
  9. Security features such as GnuPG, SSL/TLS
  10. High-level Japanese processing and many more

Visit Homepage: http://sylpheed.sraoss.jp/en/

6. Claws Mail Email Client

Claws Mail is a user-friendly, lightweight and fast email client based on GTK+ that also includes news reader functionality. It has a graceful and sophisticated user interface, supports keyboard-oriented operation similar to the other email clients, and works well for new and power users alike.

Claws Mail Email Client for Linux

It has abundant features including the following:

  1. Highly pluggable
  2. Supports multiple email accounts
  3. Support for message filtering
  4. Color labels
  5. Highly extensible
  6. An external editor
  7. Line-wrapping
  8. Clickable URLs
  9. User-defined headers
  10. Mime attachments
  11. Managing messages in MH format offering fast access and data security
  12. Import and export emails from and to other email clients plus many others

Visit Homepage: http://www.claws-mail.org/

Whether you need basic features or advanced functionality, the email clients above will serve you well. There are many others out there that we have not covered here which you might be using; let us know about them via the comment section below.

Source

MTR – A Network Diagnostic Tool for Linux

MTR is a simple, cross-platform, command-line network diagnostic tool that combines the functionality of the commonly used traceroute and ping programs into a single tool. In a similar fashion to traceroute, mtr prints information about the route that packets take from the host on which mtr is run to a user-specified destination host.

Read Also: How to Audit Network Performance, Security and Troubleshoot in Linux

However, mtr shows a wealth of information compared to traceroute: it determines the pathway to a remote machine while printing the response percentage as well as the response times of all network hops in the internet route between the local system and the remote machine.

How Does MTR Work?

Once you run mtr, it probes the network connection between the local system and the remote host you have specified. It first establishes the address of each network hop (bridges, routers, gateways, etc.) between the hosts, then pings each one (sends a sequence of ICMP ECHO requests) to determine the quality of the link to each machine.

During the course of this operation, mtr outputs some useful statistics about each machine – updated in real-time, by default.

This tool comes pre-installed on most Linux distributions and is fairly easy to use once you go through the 10 mtr command examples for network diagnostics in Linux, explained below.

If mtr is not installed, you can install it on your respective Linux distribution using your default package manager as shown.

$ sudo apt install mtr         [On Debian/Ubuntu]
$ sudo yum install mtr         [On CentOS/RHEL]
$ sudo dnf install mtr         [On Fedora]

10 MTR Network Diagnostics Tool Usage Examples

1. The simplest example of using mtr is to provide the domain name or IP address of the remote machine as an argument, for example google.com or 216.58.223.78. This command will show you a traceroute report updated in real-time, until you exit the program (by pressing q or Ctrl + C).

$ mtr google.com
OR
$ mtr 216.58.223.78

Start: Thu Jun 28 12:10:13 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.9   0.7   1.3   0.0
  3.|-- 209.snat-111-91-120.hns.n 80.0%     5    7.1   7.1   7.1   7.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.9   2.9   1.9   4.4   1.1
  5.|-- 108.170.248.161            0.0%     5    2.9   3.5   2.0   4.3   0.7
  6.|-- 216.239.62.237             0.0%     5    3.0   6.2   2.9  18.3   6.7
  7.|-- bom05s12-in-f14.1e100.net  0.0%     5    2.1   2.4   2.0   3.8   0.5

2. You can force mtr to display numeric IP addresses instead of host names (typically FQDNs – Fully Qualified Domain Names), using the -n flag as shown.

$ mtr -n google.com

Start: Thu Jun 28 12:12:58 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.9   0.9   0.8   1.1   0.0
  3.|-- ???                       100.0     5    0.0   0.0   0.0   0.0   0.0
  4.|-- 72.14.194.226              0.0%     5    2.0   2.0   1.9   2.0   0.0
  5.|-- 108.170.248.161            0.0%     5    2.3   2.3   2.2   2.4   0.0
  6.|-- 216.239.62.237             0.0%     5    3.0   3.2   3.0   3.3   0.0
  7.|-- 172.217.160.174            0.0%     5    3.7   3.6   2.0   5.3   1.4

3. If you would like mtr to display both host names and numeric IP addresses, use the -b flag as shown.

$ mtr -b google.com

Start: Thu Jun 28 12:14:36 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.8   0.6   1.0   0.0
  3.|-- 209.snat-111-91-120.hns.n  0.0%     5    1.4   1.6   1.3   2.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.8   2.1   1.8   2.6   0.0
  5.|-- 108.170.248.209            0.0%     5    2.0   1.9   1.8   2.0   0.0
  6.|-- 216.239.56.115             0.0%     5    2.4   2.7   2.4   2.9   0.0
  7.|-- bom07s15-in-f14.1e100.net  0.0%     5    3.7   2.2   1.7   3.7   0.9

4. To limit the number of pings to a specific value and exit mtr after those pings, use the -c flag. If you observe the Snt column, once the specified number of pings is reached, the live update stops and the program exits.

$ mtr -c5 google.com

5. You can set mtr into report mode using the -r flag, a useful option for producing statistics about network quality. You can use this option together with the -c option to specify the number of pings. Since the statistics are printed to standard output, you can redirect them to a file for later analysis.

$ mtr -r -c 5 google.com >mtr-report

The -w flag enables wide report mode for a clearer output.

$ mtr -rw -c 5 google.com >mtr-report
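Because report mode writes plain text, the columns are easy to post-process with standard tools. The sketch below uses a here-document that mimics the report format shown earlier (the file name and sample hops are hypothetical), then extracts the host and Avg columns with awk:

```shell
# Build a stand-in for a saved 'mtr -rw -c 5' report, then pull out
# the host (field 2) and average latency (field 6) from each hop line.
cat > mtr-report <<'EOF'
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 72.14.194.226              0.0%     5    1.9   2.9   1.9   4.4   1.1
EOF
awk 'index($0, "|--") { print $2, $6 }' mtr-report
```

The `index($0, "|--")` condition matches only hop lines, skipping the header.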

6. You can also re-arrange the output fields the way you wish; this is made possible by the -o flag as shown (see the mtr man page for the meaning of the field labels).

$ mtr -o "LSDR NBAW JMXI" 216.58.223.78

MTR Fields and Order

7. The default interval between ICMP ECHO requests is one second; you can change it using the -i flag as shown.

$ mtr -i 2 google.com

8. You can use TCP SYN packets or UDP datagrams instead of the default ICMP ECHO requests as shown.

$ mtr --tcp test.com
OR
$ mtr --udp test.com 

9. To specify the maximum number of hops (default is 30) to be probed between the local system and the remote machine, use the -m flag.

$ mtr -m 35 216.58.223.78

10. While probing network quality, you can set the packet size used in bytes using the -s flag like so.

$ mtr -r -s PACKETSIZE -c 5 google.com >mtr-report

With these examples, you should be good to go with mtr; see the man page for more usage options.

$ man mtr 

Also check out these useful guides about Linux network configurations and troubleshooting:

  1. 13 Linux Network Configuration and Troubleshooting Commands
  2. How to Block Ping ICMP Requests to Linux Systems

That’s it for now! MTR is a simple, easy-to-use and above all cross-platform network diagnostics tool. In this guide, we have explained 10 mtr command examples in Linux. If you have any questions, or thoughts to share with us, use the comment form below.

Source

How to Backup or Clone Linux Partitions Using ‘cat’ Command

A rough use of the Linux cat command is to make a full disk backup, a disk partition backup, or a clone of a disk partition, by redirecting the command output to another hard disk partition, a USB stick or a local image file, or by writing the output to a network socket.

Linux Filesystem Backup Using ‘cat’ Command

It is absolutely normal to wonder why we should use cat over dd when the latter does the same job easily, which is quite right; however, I recently realized that cat can be much faster than dd when it comes to speed and performance.

I do agree that dd provides even more options and is also very useful in dealing with large backups such as tape drives (How to Clone Linux Partitions Using ‘dd’ Command), whereas cat offers fewer options. It is not necessarily a worthy dd replacement, but it still remains an option wherever applicable.

Suggested Read: How to Clone or Backup Linux Disk Using Clonezilla

Trust me, it gets the job done quite successfully in copying the content of a partition to a new, unformatted partition. The only requirements are a valid target hard disk partition at least as large as the existing data, and with no filesystem whatsoever.

In the below example the first partition on the first hard disk, which corresponds to the /boot partition i.e. /dev/sda1, is cloned onto the first partition of the second disk (i.e. /dev/sdb1) using the Linux redirection operator.

# cat /dev/sda1 > /dev/sdb1

Full Disk Partition Backup in Linux
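If you want to see the redirection at work without touching real partitions, ordinary files behave exactly the same way under cat; a minimal sketch (the file names here are hypothetical stand-ins for the block devices):

```shell
# Create a sample "partition" image and clone it with cat, using the
# same redirection as the partition example, then verify byte for byte.
printf 'boot data\n' > source.img    # stand-in for /dev/sda1
cat source.img > clone.img           # stand-in for /dev/sdb1
cmp -s source.img clone.img && echo "copies are identical"
```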

After the command finishes, the cloned partition is mounted at /mnt and the contents of both mount point directories are listed to check whether any files are missing.

# mount /dev/sdb1 /mnt
# ls /mnt
# ls /boot

Verify Cloned Partition Files

In order to extend the partition’s file system to the maximum size, issue the following command with root privileges.

Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

$ sudo resize2fs /dev/sdb1

Resize or Extend Partition Size in Linux

The cat command is an excellent tool to manipulate text files in Linux and some special multimedia files, but it should be avoided for binary data files and for concatenating files with shebang lines. For all other options, don’t hesitate to execute man cat from the console.

$ man cat

Surprisingly, there is another command called tac. Yes, I am talking about tac, the reverse version of the cat command (and cat spelled backwards), which displays each line of a file in reverse order. Want to know more about tac? Read How to Use Tac Command in Linux.

Source

Useful Commands to Create Commandline Chat Server and Remove Unwanted Packages in Linux

Here we are with the next part of Linux Command Line Tips and Tricks. If you missed our previous post on Linux Tricks, you may find it here.

  1. 5 Linux Command Line Tricks

In this post we will be introducing 6 command-line tips: creating a Linux command-line chat using the Netcat command, summing a column on the fly from the output of a command, removing orphan packages from Debian and CentOS, getting local and remote IP addresses from the command line, getting colored output in the terminal and decoding the various color codes, and, last but not least, using hash tags on the Linux command line. Let’s check them one by one.

6 Useful Commandline Tricks and Tips

1. Create Linux Commandline Chat Server

We all have been using chat services for a long time. We are familiar with Google chat, Hangouts, Facebook chat, WhatsApp, Hike and several other applications and integrated chat services. Did you know that the Linux nc command can turn your Linux box into a chat server with just one line?

What is the nc command in Linux and what does it do?

nc is the abbreviation of the Linux netcat command. The nc utility is often referred to as the Swiss Army knife, based on the number of its built-in capabilities. It is used as a debugging tool, an investigation tool, for reading and writing to network connections using TCP/UDP, and for DNS forward/reverse checking.

It is prominently used for port scanning, file transfer, backdoors and port listening. nc can use any unused local port and any local network source address.

Use nc command (On Server with IP address: 192.168.0.7) to create a command line messaging server instantly.

$ nc -l -vv -p 11119

Explanation of the above command switches.

  1. -v : means Verbose
  2. -vv : more verbose
  3. -p : The local port Number

You may replace 11119 with any other local port number.

Next, on the client machine (IP address: 192.168.0.15), run the following command to initialize a chat session with the machine where the messaging server is running.

$ nc 192.168.0.7 11119

Linux Commandline Chat with nc Command

Note: You can terminate the chat session by hitting ctrl+c; also, nc chat is a one-to-one service.
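If you just want to see the one-to-one channel idea without a network, a named pipe (FIFO) gives a minimal local sketch (the pipe path here is hypothetical):

```shell
# A FIFO behaves like a one-to-one chat channel on the local machine:
# one side writes, the other reads, and the message flows in order.
mkfifo /tmp/chatpipe
echo "hello from the server side" > /tmp/chatpipe &   # writer waits for a reader
cat /tmp/chatpipe                                     # reader prints the message
wait
rm /tmp/chatpipe
```

Like nc chat, it is strictly one writer to one reader at a time.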

2. How to Sum Values in a Column in Linux

Here is how to sum the numerical values of a column, generated as the output of a command, on the fly in the terminal.

The output of the ‘ls -l‘ command.

$ ls -l

Sum Numerical Values

Notice that the second column is numerical, representing the number of symbolic links, and the 5th column is numerical, representing the size of the file. Say we need to sum the values of the fifth column on the fly.

List the contents of the 5th column without printing anything else. We will use the ‘awk‘ command to do this; ‘$5‘ represents the 5th column.

$ ls -l | awk '{print $5}'

List Content Column

Now use awk to print the sum of the 5th column by piping the previous output into it.

$ ls -l | awk '{print $5}' | awk '{total = total + $1}END{print total}'

Sum and Print Columns
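The two awk stages above can also be collapsed into a single pass, which avoids the second process entirely:

```shell
# Sum the 5th column (file size) in one awk invocation:
# accumulate field 5 on every line, print the total at the end.
ls -l | awk '{ total += $5 } END { print total }'
```

The header line that `ls -l` prints ("total N") has no 5th field, so it simply adds zero.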

3. How to Remove Orphan Packages in Linux

Orphan packages are those packages that are installed as a dependency of another package and no longer required when the original package is removed.

Say we installed a package gtprogram which depended on gtdependency. We can’t install gtprogram unless gtdependency is installed.

When we remove gtprogram, it won’t remove gtdependency by default. And if we don’t remove gtdependency, it will remain as an orphan package with no connection to any other package.

# yum autoremove                [On RedHat Systems]

Remove Orphan Packages in CentOS

# apt-get autoremove                [On Debian Systems]

Remove Orphan Packages in Debian

You should always remove Orphan Packages to keep the Linux box loaded with just necessary stuff and nothing else.

4. How to Get Local and Public IP Address of Linux Server

To get your local IP address, run the one-liner script below.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

You must have ifconfig installed; if not, install the required package with apt or yum. Here we pipe the output of ifconfig into the grep command to find the string “inet addr:”.

We know the ifconfig command is sufficient to output the local IP address, but it generates lots of other output, and our concern here is to print only the local IP address and nothing else.

# ifconfig | grep "inet addr:"

Check Local IP Address

Although the output is more focused now, we still need to filter out our local IP address only and nothing else. For this we will use awk to print the second column only, by piping it into the above script.

# ifconfig | grep “inet addr:” | awk '{print $2}'

Filter Only IP Address

It is clear from the above image that we have customized the output very much, but it is still not what we want: the loopback address 127.0.0.1 is still in the result.

We use the -v flag with grep, which prints only those lines that don’t match the pattern provided as an argument. Every machine has the same loopback address 127.0.0.1, so use grep -v to drop lines containing this string, by piping it with the above output.

# ifconfig | grep "inet addr" | awk '{print $2}' | grep -v '127.0.0.1'

Print IP Address

We have almost generated the desired output; we just need to strip the string (addr:) from the beginning. We will use the cut command to print only the second field. Fields 1 and 2 are not separated by a tab but by (:), so we need to specify the delimiter (-d) while piping the above output.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

Customized IP Address

Finally! The desired result has been generated.
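The same pipeline can be followed on a captured sample of ifconfig output, which makes the text processing easy to study on machines where ifconfig is absent (on modern systems it lives in the net-tools package, and `ip -4 addr show` provides similar data). The interface values below are hypothetical:

```shell
# Run the exact pipeline from above against a saved sample of
# 'ifconfig' output instead of a live interface.
sample='inet addr:192.168.0.15  Bcast:192.168.0.255  Mask:255.255.255.0
inet addr:127.0.0.1  Mask:255.0.0.0'
echo "$sample" | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:
```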

5. How to Color Linux Terminal

You might have seen colored output in the terminal, and you may already know how to enable or disable it. If not, you may follow the steps below.

In Linux, every user has a '.bashrc' file; this file is used to configure your terminal output. Open and edit this file with the editor of your choice. Note that this file is hidden (a dot at the beginning of a file name means hidden).

$ vi /home/$USER/.bashrc

Make sure that the lines below are uncommented, i.e., that they don’t start with a #.

if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

User .bashrc File

Once done, save and exit. To make the changes take effect, log out and then log in again.

Now you will see files and folders listed in various colors based on file type. To decode the color codes, run the command below.

$ dircolors -p

Since the output is too long, let’s pipe the output into the less command so that we get it one screen at a time.

$ dircolors -p | less

Linux Color Output
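The codes dircolors prints are ordinary ANSI escape sequences, so you can render one directly to see what a given code looks like; for example 01;34, the bold-blue code commonly assigned to directories:

```shell
# Wrap a sample string in the 01;34 (bold blue) escape sequence,
# then reset all attributes with 0m.
printf '\033[01;34mexample-directory\033[0m\n'
```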

6. How to Hash Tag Linux Commands and Scripts

We use hash tags on Twitter, Facebook and Google Plus (and maybe some other places I have not noticed). These hash tags make it easy for others to search for a topic. Very few know that we can use hash tags on the Linux command line.

We already know that # in configuration files and most programming languages is treated as a comment and is excluded from execution.

Run a command and then create a hash tag for it so that we can find it later. Say we have the long script that was executed in point 4 above. Now create a hash tag for it. We know ifconfig can be run by sudo or the root user, hence we act as root.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d: #myip

The script above has been hash tagged with ‘myip‘. Now search for the hash tag in reverse-i-search (press ctrl+r) in the terminal and type ‘myip‘. You may execute it from there as well.

Create Command Hash Tags

You may create as many hash tags for every command and find it later using reverse-i-search.
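Since the tag is just a trailing comment, plain grep can also recover tagged commands from the shell’s history file. A self-contained sketch (a stand-in file is used here instead of the real ~/.bash_history):

```shell
# Tagged commands end in '#tag', so grep pulls them out of any
# command log; 'history.sample' is a hypothetical stand-in file.
printf '%s\n' 'ls -l' 'ifconfig | grep "inet addr:" #myip' 'pwd' > history.sample
grep '#myip' history.sample
```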

That’s all for now. We have been working hard to produce interesting and informative content for you. What do you think, how are we doing? Any suggestion is welcome. You may comment in the box below. Keep connected! Kudos.

Source

Exploring /proc File System in Linux

Today, we are going to take a look inside the /proc directory and develop a familiarity with it. The /proc directory is present on all Linux systems, regardless of flavor or architecture.

One misconception that we have to immediately clear up is that the /proc directory is NOT a real file system, in the usual sense of the term. It is a virtual file system. Contained within procfs is information about processes and other system information. It is mapped to /proc and mounted at boot time.

Exploring /proc File System

First, let’s get into the /proc directory and have a look around:

# cd /proc

The first thing you will notice is that there are some familiar-sounding files, and then a whole bunch of numbered directories. The numbered directories represent processes, better known as PIDs, and within each of them is information about the command that occupies it. The files contain system information such as memory (meminfo), CPU information (cpuinfo), and available filesystems.
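On a Linux system you can inspect the per-PID layout without hunting for a number: the /proc/self symlink always points at the directory of the process reading it. A quick sketch:

```shell
# /proc/self resolves to the numbered directory of the current process,
# so 'cat' here reports its own command name.
ls /proc/self/ | head -5    # same entries every PID directory has
cat /proc/self/comm         # prints: cat
```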

Read Also:  Linux Free Command to Check Physical Memory and Swap Memory

Let’s take a look at one of the files first:

# cat /proc/meminfo
Sample Output

MemTotal:         604340 kB
MemFree:           54240 kB
Buffers:           18700 kB
Cached:           369020 kB
SwapCached:            0 kB
Active:           312556 kB
Inactive:         164856 kB
Active(anon):      89744 kB
Inactive(anon):      360 kB
Active(file):     222812 kB
Inactive(file):   164496 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         89724 kB
Mapped:            18012 kB
Shmem:               412 kB
Slab:              50104 kB
SReclaimable:      40224 kB
...

As you can see, /proc/meminfo contains a bunch of information about your system’s memory, including the total amount available (in kB) and the amount free, on the top two lines.
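Since these are plain text files, standard tools can pull individual fields out of them; for instance, a quick sketch that computes the percentage of memory currently free from the two fields just mentioned:

```shell
# Pull MemTotal and MemFree (in kB) out of /proc/meminfo with awk,
# then print the percentage of memory that is currently free.
awk '/^MemTotal:/ {total=$2} /^MemFree:/ {free=$2}
     END {printf "%d kB of %d kB free (%.1f%%)\n", free, total, free*100/total}' /proc/meminfo
```

Tools like free and top gather their numbers from exactly this file.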

Running the cat command on any of the files in /proc will output its contents. Information about all these files is documented in the proc man page, which you can open by running:

# man 5 proc

Here is a quick rundown of /proc’s files:

  1. /proc/cmdline – Kernel command line information.
  2. /proc/console – Information about current consoles including tty.
  3. /proc/devices – Device drivers currently configured for the running kernel.
  4. /proc/dma – Info about current DMA channels.
  5. /proc/fb – Framebuffer devices.
  6. /proc/filesystems – Current filesystems supported by the kernel.
  7. /proc/iomem – Current system memory map for devices.
  8. /proc/ioports – Registered port regions for input output communication with device.
  9. /proc/loadavg – System load average.
  10. /proc/locks – Files currently locked by kernel.
  11. /proc/meminfo – Info about system memory (see above example).
  12. /proc/misc – Miscellaneous drivers registered for miscellaneous major device.
  13. /proc/modules – Currently loaded kernel modules.
  14. /proc/mounts – List of all mounts in use by system.
  15. /proc/partitions – Detailed info about partitions available to the system.
  16. /proc/pci – Information about every PCI device.
  17. /proc/stat – Record of various statistics kept since last reboot.
  18. /proc/swaps – Information about swap space.
  19. /proc/uptime – Uptime information (in seconds).
  20. /proc/version – Kernel version, gcc version, and Linux distribution installed.
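As a quick exercise, you can sample a few of the entries above directly; these are read-only virtual files, so cat is always safe:

```shell
# Each of these prints live kernel data rather than on-disk file contents.
cat /proc/loadavg    # 1, 5 and 15 minute load averages
cat /proc/uptime     # uptime and idle time, in seconds
cat /proc/version    # kernel version and build information
```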

Within /proc’s numbered directories you will find a few files and links. Remember that these directories’ numbers correlate to the PID of the command being run within them. Let’s use an example. On my system, there is a directory named /proc/12:

# cd /proc/12
# ls
Sample Output
attr        coredump_filter  io         mounts      oom_score_adj  smaps    wchan
autogroup   cpuset           latency    mountstats  pagemap        stack
auxv        cwd              limits     net         personality    stat
cgroup      environ          loginuid   ns          root           statm
clear_refs  exe              maps       numa_maps   sched          status
cmdline     fd               mem        oom_adj     schedstat      syscall
comm        fdinfo           mountinfo  oom_score   sessionid      task

If I run:

# cat /proc/12/status

I get the following:

Name:	xenwatch
State:	S (sleeping)
Tgid:	12
Pid:	12
PPid:	2
TracerPid:	0
Uid:	0	0	0	0
Gid:	0	0	0	0
FDSize:	64
Groups:
Threads:	1
SigQ:	1/4592
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	ffffffffffffffff
SigCgt:	0000000000000000
CapInh:	0000000000000000
CapPrm:	ffffffffffffffff
CapEff:	ffffffffffffffff
CapBnd:	ffffffffffffffff
Cpus_allowed:	1
Cpus_allowed_list:	0
Mems_allowed:	00000000,00000001
Mems_allowed_list:	0
voluntary_ctxt_switches:	84
nonvoluntary_ctxt_switches:	0

So, what does this mean? Well, the important part is at the top. We can see from the status file that this process belongs to xenwatch. Its current state is sleeping, and its process ID is, obviously, 12. We can also see who is running it: the UID and GID are 0, indicating that this process belongs to the root user.
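You can pull the same fields out of any status file programmatically. The sketch below inspects the current shell's own entry, using the shell's $$ PID variable as a safe target:

```shell
# /proc/$$/status describes the shell running this snippet.
name=$(awk '/^Name:/  {print $2}' /proc/$$/status)
state=$(awk '/^State:/ {print $2}' /proc/$$/status)
ppid=$(awk '/^PPid:/  {print $2}' /proc/$$/status)

echo "process $$ ($name) is in state $state, parent PID $ppid"
```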

In any numbered directory, you will have a similar file structure. The most important ones, and their descriptions, are as follows:

  1. cmdline – command line of the process
  2. environ – environmental variables
  3. fd – file descriptors
  4. limits – contains information about the limits of the process
  5. mounts – information about mounts in use by the process

You will also notice a number of links in the numbered directory:

  1. cwd – a link to the current working directory of the process
  2. exe – link to the executable of the process
  3. root – link to the root directory of the process
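Because cwd, exe and root are symbolic links, readlink resolves them; again using the current shell's own /proc entry as a harmless example target:

```shell
# Resolve the three links for the current shell's process.
readlink /proc/$$/cwd    # where the shell was started or last cd'd to
readlink /proc/$$/exe    # full path of the shell binary itself
readlink /proc/$$/root   # normally / unless the process is chrooted
```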

This should get you started with familiarizing yourself with the /proc directory. It should also provide insight into how a number of commands obtain their info, such as uptime, lsof, mount, and ps, just to name a few.

Source

Learn How to Use ‘fuser’ Command with Examples in Linux

One of the most important tasks in Linux systems administration is process management. It involves several operations, including monitoring and signaling processes as well as setting process priorities on the system.

There are numerous Linux tools/utilities designed for monitoring/handling processes, such as top, ps, pgrep, kill, killall and nice, coupled with many others.

In this article, we shall uncover how to find processes using a resourceful Linux utility called fuser.

Suggested Read: Find Top Running Processes by Highest Memory and CPU Usage

fuser is a simple yet powerful command line utility intended to locate processes based on the files, directories or sockets a particular process is accessing. In short, it helps a system user identify processes using files or sockets.

How to Use fuser in Linux Systems

The conventional syntax for using fuser is:

# fuser [options] [file|socket]
# fuser [options] -SIGNAL [file|socket]
# fuser -l 

Below are a few examples of using fuser to locate processes on your system.

Find Which Process Accessing a Directory

Running the fuser command without any options displays the PIDs of processes currently accessing your current working directory.

$ fuser .
OR
$ fuser /home/tecmint

Find Running Processes of Directory

For a more detailed and clear output, enable the -v or --verbose option as follows. In the output, fuser prints out the name of the current directory, then columns for the process owner (USER), process ID (PID), the access type (ACCESS) and command (COMMAND), as in the image below.

$ fuser -v

List of Running Processes of Directory

Under the ACCESS column, you will see access types signified by the following letters:

  1. c – current directory
  2. e – an executable file being run
  3. f – open file (f is left out in the output)
  4. F – open file for writing (F is likewise excluded from the output)
  5. r – root directory
  6. m – mmap’ed file or shared library

Find Which Process Accessing A File System

Next, you can determine which processes are accessing the filesystem on which your ~/.bashrc file resides, like so:

$ fuser -v -m .bashrc

The option -m NAME or --mount NAME names all processes accessing the filesystem on which the file NAME resides. If you specify a directory as NAME, it is automatically changed to NAME/, to match any file system that is possibly mounted on that directory.

Suggested Read: Find Top 15 Processes by Memory Usage in Linux

How to Kill and Signal Processes Using fuser

In this section we shall work through using fuser to kill and send signals to processes.

In order to kill processes accessing a file or socket, employ the -k or --kill option like so:

$ sudo fuser -k .

To interactively kill a process, where you are asked to confirm your intention to kill each process accessing a file or socket, make use of the -i or --interactive option:

$ sudo fuser -ki .

Interactively Kill Process in Linux

The two previous commands will kill all processes accessing your current directory; the default signal sent to the processes is SIGKILL, except when -SIGNAL is used.

Suggested Read: A Guide to Kill, Pkill and Killall Commands in Linux

You can list all available signals using the -l or --list-signals option as below:

$ sudo fuser --list-signals 

List All Kill Process Signals

Therefore, you can send a signal to processes as in the next command, where SIGNAL is any of the signals listed in the output above.

$ sudo fuser -k -SIGNAL [file|socket]

For example, this command below sends the HUP signal to all processes that have your /boot directory open.

$ sudo fuser -k -HUP /boot 

Read through the fuser man page for advanced usage options and additional, more detailed information.

That is it for now. You can reach us by means of the feedback section below for any assistance that you may need or suggestions you wish to make.

Source

10 Amazing and Mysterious Uses of (!) Symbol or Operator in Linux Commands

The '!' symbol or operator in Linux can be used as a Logical Negation operator, as well as to fetch commands from history with tweaks, or to run a previously run command with modification. All the commands below have been checked explicitly in the bash shell. Though I have not verified them, most of these won’t run in other shells. Here we go into the amazing and mysterious uses of the '!' symbol or operator in Linux commands.

1. Run a command from history by command number.

You might not be aware of the fact that you can run a command from your command history (already/earlier executed commands). To get started, first find the command number by running the ‘history‘ command.

$ history

Find Last Executed Commands with History Command

Now run a command from history just by the number at which it appears in the output of history. Say, run the command that appears at number 1551 in the output of the ‘history‘ command.

$ !1551

Run Last Executed Commands by Number ID

And it runs the command (the top command in the above case) that was listed at number 1551. This way of retrieving an already executed command is very helpful, especially in the case of long commands. You just need to call it using ![Number at which it appears in the output of the history command].

2. Run a previously executed command as the 2nd last command, 7th last command, etc.

You may run commands which you have run previously by their position in the running sequence: the last run command is represented as -1, the second last as -2, the seventh last as -7, and so on.

First run the history command to get a list of the last executed commands. It is necessary to run the history command so that you can be sure there is no dangerous command like rm command > file among them — just to make sure you do not run anything destructive accidentally. Then check the sixth last command, eighth last command and tenth last command.

$ history
$ !-6
$ !-8
$ !-10

Run Last Executed Commands By Numbers

3. Pass arguments of last command that we run to the new command without retyping

I need to list the content of directory ‘/home/$USER/Binary/firefox‘ so I fired.

$ ls /home/$USER/Binary/firefox

Then I realized that I should have fired ‘ls -l‘ to see which files are executable there. So should I type the whole command again? No, I don’t need to. I just need to carry the last argument over to the new command, as:

$ ls -l !$

Here !$ carries the last argument passed in the last command over to the new command.

Pass Arguments of Last Executed Command to New

4. How to handle two or more arguments using (!)

Let’s say I created a text file 1.txt on the Desktop.

$ touch /home/avi/Desktop/1.txt

and then copy it to ‘/home/avi/Downloads‘ using complete paths on either side with the cp command.

$ cp /home/avi/Desktop/1.txt /home/avi/Downloads

Now we have passed two arguments to the cp command. The first is ‘/home/avi/Desktop/1.txt‘ and the second is ‘/home/avi/Downloads‘. Let’s handle them separately — just execute echo [arguments] to print each argument individually.

$ echo "1st Argument is : !^"
$ echo "2nd Argument is : !cp:2"

Note that the 1st argument can be printed as “!^” and the rest of the arguments can be printed by executing “![Name_of_Command]:[Number_of_argument]”.

In the above example, the first command was ‘cp‘ and the 2nd argument needed to be printed, hence “!cp:2”. If any command, say xyz, is run with 5 arguments and you need the 4th argument, you may use “!xyz:4”, and use it as you like. All the arguments can be accessed with “!*”.

Handle Two or More Arguments

5. Execute last command on the basis of keywords

We can execute the last executed command on the basis of keywords. We can understand it as follows:

$ ls /home > /dev/null						[Command 1]
$ ls -l /home/avi/Desktop > /dev/null		                [Command 2]	
$ ls -la /home/avi/Downloads > /dev/null	                [Command 3]
$ ls -lA /usr/bin > /dev/null				        [Command 4]

Here we have used the same command (ls) but with different switches and for different folders. Moreover, we have sent the output of each command to ‘/dev/null‘ as we are not going to deal with the output of the commands; this also keeps the console clean.

Now execute the last run command on the basis of a keyword.

$ !ls					[Command 1]
$ !ls -l				[Command 2]	
$ !ls -la				[Command 3]
$ !ls -lA				[Command 4]

Check the output and you will be astonished that you are running already executed commands just by their ls keyword. Note that !ls recalls the most recent command from history that starts with ls, with anything typed after it appended; to match a command containing a string anywhere, use the !?string? form instead.

Run Commands Based on Keywords

6. The power of !! Operator

You can run/alter your last run command using (!!). It will call the last run command, allowing an alteration/tweak in the current command. Let me show you a scenario.

The other day I ran a one-liner script to get my private IP, so I ran:

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/

Then I suddenly figured out that I needed to redirect the output of the above script to a file ip.txt. So what should I do? Should I retype the whole command again and redirect the output to a file? Well, an easy solution is to press the UP navigation key and add '> ip.txt' to redirect the output to a file, as:

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/ > ip.txt

Thanks to the life-saving UP navigation key here. Now consider the following condition, the next time I run the below one-liner script.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

As soon as I ran the script, the bash prompt returned an error with the message “bash: ifconfig: command not found”. It was not difficult for me to guess that I had run this command as a normal user, where it should be run as root.

So what’s the solution? It is tedious to log in as root and then type the whole command again! Also, the UP navigation key from the last example doesn’t come to the rescue here. So? We need to call “!!” (without quotes), which will be replaced by the last command run by that user.

$ su -c "!!" root

Here su switches to the root user, -c runs the specified command as that user, and the most important part, !!, will be replaced by the last run command, which is substituted there. Yes, you need to provide the root password.
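You can preview what !! will expand to before trusting it inside su -c, again with bash's history -p builtin; a small sketch (the failed ifconfig one-liner stands in as the "last command"):

```shell
set -o history                                  # enable history inside the script
history -s "ifconfig | grep 'inet addr:'"       # pretend: the command that just failed

# Print the text that !! would be replaced with inside su -c "!!",
# without executing anything.
history -p '!!'
```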

The Power of !! Key

I make use of !! mostly in the following scenarios.

1. When I run an apt-get command as a normal user, I usually get an error saying I don’t have permission to execute it.

$ apt-get upgrade && apt-get dist-upgrade

Oops, an error… don’t worry, execute the below command to get it to run successfully:

$ su -c !!

Same way I do for,

$ service apache2 start
or
$ /etc/init.d/apache2 start
or
$ systemctl start apache2

Oops, the user is not authorized to carry out such a task, so I run:

$ su -c 'service apache2 start'
or
$ su -c '/etc/init.d/apache2 start'
or
$ su -c 'systemctl start apache2'

7. Run a command that affects all the files except ![FILE_NAME]

The ! (Logical NOT) can be used to run a command on all the files/extensions except the one named after '!'. Note that here ! is part of bash’s extended globbing pattern !(pattern), which requires the extglob shell option to be enabled (shopt -s extglob).

A. Remove all the files from a directory except the one named 2.txt.

$ rm !(2.txt)

B. Remove all file types from the folder except those with the ‘pdf‘ extension.

$ rm !(*.pdf)
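Since !(pattern) is extended globbing rather than history expansion, it only works once the extglob option is on. A safe sketch in a throwaway directory, so nothing real gets deleted:

```shell
shopt -s extglob            # !(pattern) requires extended globbing

dir=$(mktemp -d)            # throwaway directory so nothing real is deleted
cd "$dir"
touch 1.txt 2.txt 3.txt notes.pdf

rm !(2.txt)                 # remove everything except 2.txt
ls                          # only 2.txt is left
```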

8. Check if a directory (say /home/avi/Tecmint) exists or not, and print whether it exists.

Here we will use '! -d' to validate whether the directory exists or not, followed by the Logical AND operator (&&) to print that the directory does not exist, and the Logical OR operator (||) to print that the directory is present.

The logic is: when the exit status of [ ! -d /home/avi/Tecmint ] is 0 (i.e. the directory does not exist), it will execute what lies beyond the Logical AND (&&); else it will go to the Logical OR (||) and execute what lies beyond that.

$ [ ! -d /home/avi/Tecmint ] && printf '\nno such /home/avi/Tecmint directory exist\n' || printf '\n/home/avi/Tecmint directory exist\n'

9. Check if a directory exists or not; if not, exit.

Similar to the above condition, but here if the desired directory doesn’t exist, the command will exit.

$ [ ! -d /home/avi/Tecmint ] && exit

10. Create a directory (say /home/avi/Tecmint) if it does not exist.

A general implementation in shell scripting, where if the desired directory does not exist, it will be created.

[ ! -d /home/avi/Tecmint ] && mkdir /home/avi/Tecmint
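The same guard pattern drops straight into scripts; a self-contained sketch using a temporary path as a hypothetical stand-in for /home/avi/Tecmint:

```shell
# Hypothetical stand-in path instead of /home/avi/Tecmint.
target=$(mktemp -d)/test

# Create the directory only when it is missing; re-running is harmless,
# because once the directory exists the test fails and mkdir is skipped.
[ ! -d "$target" ] && mkdir "$target"

ls -d "$target"
```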

That’s all for now. If you know of or come across any other use of '!' which is worth knowing, please provide us with your suggestions in the feedback. Keep connected!

Source

MultiCD – Create a MultiBoot Linux Live USB

Having a single CD or USB drive with multiple operating systems available for installation can be extremely useful in all kinds of scenarios. Whether for quickly testing or debugging something, or simply reinstalling the operating system of your laptop or PC, this can save you lots of time.

Read AlsoHow to Install Linux on USB and Run It On Any PC

In this article, you will learn how to create multi-bootable USB media using a tool called MultiCD – a shell script designed to create a multiboot image from different Linux distributions (meaning it combines several boot CDs into one). That image can later be written to a CD/DVD or flash drive so you can use it to install the OS of your choice.

The advantages to making a CD with MultiCD script are:

  • No need to create multiple CDs for small distributions.
  • If you already have the ISO images, it’s not required to download them again.
  • When a new distribution is released, simply download it and run the script again to build a new multiboot image.

Read Also2 Ways to Create an ISO from a Bootable USB in Linux

Download MultiCD Script

MultiCD can be obtained either by using the git command or by downloading the tar archive.

If you wish to use the git repository, use the following command.

# git clone https://github.com/IsaacSchemm/MultiCD.git

Create Multiboot Image

Before we start creating our multiboot image, we will need to download the images for the Linux distributions we like to use. You can see a list of all supported Linux distros on the MultiCD page.

Once you have downloaded the image files, you will have to place them in the same directory as the MultiCD script. For me, that directory is MultiCD. For the purpose of this tutorial, I have prepared two ISO images:

CentOS-7 minimal
Ubuntu 18 desktop

Multi Linux Distros

It is important to note that the downloaded images should be renamed as listed in the supported distros list, or a symlink should be created. Reviewing the supported images, you can see that the filename for Ubuntu can remain the same as the original file.

For CentOS however, it must be renamed to centos-boot.iso as shown.

# mv CentOS-7-x86_64-Minimal-1810.iso centos-boot.iso
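Alternatively, a symbolic link lets you keep the original filename untouched (the ISO name below is the one from this example):

```shell
# Point the filename MultiCD expects at the original, unrenamed ISO.
ln -s CentOS-7-x86_64-Minimal-1810.iso centos-boot.iso

ls -l centos-boot.iso    # shows the link and its target
```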

Now to create the multiboot image, run the following command.

$ sudo ./multicd.sh

The script will look for your .iso files and attempt to create the new file.

Create Multiboot Linux Image

Once the process is complete, you will end up having a file called multicd.iso inside the build folder. You can now burn the new image file to CD or USB flash drive. Next you can test it by trying to boot from the new media. The boot page should look like this:

Test Multiboot Media

Choose the OS you wish to install and you will be redirected to the options for that OS.

Select Linux Distro to Install

Just like that, you can create a single bootable medium with multiple Linux distros on it. The most important part is to always check the correct name for the ISO image that you want to include, as otherwise it might not be detected by multicd.sh.

Conclusion

MultiCD is no doubt one of those useful tools that can save you time burning CDs or creating multiple bootable flash drives. Personally, I have created my own USB flash drive with a few distros on it to keep on my desk. You never know when you will want to install another distro on your device.

Source

How to Repair and Defragment Linux System Partitions and Directories

People who use Linux often think that it doesn’t require defragmentation. This is a common misunderstanding among Linux users. Actually, the Linux operating system does support defragmentation. The point of defragmentation is to improve I/O operations, for example allowing local videos to load faster or archives to be extracted significantly faster.

Defragment Linux System Partitions and Directories

The Linux ext2, ext3 and ext4 filesystems don’t need that much attention, but with time, after many read/write operations, a filesystem may require optimization. Otherwise the hard disk might become slower and may affect the entire system.

In this tutorial I am going to show you a few different techniques to perform defragmentation of files. Before we start, we should mention what common filesystems like ext2, ext3 and ext4 do to prevent fragmentation. These filesystems include techniques to prevent it — for example, they reserve free block groups on the hard disk so that growing files can be stored contiguously.

Unfortunately the problem is not always solved by such mechanisms. While other operating systems may require expensive additional software to resolve such issues, Linux has some easy-to-install tools that can help you resolve them.

How to Check if a Filesystem Requires Defragmentation

Before we start, I would like to point out that the operations below should only be run on HDDs and not on SSDs. Defragging your SSD will only increase its read/write count and therefore shorten its life. Instead, if you are using an SSD, you should use the TRIM function, which is not covered in this tutorial.

Let’s test whether the system actually requires defragmentation. We can easily check this with a tool such as e2fsck. Before you use this tool on a partition on your system, it is recommended to unmount that partition. This is not strictly necessary, but it’s the safe way to go:

$ sudo umount <device file>

In my case I have /dev/sda1 mounted at /tmp:

Disk Partition Table Before

Keep in mind that in your case the partition table might be different so make sure to unmount the right partition. To unmount that partition you can use:

$ sudo umount /dev/sda1

Now let’s check if this partition requires defragmentation, with e2fsck. You will need to run the following command:

$ sudo e2fsck -fn /dev/sda1

The above command will perform a file system check. The -f option forces the check, even if the system seems clean. The -n option opens the filesystem read-only and assumes an answer of "no" to all questions that may appear.

This option basically allows you to use e2fsck non-interactively. If everything is okay, you should see a result similar to the one shown in the screenshot below:

e2fsck Healthy Partition

Here is another example that shows errors on a system:

e2fsck With Errors

How to Repair Linux Filesystem Using e2fsck

If errors appear, you can attempt a repair of the filesystem with e2fsck’s “-p” option. Note that in order to run the command below, the partition will need to be unmounted:

$ sudo e2fsck -p <device file>

The “-p” option attempts automatic repair of the file system for problems that can be safely fixed without human intervention. If a problem is discovered that may require the system administrator to take additional corrective action, e2fsck will print a description of the problem and will exit with code 4, which means “File system errors left uncorrected”. Depending on the issue that has been found, different actions might be required.
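A safe way to see these exit codes in action without touching a real partition is to build a small filesystem image in an ordinary file (assuming e2fsprogs is installed, which provides both mke2fs and e2fsck):

```shell
# Create a 16 MB file and format it as ext4; -F allows a non-device target.
img=$(mktemp)
truncate -s 16M "$img"
mke2fs -q -F -t ext4 "$img"

# A forced read-only check of the fresh image should find no errors (exit 0).
e2fsck -fn "$img"
echo "e2fsck exit status: $?"

rm -f "$img"
```

The same -f/-n/-p flags behave identically on a real device file, so the image is a risk-free sandbox for learning them.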

For the defragmentation itself — including on partitions that cannot be unmounted, since it works on mounted ext4 filesystems — you can use another tool called e4defrag. It comes pre-installed on many Linux distros, but if you don’t have it on yours, you can install it with:

$ sudo apt-get install e2fsprogs         [On Debian and Derivatives]
# yum install e2fsprogs                  [On CentOS based systems]
# dnf install e2fsprogs                  [On Fedora 22+ versions] 

How to Defragment Linux Partitions

Now it’s time to defragment Linux partitions using the following command.

$ sudo e4defrag <location>
or
$ sudo e4defrag <device>

How to Defragment Linux Directory

For example, if you wish to defragment a single directory or device, you can use:

$ sudo e4defrag /home/user/directory/
$ sudo e4defrag /dev/sda5

How to Defragment All Linux Partitions

If you prefer to defragment your entire system, the safe way of doing this is:

$ sudo e4defrag /

Keep in mind that this process may take some time to be completed.

Conclusion

Defragmentation is an operation that you will rarely need to run on Linux. It’s meant for power users who know exactly what they are doing and is not recommended for Linux newbies. The point of the whole exercise is to have your filesystem optimized so that read/write operations are performed more efficiently.

Source
