MTR – A Network Diagnostic Tool for Linux

MTR is a simple, cross-platform command-line network diagnostic tool that combines the functionality of the commonly used traceroute and ping programs into a single tool. In a similar fashion to traceroute, mtr prints information about the route that packets take from the host on which mtr is run to a user-specified destination host.

Read Also: How to Audit Network Performance, Security and Troubleshoot in Linux

However, mtr shows a greater wealth of information than traceroute: it determines the pathway to a remote machine while printing the response percentage as well as the response times of all network hops in the internet route between the local system and the remote machine.

How Does MTR Work?

Once you run mtr, it probes the network connection between the local system and a remote host that you have specified. It first establishes the address of each network hop (bridges, routers, gateways, etc.) between the hosts, then pings (sends a sequence of ICMP ECHO requests to) each one to determine the quality of the link to each machine.

During the course of this operation, mtr outputs some useful statistics about each machine – updated in real-time, by default.

This tool comes pre-installed on most Linux distributions and is fairly easy to use once you go through the 10 mtr command examples for network diagnostics in Linux, explained below.

If mtr is not installed, you can install it on your respective Linux distribution using your default package manager as shown.

$ sudo apt install mtr        [On Debian/Ubuntu]
$ sudo yum install mtr        [On CentOS/RHEL]
$ sudo dnf install mtr        [On Fedora]

10 MTR Network Diagnostics Tool Usage Examples

1. The simplest example of using mtr is to provide the domain name or IP address of the remote machine as an argument, for example google.com or 216.58.223.78. This command will show you a traceroute report updated in real-time, until you exit the program (by pressing q or Ctrl + C).

$ mtr google.com
OR
$ mtr 216.58.223.78

Start: Thu Jun 28 12:10:13 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.9   0.7   1.3   0.0
  3.|-- 209.snat-111-91-120.hns.n 80.0%     5    7.1   7.1   7.1   7.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.9   2.9   1.9   4.4   1.1
  5.|-- 108.170.248.161            0.0%     5    2.9   3.5   2.0   4.3   0.7
  6.|-- 216.239.62.237             0.0%     5    3.0   6.2   2.9  18.3   6.7
  7.|-- bom05s12-in-f14.1e100.net  0.0%     5    2.1   2.4   2.0   3.8   0.5

2. You can force mtr to display numeric IP addresses instead of host names (typically FQDNs – Fully Qualified Domain Names), using the -n flag as shown.

$ mtr -n google.com

Start: Thu Jun 28 12:12:58 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.9   0.9   0.8   1.1   0.0
  3.|-- ???                       100.0     5    0.0   0.0   0.0   0.0   0.0
  4.|-- 72.14.194.226              0.0%     5    2.0   2.0   1.9   2.0   0.0
  5.|-- 108.170.248.161            0.0%     5    2.3   2.3   2.2   2.4   0.0
  6.|-- 216.239.62.237             0.0%     5    3.0   3.2   3.0   3.3   0.0
  7.|-- 172.217.160.174            0.0%     5    3.7   3.6   2.0   5.3   1.4

3. If you would like mtr to display both host names as well as numeric IP addresses, use the -b flag as shown.

$ mtr -b google.com

Start: Thu Jun 28 12:14:36 2018
HOST: TecMint                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.0.1                0.0%     5    0.3   0.3   0.3   0.4   0.0
  2.|-- 5.5.5.211                  0.0%     5    0.7   0.8   0.6   1.0   0.0
  3.|-- 209.snat-111-91-120.hns.n  0.0%     5    1.4   1.6   1.3   2.1   0.0
  4.|-- 72.14.194.226              0.0%     5    1.8   2.1   1.8   2.6   0.0
  5.|-- 108.170.248.209            0.0%     5    2.0   1.9   1.8   2.0   0.0
  6.|-- 216.239.56.115             0.0%     5    2.4   2.7   2.4   2.9   0.0
  7.|-- bom07s15-in-f14.1e100.net  0.0%     5    3.7   2.2   1.7   3.7   0.9

4. To limit the number of pings to a specific value and exit mtr after those pings, use the -c flag. If you observe the Snt column, once the specified number of pings is reached, the live update stops and the program exits.

$ mtr -c5 google.com

5. You can put mtr into report mode using the -r flag, a useful option for producing statistics concerning network quality. You can use this option together with the -c option to specify the number of pings. Since the statistics are printed to standard output, you can redirect them to a file for later analysis.

$ mtr -r -c 5 google.com >mtr-report

The -w flag enables wide report mode for a clearer output.

$ mtr -rw -c 5 google.com >mtr-report

6. You can also re-arrange the output fields the way you wish; this is made possible by the -o flag as shown (see the mtr man page for the meaning of the field labels).

$ mtr -o "LSDR NBAW JMXI" 216.58.223.78

MTR Fields and Order

7. The default interval between ICMP ECHO requests is one second; you can change it using the -i flag as shown.

$ mtr -i 2 google.com

8. You can use TCP SYN packets or UDP datagrams instead of the default ICMP ECHO requests as shown.

$ mtr --tcp test.com
OR
$ mtr --udp test.com 

9. To specify the maximum number of hops (default is 30) to be probed between the local system and the remote machine, use the -m flag.

$ mtr -m 35 216.58.223.78

10. While probing network quality, you can set the size of the packets used, in bytes, with the -s flag, like so.

$ mtr -r -s PACKETSIZE -c 5 google.com >mtr-report
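For instance, a concrete invocation might use 512-byte packets (an arbitrary size chosen purely for illustration):

$ mtr -r -s 512 -c 5 google.com >mtr-report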

With these examples, you should be good to go with using mtr; see the man page for more usage options.

$ man mtr 

Also check out these useful guides about Linux network configurations and troubleshooting:

  1. 13 Linux Network Configuration and Troubleshooting Commands
  2. How to Block Ping ICMP Requests to Linux Systems

That’s it for now! MTR is a simple, easy-to-use and above all cross-platform network diagnostics tool. In this guide, we have explained 10 mtr command examples in Linux. If you have any questions, or thoughts to share with us, use the comment form below.

Source

How to Backup or Clone Linux Partitions Using ‘cat’ Command

A rough use of the Linux cat command is to make a full disk backup or a disk partition backup, or to clone a disk partition, by redirecting the command output to another hard disk partition, a USB stick or a local image file, or by writing the output to a network socket.

Linux Filesystem Backup Using ‘cat’ Command
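As a rough sketch of the image-file case mentioned above (the path /backup/sda1.img is just an example location, and the partition should ideally be unmounted first), a partition can be dumped to a file and later restored with plain redirection:

# cat /dev/sda1 > /backup/sda1.img       # dump the partition to an image file (example path)
# cat /backup/sda1.img > /dev/sda1       # restore the image back onto the partition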

It is absolutely normal to wonder why we should use cat over dd when the latter does the same job easily, which is a fair question; however, I recently realized that cat is much faster than dd when it comes to speed and performance.

I do agree that dd provides even more options and is also very useful in dealing with large backups such as tape drives (How to Clone Linux Partitions Using ‘dd’ Command), whereas cat includes fewer options; it’s not necessarily a worthy dd replacement, but it still remains an option wherever applicable.

Suggested Read: How to Clone or Backup Linux Disk Using Clonezilla

Trust me, it gets the job done quite successfully in copying the content of a partition to a new, unformatted partition. The only requirements are a valid hard disk partition at least the size of the existing data and with no filesystem on it.

In the below example the first partition on the first hard disk, which corresponds to the /boot partition i.e. /dev/sda1, is cloned onto the first partition of the second disk (i.e. /dev/sdb1) using the Linux redirection operator.

# cat /dev/sda1 > /dev/sdb1

Full Disk Partition Backup in Linux

After the command finishes, the cloned partition is mounted to /mnt and both mount point directories are listed to check whether any files are missing.

# mount /dev/sdb1 /mnt
# ls /mnt
# ls /boot

Verify Cloned Partition Files
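For a more thorough check than eyeballing the two listings, a recursive diff can also be used; it prints nothing when the two directory trees are identical (a quick sanity check rather than a byte-level verification of the devices):

# diff -r /boot /mnt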

In order to extend the partition’s filesystem to the maximum size, issue the following command with root privileges.

Suggested Read: 14 Outstanding Backup Utilities for Linux Systems

$ sudo resize2fs /dev/sdb1

Resize or Extend Partition Size in Linux

The cat command is an excellent tool to manipulate text files in Linux, and some special multimedia files, but it should be avoided for binary data files or for concatenating files that start with a shebang line. For all other options, don’t hesitate to execute man cat from the console.

$ man cat

Surprisingly, there is another command called tac. Yes, I am talking about tac, a reverse version of the cat command (and also its name spelled backwards), which displays each line of a file in reverse order. Want to know more about tac? Read How to Use Tac Command in Linux.

Source

Useful Commands to Create Commandline Chat Server and Remove Unwanted Packages in Linux

Here we are with the next part of Linux Command Line Tips and Tricks. If you missed our previous post on Linux Tricks you may find it here.

  1. 5 Linux Command Line Tricks

In this post we will be introducing 6 command line tips: create a Linux command line chat using the Netcat command, sum a column on the fly from the output of a command, remove orphan packages from Debian and CentOS, get local and remote IPs from the command line, get colored output in the terminal and decode the various color codes, and last but not least, use hash tags on the Linux command line. Let’s check them one by one.

6 Useful Commandline Tricks and Tips

1. Create Linux Commandline Chat Server

We all have been using chat services for a long time. We are familiar with Google chat, Hangouts, Facebook chat, WhatsApp, Hike and several other applications and integrated chat services. Did you know that the Linux nc command can turn your Linux box into a chat server with just a single command?

What is the nc command in Linux and what does it do?

nc is short for the Linux netcat command. The nc utility is often referred to as a Swiss army knife, based upon the number of its built-in capabilities. It is used as a debugging tool, an investigation tool, for reading and writing to network connections using TCP/UDP, and for DNS forward/reverse checking.

It is prominently used for port scanning, file transfers, backdoors and port listening. nc has the ability to use any local unused port and any local network source address.

Use nc command (On Server with IP address: 192.168.0.7) to create a command line messaging server instantly.

$ nc -l -vv -p 11119

Explanation of the above command switches.

  1. -v : means Verbose
  2. -vv : more verbose
  3. -p : The local port Number

You may replace 11119 with any other local port number.

Next, on the client machine (IP address: 192.168.0.15), run the following command to initialize a chat session with the machine where the messaging server is running.

$ nc 192.168.0.7 11119

Linux Commandline Chat with nc Command

Note: You can terminate the chat session by hitting the ctrl+c keys; also note that nc chat is a one-to-one service.
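nc is not limited to chat. As a quick illustration of the file transfer use mentioned earlier (a sketch using the traditional netcat syntax; the exact flags vary between netcat implementations, and the file names here are placeholders):

$ nc -l -p 11119 > received_file             [on the receiving machine]
$ nc 192.168.0.7 11119 < file_to_send        [on the sending machine]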

2. How to Sum Values in a Column in Linux

Here is how to sum the numerical values of a column, generated as the output of a command, on the fly in the terminal.

The output of the ‘ls -l‘ command.

$ ls -l

Sum Numerical Values

Notice that the second column is numerical and represents the number of hard links, and the 5th column is numerical and represents the size of the file. Say we need to sum the values of the fifth column on the fly.

List the contents of the 5th column without printing anything else. We will be using the ‘awk‘ command to do this. ‘$5‘ represents the 5th column.

$ ls -l | awk '{print $5}'

List Content Column

Now use awk to print the sum of the output of the 5th column by piping it.

$ ls -l | awk '{print $5}' | awk '{total = total + $1}END{print total}'

Sum and Print Columns
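The same total can also be produced with a single awk invocation, a slightly shorter equivalent of the two-stage pipeline above:

$ ls -l | awk '{sum += $5} END {print sum}'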

3. How to Remove Orphan Packages in Linux

Orphan packages are those packages that are installed as a dependency of another package and no longer required when the original package is removed.

Say we installed a package gtprogram which was dependent on gtdependency. We can’t install gtprogram unless gtdependency is installed.

When we remove gtprogram it won’t remove gtdependency by default. And if we don’t remove gtdependency, it will remain as an orphan package with no connection to any other package.

# yum autoremove                [On RedHat Systems]

Remove Orphan Packages in CentOS

# apt-get autoremove                [On Debian Systems]

Remove Orphan Packages in Debian

You should always remove Orphan Packages to keep the Linux box loaded with just necessary stuff and nothing else.

4. How to Get Local and Public IP Address of Linux Server

To get your local IP address, run the one-liner script below.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

You must have ifconfig installed; if not, use apt or yum to install the required package. Here we will be piping the output of ifconfig to the grep command to find the string “inet addr:”.

We know the ifconfig command is sufficient to output the local IP address, but it generates a lot of other output as well, and our concern here is to print only the local IP address and nothing else.

# ifconfig | grep "inet addr:"

Check Local IP Address

Although the output is more focused now, we still need to filter out our local IP address only and nothing else. For this we will use awk to print only the second column, by piping it to the above script.

# ifconfig | grep "inet addr:" | awk '{print $2}'

Filter Only IP Address

It is clear from the above image that we have customized the output very much, but it is still not what we want. The loopback address 127.0.0.1 is still there in the result.

We will use the -v flag with grep, which prints only those lines that don’t match the pattern provided as an argument. Every machine has the same loopback address 127.0.0.1, so we use grep -v to print those lines that don’t contain this string, by piping it to the above output.

# ifconfig | grep "inet addr" | awk '{print $2}' | grep -v '127.0.0.1'

Print IP Address

We have almost generated the desired output; we just need to remove the string (addr:) from the beginning. We will use the cut command to print only column two. Column 1 and column 2 are not separated by a tab but by (:), so we need to specify the delimiter (-d) while piping the above output.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

Customized IP Address

Finally! The desired result has been generated.
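On newer distributions where ifconfig is not installed by default, a similar pipeline built on the ip command should give the same result (a sketch; grepping for 'inet ' with a trailing space is what excludes the IPv6 lines):

# ip addr show | grep "inet " | grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/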

5. How to Color Linux Terminal

You might have seen colored output in the terminal. You probably also know how to enable/disable colored output in the terminal. If not, you may follow the steps below.

In Linux, every user has a '.bashrc' file; this file is used to configure your terminal output. Open and edit this file with your editor of choice. Note that this file is hidden (a dot at the beginning of a file name means hidden).

$ vi /home/$USER/.bashrc

Make sure that the following lines are uncommented, i.e., they don’t start with a #.

if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

User .bashrc File

Once done, save and exit. To make the changes take effect, log out and then log in again.

Now you will see files and folders listed in various colors based upon the type of file. To decode the color codes, run the command below.

$ dircolors -p

Since the output is too long, let’s pipe the output through the less command so that we get the output one screen at a time.

$ dircolors -p | less

Linux Color Output

6. How to Hash Tag Linux Commands and Scripts

We use hash tags on Twitter, Facebook and Google Plus (and maybe some other places I have not noticed). These hash tags make it easier for others to search for them. Very few know that we can use hash tags on the Linux command line.

We already know that # in configuration files and most programming languages is treated as a comment line and is excluded from execution.

Run a command and then create a hash tag for it so that we can find it later. Say we have the long script that was executed in point 4 above. Now create a hash tag for it. We know ifconfig can be run by sudo or the root user, hence we are acting as root.

# ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d: #myip

The script above has been hash tagged with ‘myip‘. Now search for the hash tag with reverse-i-search (press ctrl+r) in the terminal and type ‘myip‘. You may execute it from there as well.

Create Command Hash Tags

You may create as many hash tags for every command and find it later using reverse-i-search.
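Besides reverse-i-search, you can also simply grep your shell history for the tag, assuming the tagged command is in the current session or has already been written to the history file:

$ history | grep '#myip'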

That’s all for now. We have been working hard to produce interesting and knowledgeable content for you. How do you think we are doing? Any suggestion is welcome. You may comment in the box below. Keep connected! Kudos.

Source

Exploring /proc File System in Linux

Today, we are going to take a look inside the /proc directory and develop a familiarity with it. The /proc directory is present on all Linux systems, regardless of flavor or architecture.

One misconception that we have to immediately clear up is that the /proc directory is NOT a real file system, in the traditional sense of the term. It is a virtual file system. Contained within procfs is information about processes and other system information. It is mapped to /proc and mounted at boot time.

Exploring /proc File System

First, lets get into the /proc directory and have a look around:

# cd /proc

The first thing that you will notice is that there are some familiar sounding files, and then a whole bunch of numbered directories. The numbered directories represent processes, better known as PIDs, and within each is information about the command that occupies it. The files contain system information such as memory (meminfo), CPU information (cpuinfo), and available filesystems.

Read Also:  Linux Free Command to Check Physical Memory and Swap Memory

Let’s take a look at one of the files first:

# cat /proc/meminfo
Sample Output

which returns something similar to this:

MemTotal:         604340 kB
MemFree:           54240 kB
Buffers:           18700 kB
Cached:           369020 kB
SwapCached:            0 kB
Active:           312556 kB
Inactive:         164856 kB
Active(anon):      89744 kB
Inactive(anon):      360 kB
Active(file):     222812 kB
Inactive(file):   164496 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         89724 kB
Mapped:            18012 kB
Shmem:               412 kB
Slab:              50104 kB
SReclaimable:      40224 kB
...

As you can see, /proc/meminfo contains a bunch of information about your system’s memory, including the total amount available (in kb) and the amount free on the top two lines.

Running the cat command on any of the files in /proc will output its contents. These files are documented in the proc(5) manual page, which you can read by running:

# man 5 proc

I will give you a quick rundown of /proc’s files:

  1. /proc/cmdline – Kernel command line information.
  2. /proc/console – Information about current consoles including tty.
  3. /proc/devices – Device drivers currently configured for the running kernel.
  4. /proc/dma – Info about current DMA channels.
  5. /proc/fb – Framebuffer devices.
  6. /proc/filesystems – Current filesystems supported by the kernel.
  7. /proc/iomem – Current system memory map for devices.
  8. /proc/ioports – Registered port regions for input/output communication with devices.
  9. /proc/loadavg – System load average.
  10. /proc/locks – Files currently locked by kernel.
  11. /proc/meminfo – Info about system memory (see above example).
  12. /proc/misc – Miscellaneous drivers registered for miscellaneous major device.
  13. /proc/modules – Currently loaded kernel modules.
  14. /proc/mounts – List of all mounts in use by system.
  15. /proc/partitions – Detailed info about partitions available to the system.
  16. /proc/pci – Information about every PCI device.
  17. /proc/stat – Record of various statistics kept since the last reboot.
  18. /proc/swaps – Information about swap space.
  19. /proc/uptime – Uptime information (in seconds).
  20. /proc/version – Kernel version, gcc version, and Linux distribution installed.
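A few of the files listed above can be inspected directly with cat to get a feel for the format of their contents, for example:

# cat /proc/version
# cat /proc/loadavg
# cat /proc/uptime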

Within /proc’s numbered directories you will find a few files and links. Remember that these directories’ numbers correlate to the PID of the command being run within them. Let’s use an example. On my system, there is a folder named /proc/12:

# cd /proc/12
# ls
Sample Output
attr        coredump_filter  io         mounts      oom_score_adj  smaps    wchan
autogroup   cpuset           latency    mountstats  pagemap        stack
auxv        cwd              limits     net         personality    stat
cgroup      environ          loginuid   ns          root           statm
clear_refs  exe              maps       numa_maps   sched          status
cmdline     fd               mem        oom_adj     schedstat      syscall
comm        fdinfo           mountinfo  oom_score   sessionid      task

If I run:

# cat /proc/12/status

I get the following:

Name:	xenwatch
State:	S (sleeping)
Tgid:	12
Pid:	12
PPid:	2
TracerPid:	0
Uid:	0	0	0	0
Gid:	0	0	0	0
FDSize:	64
Groups:
Threads:	1
SigQ:	1/4592
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	ffffffffffffffff
SigCgt:	0000000000000000
CapInh:	0000000000000000
CapPrm:	ffffffffffffffff
CapEff:	ffffffffffffffff
CapBnd:	ffffffffffffffff
Cpus_allowed:	1
Cpus_allowed_list:	0
Mems_allowed:	00000000,00000001
Mems_allowed_list:	0
voluntary_ctxt_switches:	84
nonvoluntary_ctxt_switches:	0

So, what does this mean? Well, the important part is at the top. We can see from the status file that this process belongs to xenwatch. Its current state is sleeping, and its process ID is 12, obviously. We also can see who is running it, as the UID and GID are 0, indicating that this process belongs to the root user.

In any numbered directory, you will have a similar file structure. The most important ones, and their descriptions, are as follows:

  1. cmdline – command line of the process
  2. environ – environmental variables
  3. fd – file descriptors
  4. limits – contains information about the limits of the process
  5. mounts – information about mounts used by the process

You will also notice a number of links in the numbered directory:

  1. cwd – a link to the current working directory of the process
  2. exe – link to the executable of the process
  3. root – a link to the root directory of the process
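As a quick illustration of these links, you can inspect the current shell’s own entry, since $$ expands to the PID of the running shell (the cmdline file is NUL-separated, hence the tr):

# ls -l /proc/$$/cwd /proc/$$/exe        # where the shell is running and which binary it is
# tr '\0' ' ' < /proc/$$/cmdline; echo   # the command line the shell was started with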

This should get you started with familiarizing yourself with the /proc directory. It should also provide insight into how a number of commands obtain their info, such as uptime, lsof, mount, and ps, just to name a few.

Source

Learn How to Use ‘fuser’ Command with Examples in Linux

One of the most important tasks in Linux systems administration is process management. It involves several operations, including monitoring and signaling processes, as well as setting process priorities on the system.

There are numerous Linux tools/utilities designed for monitoring/handling processes such as top, ps, pgrep, kill, killall and nice, coupled with many others.

In this article, we shall uncover how to find processes using a resourceful Linux utility called fuser.

Suggested Read: Find Top Running Processes by Highest Memory and CPU Usage

fuser is a simple yet powerful command line utility intended to locate processes based on the files, directories or sockets a particular process is accessing. In short, it helps a system user identify processes using files or sockets.

How to Use fuser in Linux Systems

The conventional syntax for using fuser is:

# fuser [options] [file|socket]
# fuser [options] -SIGNAL [file|socket]
# fuser -l 

Below are a few examples of using fuser to locate processes on your system.

Find Which Process Accessing a Directory

Running the fuser command without any option displays the PIDs of processes currently accessing your current working directory.

$ fuser .
OR
$ fuser /home/tecmint

Find Running Processes of Directory

For a more detailed and clearer output, enable the -v or --verbose option as follows. In the output, fuser prints out the name of the current directory, then columns showing the process owner (USER), process ID (PID), the access type (ACCESS) and command (COMMAND), as in the image below.

$ fuser -v

List of Running Processes of Directory

Under the ACCESS column, you will see access types signified by the following letters:

  1. c – current directory
  2. e – an executable file being run
  3. f – open file, however, f is left out in the output
  4. F – open file for writing, F is as well excluded from the output
  5. r – root directory
  6. m – mmap’ed file or shared library

Find Which Process Accessing A File System

Next, you can determine which processes are accessing your ~/.bashrc file like so:

$ fuser -v -m .bashrc

The option -m NAME or --mount NAME means name all processes accessing the file NAME. In case you specify a directory as NAME, it is automatically changed to NAME/, to match any file system that is possibly mounted on that directory.
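For example, to list every process using any file on the file system that contains /home (a handy check before unmounting it), something like this can be used:

$ fuser -v -m /home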

Suggested Read: Find Top 15 Processes by Memory Usage in Linux

How to Kill and Signal Processes Using fuser

In this section we shall work through using fuser to kill and send signals to processes.

In order to kill processes accessing a file or socket, employ the -k or --kill option like so:

$ sudo fuser -k .

To interactively kill a process, where you are asked to confirm your intention to kill the processes accessing a file or socket, make use of the -i or --interactive option:

$ sudo fuser -ki .

Interactively Kill Process in Linux

The two previous commands will kill all processes accessing your current directory; the default signal sent to the processes is SIGKILL, except when -SIGNAL is used.

Suggested Read: A Guide to Kill, Pkill and Killall Commands in Linux

You can list all the signals using the -l or --list-signals options as below:

$ sudo fuser --list-signals 

List All Kill Process Signals

Therefore, you can send a signal to processes as in the next command, where SIGNAL is any of the signals listed in the output above.

$ sudo fuser -k -SIGNAL

For example, this command below sends the HUP signal to all processes that have your /boot directory open.

$ sudo fuser -k -HUP /boot 

Try to read through the fuser man page for advanced usage options, additional and more detailed information.

That is it for now, you can reach us by means of the feedback section below for any assistance that you possibly need or suggestions you wish to make.

Source

10 Amazing and Mysterious Uses of (!) Symbol or Operator in Linux Commands

The '!' symbol or operator in Linux can be used as the logical negation operator, as well as to fetch commands from history with tweaks, or to run a previously run command with modifications. All the commands below have been checked explicitly in the bash shell. Though I have not checked, a majority of these may not run in other shells. Here we go into the amazing and mysterious uses of the '!' symbol or operator in Linux commands.

1. Run a command from history by command number.

You might not be aware of the fact that you can run a command from your command history (already/earlier executed commands). To get started, first find the command number by running the ‘history‘ command.

$ history

Find Last Executed Commands with History Command

Now run a command from history just by the number at which it appears in the output of history. Say we want to run the command that appears at number 1551 in the output of the ‘history‘ command.

$ !1551

Run Last Executed Commands by Number ID

And it runs the command (the top command in the above case) that was listed at number 1551. This way of retrieving an already executed command is very helpful, especially in the case of commands which are long. You just need to call it using ![Number at which it appears in the output of the history command].

2. Run a previously executed command as the 2nd last command, 7th last command, etc.

You may run commands which you have run previously by their position in the running sequence, the last run command being represented as -1, the second last as -2, the seventh last as -7, and so on.

First run the history command to get a list of the last executed commands. It is necessary to run the history command so that you can be sure there is no command like rm command > file and others, just to make sure you do not run any dangerous command accidentally. Then check the sixth last command, the eighth last command and the tenth last command.

$ history
$ !-6
$ !-8
$ !-10

Run Last Executed Commands By Numbers

3. Pass arguments of last command that we run to the new command without retyping

I need to list the contents of the directory ‘/home/$USER/Binary/firefox‘, so I fired:

$ ls /home/$USER/Binary/firefox

Then I realized that I should have fired ‘ls -l‘ to see which files are executable there. So should I type the whole command again? No, I don’t need to. I just need to carry the last argument over to this new command, as:

$ ls -l !$

Here !$ will carry the arguments passed in the last command over to this new command.

Pass Arguments of Last Executed Command to New

4. How to handle two or more arguments using (!)

Let’s say I created a text file 1.txt on the Desktop.

$ touch /home/avi/Desktop/1.txt

and then copy it to ‘/home/avi/Downloads‘ using the complete path on either side with the cp command.

$ cp /home/avi/Desktop/1.txt /home/avi/downloads

Now we have passed two arguments with the cp command. The first is ‘/home/avi/Desktop/1.txt‘ and the second is ‘/home/avi/Downloads‘. Let’s handle them separately; just execute echo [arguments] to print each argument.

$ echo "1st Argument is : !^"
$ echo "2nd Argument is : !cp:2"

Note that the 1st argument can be printed as “!^” and the rest of the arguments can be printed by executing “![Name_of_Command]:[Number_of_argument]”.

In the above example, the first command was ‘cp‘ and the 2nd argument needed to be printed, hence “!cp:2”. If any command, say xyz, is run with 5 arguments and you need to get the 4th argument, you may use “!xyz:4”, and use it as you like. All the arguments can be accessed by “!*”.

Handle Two or More Arguments

5. Execute last command on the basis of keywords

We can execute the last executed command on the basis of keywords. We can understand it as follows:

$ ls /home > /dev/null						[Command 1]
$ ls -l /home/avi/Desktop > /dev/null		                [Command 2]	
$ ls -la /home/avi/Downloads > /dev/null	                [Command 3]
$ ls -lA /usr/bin > /dev/null				        [Command 4]

Here we have used the same command (ls) but with different switches and for different folders. Moreover, we have sent the output of each command to ‘/dev/null‘ as we are not going to deal with the output of the commands; this also keeps the console clean.

Now Execute last run command on the basis of keywords.

$ ! ls					[Command 1]
$ ! ls -l				[Command 2]	
$ ! ls -la				[Command 3]
$ ! ls -lA				[Command 4]

Check the output and you will be astonished that you are running already executed commands just by using ls keywords.

Run Commands Based on Keywords

6. The power of !! Operator

You can run/alter your last run command using (!!). It will call the last run command, with any alterations/tweaks, in the current command. Let me show you the scenario.

The other day I ran a one-liner script to get my private IP, so I ran:

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/

Then I suddenly figured out that I needed to redirect the output of the above script to a file ip.txt. So what should I do? Should I retype the whole command again and redirect the output to a file? Well, an easy solution is to use the UP navigation key and add '> ip.txt' to redirect the output to a file, as:

$ ip addr show | grep inet | grep -v 'inet6'| grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/ > ip.txt

Thanks to the life-saving UP navigation key here. Now consider the below condition, the next time I run the one-liner script below.

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

As soon as I ran the script, the bash prompt returned an error with the message “bash: ifconfig: command not found”. It was not difficult for me to guess that I had run this command as a normal user where it should be run as root.

So what’s the solution? It is tedious to log in as root and then type the whole command again! Also, the UP navigation key from the last example doesn’t come to the rescue here. So? We need to call “!!” without quotes, which will call the last command for that user.

$ su -c "!!" root

Here su is switch user (to root), -c is to run the specific command as that user, and the most important part, !!, will be replaced by the last run command, which is substituted here. Yes, you need to provide the root password.

The Power of !! Key
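On systems where your user is configured in sudoers, the same history expansion also works with sudo, since !! is expanded by the shell before sudo ever runs:

$ sudo !!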

I make use of !! mostly in the following scenarios.

1. When I run an apt-get command as a normal user, I usually get an error saying I don’t have permission to execute it.

$ apt-get upgrade && apt-get dist-upgrade

Oops, an error… don’t worry, execute the below command to get it to succeed.

$ su -c !!

Same way I do for,

$ service apache2 start
or
$ /etc/init.d/apache2 start
or
$ systemctl start apache2

Oops, the user is not authorized to carry out such a task, so I run:

$ su -c 'service apache2 start'
or
$ su -c '/etc/init.d/apache2 start'
or
$ su -c 'systemctl start apache2'

7. Run a command that affects all files except ![FILE_NAME]

The ! (logical NOT) operator can be used to run a command on all files/extensions except the one that follows '!'.

A. Remove all the files from a directory except the one named 2.txt.

$ rm !(2.txt)

B. Remove all file types from the folder except those with the ‘pdf‘ extension.

$ rm !(*.pdf)

8. Check whether a directory (say /home/avi/Tecmint) exists or not, and print whether it exists.

Here we will use '! -d' to validate whether the directory exists or not, followed by the logical AND operator (&&) to print that the directory does not exist, and the logical OR operator (||) to print that the directory is present.

The logic is: when the output of [ ! -d /home/avi/Tecmint ] is 0, it will execute what lies beyond the logical AND; otherwise, it will go to the logical OR (||) and execute what lies beyond the logical OR.

$ [ ! -d /home/avi/Tecmint ] && printf '\nno such /home/avi/Tecmint directory exist\n' || printf '\n/home/avi/Tecmint directory exist\n'

9. Check whether a directory exists or not; if not, exit the command.

Similar to the above condition, but here if the desired directory doesn’t exist it will exit the command.

$ [ ! -d /home/avi/Tecmint ] && exit

10. Create a directory (say /home/avi/Tecmint) in your home directory if it does not exist.

A general implementation in scripting languages: if the desired directory does not exist, it will create one.

[ ! -d /home/avi/Tecmint ] && mkdir /home/avi/Tecmint
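An equivalent shortcut, in case you prefer to skip the test altogether, is mkdir with the -p flag, which silently succeeds when the directory already exists:

$ mkdir -p /home/avi/Tecmint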

That’s all for now. If you know or come across any other use of '!' which is worth knowing, you may like to provide us with your suggestion in the feedback. Keep connected!

Source

MultiCD – Create a MultiBoot Linux Live USB

Having a single CD or USB drive with multiple operating systems available for install can be extremely useful in all kinds of scenarios. Whether for quickly testing or debugging something or simply reinstalling the operating system of your laptop or PC, this can save you lots of time.

Read Also: How to Install Linux on USB and Run It On Any PC

In this article, you will learn how to create multi-boot USB media by using a tool called MultiCD – a shell script designed to create a multiboot image from different Linux distributions (meaning it combines several boot CDs into one). That image can later be written to a CD/DVD or flash drive so you can use it to install the OS of your choice.

The advantages to making a CD with MultiCD script are:

  • No need to create multiple CDs for small distributions.
  • If you already have the ISO images, it’s not required to download them again.
  • When a new distribution is released, simply download it and run the script again to build a new multiboot image.

Read Also: 2 Ways to Create an ISO from a Bootable USB in Linux

Download MultiCD Script

MultiCD can be obtained by either using git command or by downloading the tar archive.

If you wish to use the git repository, use the following command.

# git clone git://github.com/IsaacSchemm/MultiCD.git

Create Multiboot Image

Before we start creating our multiboot image, we will need to download the images for the Linux distributions we like to use. You can see a list of all supported Linux distros on the MultiCD page.

Once you have downloaded the image files, you will have to place them in the same directory as the MultiCD script. For me that directory is MultiCD. For the purpose of this tutorial, I have prepared two ISO images:

CentOS-7 minimal
Ubuntu 18 desktop

Multi Linux Distros

It is important to note that the downloaded images should be renamed as listed in the Supported distros list, or a symlink should be created. Reviewing the supported images, you can see that the filename for Ubuntu can remain the same as the original file.

For CentOS however, it must be renamed to centos-boot.iso as shown.

# mv CentOS-7-x86_64-Minimal-1810.iso centos-boot.iso

Now to create the multiboot image, run the following command.

$ sudo ./multicd.sh

The script will look for your .iso files and attempt to create the new file.

Create Multiboot Linux Image

Once the process is complete, you will end up with a file called multicd.iso inside the build folder. You can now burn the new image to a CD or write it to a USB flash drive.
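For a USB flash drive, one common way to write it is with dd (a sketch; /dev/sdX is only a placeholder for your actual USB device, which dd will overwrite, and status=progress requires a reasonably recent coreutils):

# dd if=build/multicd.iso of=/dev/sdX bs=4M status=progress && sync    # replace sdX with your USB device

Next you can test it by trying to boot from the new media. The boot page should look like this: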

Test Multiboot Media

Choose the OS you wish to install and you will be redirected to the options for that OS.

Select Linux Distro to Install

Just like that, you can create a single bootable media with multiple Linux distros on it. The most important part is to always check the correct name for the iso image that you want to write as otherwise it might not be detected by multicd.sh.

Conclusion

MultiCD is no doubt one of those useful tools that can save you the time of burning CDs or creating multiple bootable flash drives. Personally, I have created my own USB flash drive with a few distros on it to keep at my desk. You never know when you will want to install another distro on your device.

Source

How to Repair and Defragment Linux System Partitions and Directories

People who use Linux often think that it doesn’t require defragmentation. This is a common misunderstanding across Linux users. Actually, the Linux operating system does support defragmentation. The point of the defragmentation is to improve I/O operations like allowing local videos to load faster or extracting archives significantly faster.

Defragment Linux System Partitions and Directories

The Linux ext2, ext3 and ext4 filesystems don’t need that much attention, but with time, after executing many, many read/writes, a filesystem may require optimization. Otherwise the hard disk might become slower and may affect the entire system.

In this tutorial I am going to show you a few different techniques to perform defragmentation on files. Before we start, we should mention what common filesystems like ext2, ext3 and ext4 do to prevent fragmentation. These filesystems include techniques to prevent that effect; for example, they reserve free block groups on the hard disk so that growing files can be stored contiguously.

Unfortunately the problem is not always solved by such mechanisms. While other operating systems may require expensive additional software to resolve such issues, Linux has some easy-to-install tools that can help you resolve these problems.

How to Check if a Filesystem Requires Defragmentation

Before we start I would like to point out that the operations below should only be run on HDDs and not on SSDs. Defragging your SSD drive will only increase its read/write count and therefore shorten its life. Instead, if you are using an SSD, you should use the TRIM function, which is not covered in this tutorial.

Let’s test if the system actually requires defragmentation. We can easily check this with a tool such as e2fsck. Before you use this tool on a partition on your system, it is recommended to unmount that partition first. This is not completely necessary, but it’s the safer way to go:

$ sudo umount <device file>

In my case I have /dev/sda1 mounted at /tmp:

Disk Partition Table Before

Keep in mind that in your case the partition table might be different so make sure to unmount the right partition. To unmount that partition you can use:

$ sudo umount /dev/sda1

Now let’s check if this partition requires defragmentation, with e2fsck. You will need to run the following command:

$ sudo e2fsck -fn /dev/sda1

The above command will perform a file system check. The -f option forces the check, even if the system seems clean. The -n option is used to open the filesystem in read-only and assume answer of "no" to all questions that may appear.

These options basically allow you to use e2fsck non-interactively. If everything is okay, you should see a result similar to the one shown in the screenshot below:

e2fsck Healthy Partition

Here is another example that shows errors on a system:

e2fsck With Errors

How to Repair Linux Filesystem Using e2fsck

If errors appear, you can attempt a repair of the filesystem with e2fsck with the “-p” option. Note that in order to run the command below, the partition will need to be unmounted:

$ sudo e2fsck -p <device file>

The “-p” option attempts automatic repair on the file system for problems that can be safely fixed without human intervention. If a problem is discovered that may require the system administrator to take additional corrective action, e2fsck will print a description of the problem and will exit with code 4, which means “File system errors left uncorrected”. Depending on the issue that has been found, different actions might be required.

If the issue appears on a partition that cannot be unmounted, you can use another tool called e4defrag. It comes pre-installed on many Linux distros, but if you don’t have it on yours, you can install it with:

$ sudo apt-get install e2fsprogs         [On Debian and Derivatives]
# yum install e2fsprogs                  [On CentOS based systems]
# dnf install e2fsprogs                  [On Fedora 22+ versions] 

How to Defragment Linux Partitions

Now it’s time to defragment Linux partitions using following command.

$ sudo e4defrag <location>
or
$ sudo e4defrag <device>
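Before defragmenting, e4defrag can also report how fragmented a target currently is, without changing anything, via its -c option:

$ sudo e4defrag -c /home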

How to Defragment Linux Directory

For example, if you wish to defragment a single directory or device, you can use:

$ sudo e4defrag /home/user/directory/
# sudo e4defrag /dev/sda5

How to Defragment All Linux Partitions

If you prefer to defragment your entire system, the safe way of doing this is:

$ sudo e4defrag /

Keep in mind that this process may take some time to be completed.

Conclusion

Defragmentation is an operation that you will rarely need to run in Linux. It’s meant for power users who know what exactly they are doing and is not recommended for Linux newbies. The point of the whole action is to have your filesystem optimized so that new read/write operations are performed more efficiently.

Source

3 Useful GUI and Terminal Based Linux Disk Scanning Tools

There are mainly two reasons for scanning a computer hard disk: one is to examine it for filesystem inconsistencies or errors that can result from persistent system crashes, improper closure of critical system software and, more significantly, destructive programs (such as malware, viruses, etc).

And the other is to analyze its physical condition, where we can check a hard disk for bad sectors resulting from physical damage on the disk surface or failed memory transistors.

Suggested Read: How to Repair and Defragment Linux Partitions

In this article, we will review a mix of GUI and terminal based disk scanning utilities for Linux.

In case you notice any unusual behavior from a computer hard disk or a particular partition, one of the first things you can investigate is filesystem inconsistencies or errors, and there is no better utility for performing this than fsck.

1. fsck – Filesystem Consistency Check

fsck is a system utility used to check and optionally repair a Linux filesystem. It is a front-end for several filesystem checkers.

Warning: Try out fsck commands on test Linux servers only, unless you know what you’re doing.

Always unmount a partition before you run fsck on it.

$ sudo umount /dev/sdc1
$ sudo fsck -Vt vfat /dev/sdc1

In the above command, the switches used are:

  1. -t – specifies the filesystem type.
  2. -V – enables verbose mode.

You can find detailed usage instructions in the fsck man page:

$ man fsck

Once you have performed filesystem inconsistency tests, you proceed to carry out physical condition assessments.

2. badblocks

badblocks is a utility for scanning bad blocks or bad sectors in hard disks. Assuming you detect any bad blocks on your hard disk, you can use it together with fsck or e2fsck to instruct the kernel not to use the bad blocks.

For more information on how to check bad blocks using the badblocks utility, read: How to Check Bad Sectors or Bad Blocks on Hard Disk in Linux.

3. S.M.A.R.T System Utilities

S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) is a system built into nearly all modern ATA/SATA and SCSI/SAS hard disks as well as solid-state disks.

It collects in-depth information about a supported hard disk and you can get that data using the utilities below.

i. Smartctl

smartctl is one of the two utilities under the smartmontools package. It is a command line utility which controls and monitors the S.M.A.R.T system.

To install smartmontools package, run the applicable command below for your distro:

$ sudo apt-get install smartmontools   #Debian/Ubuntu systems 
$ sudo yum install smartmontools       #RHEL/CentOS systems

The following is an example of a smartctl command for reporting hard disk partition health, where the -H option shows the general health condition after a self-test:

$ sudo smartctl -H /dev/sda6
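Assuming the disk supports S.M.A.R.T self-tests, you can also trigger a short self-test and then read back the self-test log with smartctl:

$ sudo smartctl -t short /dev/sda
$ sudo smartctl -l selftest /dev/sda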

Look through the smartctl man page for more usage guidelines:

$ man smartctl 

There is a GUI front-end for smartctl called gsmartcontrol which can be installed as follows:

$ sudo apt-get install gsmartcontrol  #Debian/Ubuntu systems 
$ sudo yum install gsmartcontrol       #RHEL/CentOS systems

GSmart Control – Linux Disk Scanning Tool

ii. Gnome Disk Utility (or Disks)

Gnome disk utility offers a GUI for performing all partition management related tasks such as creating, deleting and mounting partitions, and beyond. It comes pre-installed on the majority of mainstream Linux systems such as Ubuntu, Fedora, Linux Mint and others.

To use it on Ubuntu, open the Dash and search for Disks; on Linux Mint, open the Menu and search for Disks; and on Fedora, click on Activities and type Disks.

Gnome Disk Utility for Linux Disk Scanning

More importantly, it can also provide S.M.A.R.T data and run self-tests, as in the following interface.

Gnome Disk Utility for Linux Disk Scanning

That’s it! In this article, we reviewed hard disk scanning utilities for Linux operating system. You can share with us any utilities/tools for the same purpose, that are not mentioned in the list above or ask any related questions all in the comments.

Source

10 Best Open Source Forum Software for Linux

A forum is a discussion platform where related ideas and views on a particular issue can be exchanged. You can setup a forum for your site or blog, where your team, customers, fans, patrons, audience, users, advocates, supporters, or friends can hold public or private discussions, as a whole or in smaller groups.

If you are planning to launch a forum, and you can’t build your own software from scratch, you can opt for any of the existing forum applications out there. Some forum applications allow you to setup only a single discussion site on a single installation, while others support multiple-forums for a single installation instance.

In this article, we will review the 10 best open source forum software for Linux systems. By the end of this article, you will know exactly which open source forum software best suits your needs.


1. Discourse – Discussion Platform

Discourse is a free open source, simple, modern, incredibly powerful and feature-rich community discussion software.

Discourse Forum Software

It works as a mailing list, discussion forum, long-form chat room, and so much more. Its front-end is built using JavaScript and is powered by the Ember.js framework; the server side is developed using Ruby on Rails, backed by a PostgreSQL database and a Redis cache.

It is responsive (auto-switches to a mobile layout for small screens), it supports: dynamic notifications, community moderation, social login, spam blocking, reply via email, emojis and badges. It also comes with a trust system and so much more. Above all, Discourse is simple, modern, awesome and fun, and has a one-click upgrade feature, once installed.

2. phpBB – Bulletin Board Software

phpBB is a free open source, powerful, feature-rich and highly extensible forum or bulletin board software. There are numerous extensions and a styles database (with hundreds of style and image packages) for you to enhance its core functionality and to customize your board respectively.

phpBB Bulletin Board Forum

It is secure and comes with various tools to protect your forum from unwanted users and spam. It supports: a search system, private messaging, multiple methods of notifying users of forum activities, conversation moderators, and user-groups. Importantly, it has an advanced caching system for increased performance. You can integrate it with other applications via multiple plugins and so much more.

3. Vanilla – Modern Community Forum

Vanilla is an open source, fully-featured, intuitive, robust, cloud-based and multi-lingual community forum software. It is easy to use, giving users a modern forum experience, and allows users to post questions and polls; it has an advanced editor for formatting posts with HTML, markdown, or BBCode, and supports @ mentions.

Vanilla Community Forum

It also supports user-profiles, notifications, auto-save, avatars, private messaging, real-time preview, a powerful search facility, user-groups, single sign on and so much more. Vanilla can be integrated with social networks for easy sharing, login and more. It comes with numerous plugins and themes to enhance its primary features and customize its look and feel.

4. SimpleMachinesForum (SMF)

SimpleMachinesForum is a free, open source, simple, beautiful and powerful forum software. It is available in over 45 different languages. SMF is easy to use and highly customizable, with a multitude of powerful and effective features. It comes with high quality and reliable support.

Simple Machines Forum

SMF is highly customizable; it has many extensions/packages (under various categories such as security, socialization, administration, permissions, posting, theme enhancements and more) to modify its core functionality, add or remove features, and lot more.

5. bbPress – Forum Software

bbPress is a free open source, simple, lightweight, fast and secure bulletin board software built in a WordPress-fashion. It is easy to install, and configure, fully integrated and supports setting up multiple forums on one site installation.

bbPress Forum Software

It is highly extensible and customizable, supports several plugins. It also supports RSS feeds and offers spam blocking functionality for additional security.

6. MyBB – Powerful Forum Software

MyBB is a free open source, simple, easy-to-use, intuitive yet powerful, and extremely efficient forum software. It is a discussion-oriented application that supports user profiles, private messages, reputation, warnings, calendars and events, user promotion, moderation, and more.

MyBB Community Forum

It ships with a number of plugins, templates and themes to extend its core functionality and customize its default look and feel, allowing you to set up a fully customized and effective online community forum with ease.

7. miniBB – Community Discussion Forum

miniBB is a free open source, standalone, lightweight, fast, and highly customizable software for building a web forum. It is suitable and effective for setting up a simple and stable community discussion platform, especially for novices. It allows for dynamic and content-rich discussions, and you can enable it to be responsive via the mobile template.

MiniBB Community Discussion Forum

It can easily be integrated with your website, allowing you to change its layout to the look of your website. In addition, miniBB offers facilities for you to synchronize with an existing membership system. Importantly, it supports guest posts and quick moderation.

8. Phorum – Forum Software

Phorum is a free open source, simple, highly-customizable, and easy-to-use PHP message board software. It has a very flexible hook and module system for you to customize your web community discussion platform.

Phorum Forum Software

You can easily change its default appearance using HTML templates with simple, easy-to-understand text commands built in.

9. FluxBB – Forum Software

FluxBB is fast, light, easy-to-use, stable, secure, user-friendly and multi-lingual PHP forum software. It comes with a well organized administration interface and admin panel plugins, supports a flexible permission system, and it is XHTML compliant.

FluxBB Forum Software

It supports user profiles, avatars, forum categories, announcements, topic search, post previews, RSS/Atom feeds, user-selectable CSS styles and languages, and so much more.

10. PunBB – Bulletin Board Software

PunBB is a free open source, lightweight and fast PHP bulletin board software. It has a simple layout and design, like most forum software listed above, it supports private messaging, polls, linking to off-site avatars, advanced text formatting commands, file attachments, multi-forums and so much more.

PunBB Bulletin Board Forum

That’s all for now! In this article, we reviewed the 10 best open source forum software for Linux. If you are interested in setting up a forum for your site or blog, by now you should know which open source software to use. If your favorite software is missing from the list, let us know via the feedback form below.

Source
