Display Command Output or File Contents in Column Format

Are you fed up with viewing congested command output or file contents on the terminal? This short article will demonstrate how to display command output or file contents in a much clearer, “columnated” format.

We can use the column utility to transform standard input or file contents into a tabular form of multiple columns, for much clearer output.

Read Also: 12 Useful Commands For Filtering Text for Effective File Operations in Linux

To understand this more clearly, we have created the following file, “tecmint-authors.txt”, which contains a list of the top 10 authors’ names, the number of articles each has written, and the number of comments received on those articles so far.

To demonstrate this, run the cat command below to view the tecmint-authors.txt file.

$ cat tecmint-authors.txt
Sample Output
pos|author|articles|comments
1|ravisaive|431|9785
2|aaronkili|369|7894
3|avishek|194|2349
4|cezarmatei|172|3256
5|gacanepa|165|2378
6|marintodorov|44|144
7|babin lonston|40|457
8|hannyhelal|30|367
9|gunjit kher|20|156
10|jesseafolabi|12|89

Using the column command, we can display much clearer output as follows, where -t tells column to determine the number of columns the input contains and create a table, and -s specifies the delimiter character.

$ cat tecmint-authors.txt  | column -t -s "|"
Sample Output
pos  author         articles  comments
1    ravisaive      431       9785
2    aaronkili      369       7894
3    avishek        194       2349
4    cezarmatei     172       3256
5    gacanepa       165       2378
6    marintodorov   44        144
7    babin lonston  40        457
8    hannyhelal     30        367
9    gunjit kher    20        156
10   jesseafolabi   12        89

By default, rows are filled before columns; to fill columns before filling rows, use the -x switch. To instruct the column command to consider empty lines (which are ignored by default), include the -e flag.
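To see the layout difference for yourself, you can feed a simple number sequence to column with and without -x (the exact fill order depends on the column implementation shipped with your distribution):

$ seq 1 12 | column
$ seq 1 12 | column -x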

Here is another practical example; run the two commands below and compare the output to further appreciate the magic column can do.

$ mount
$ mount | column -t
Sample Output
sysfs        on  /sys                             type  sysfs            (rw,nosuid,nodev,noexec,relatime)
proc         on  /proc                            type  proc             (rw,nosuid,nodev,noexec,relatime)
udev         on  /dev                             type  devtmpfs         (rw,nosuid,relatime,size=4013172k,nr_inodes=1003293,mode=755)
devpts       on  /dev/pts                         type  devpts           (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs        on  /run                             type  tmpfs            (rw,nosuid,noexec,relatime,size=806904k,mode=755)
/dev/sda10   on  /                                type  ext4             (rw,relatime,errors=remount-ro,data=ordered)
securityfs   on  /sys/kernel/security             type  securityfs       (rw,nosuid,nodev,noexec,relatime)
tmpfs        on  /dev/shm                         type  tmpfs            (rw,nosuid,nodev)
tmpfs        on  /run/lock                        type  tmpfs            (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs        on  /sys/fs/cgroup                   type  tmpfs            (rw,mode=755)
cgroup       on  /sys/fs/cgroup/systemd           type  cgroup           (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/
....

To save the nicely formatted output in a file, use output redirection as shown.

$ mount | column -t >mount.out

For more information, see the column man page:

$ man column 

You might also like to read the following related articles.

  1. How to Use Awk and Regular Expressions to Filter Text or String in Files
  2. How to Find and Sort Files Based on Modification Date and Time in Linux
  3. 11 Advanced Linux ‘Grep’ Commands on Character Classes and Bracket Expressions

If you have any questions, use the comment form below to write to us. You can also share with us any useful command-line tips and tricks in Linux.

Source

Pscp – Transfer/Copy Files to Multiple Linux Servers Using Single Shell

The Pscp utility allows you to transfer/copy files to multiple remote Linux servers from a single terminal with one command. This tool is part of Pssh (Parallel SSH Tools), which provides parallel versions of OpenSSH and other similar tools, such as:

  1. pscp – utility for copying files in parallel to a number of hosts.
  2. prsync – utility for efficiently copying files to multiple hosts in parallel.
  3. pnuke – kills processes on multiple remote hosts in parallel.
  4. pslurp – copies files from multiple remote hosts to a central host in parallel.

When working in a network environment with multiple hosts, a system administrator may find the tools listed above very useful.

Pscp – Copy Files to Multiple Linux Servers

In this article, we shall look at some useful examples of Pscp utility to transfer/copy files to multiple Linux hosts on a network.

To use the pscp tool, you need to install the PSSH utility on your Linux system; for installation of PSSH you can read this article.

  1. How to Install Pssh Tool to Execute Commands on Multiple Linux Servers

Almost all the options used with these tools are the same, except for a few that relate to the specific functionality of a given utility.

How to Use Pscp to Transfer/Copy Files to Multiple Linux Servers

To use pscp, you need to create a separate file that lists the IP addresses and SSH port numbers of the Linux servers you need to connect to.

Copy Files to Multiple Linux Servers

Let’s create a new file called “myscphosts.txt” and add the list of Linux host IP addresses and SSH port numbers (default 22) as shown.

192.168.0.3:22
192.168.0.9:22

Once you’ve added the hosts to the file, it’s time to copy a file from the local machine to the /tmp directory on multiple Linux hosts with the help of the following command.

# pscp -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
OR
# pscp.pssh -h myscphosts.txt -l tecmint -Av wine-1.7.55.tar.bz2 /tmp/
Sample Output
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22

Explanation of the options used in the above command.

  1. -h – read the hosts list from the given file and location.
  2. -l – the default username to use on all hosts that do not define a specific user.
  3. -A – tells pscp to ask for a password and pass it on to ssh.
  4. -v – run pscp in verbose mode.
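As a quick sanity check (assuming the full pssh suite is installed alongside pscp; on some distributions the binary is called parallel-ssh), you could list the copied file on every host over the same host list, using the -i flag to print each host's output inline:

# pssh -h myscphosts.txt -l tecmint -A -i "ls -l /tmp/wine-1.7.55.tar.bz2"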

Copy Directories to Multiple Linux Servers

If you want to copy an entire directory, use the -r option, which will copy entire directories recursively as shown.

# pscp -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
OR
# pscp.pssh -h myscphosts.txt -l tecmint -Av -r Android\ Games/ /tmp/
Sample Output
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password: 
[1] 17:48:25 [SUCCESS] 192.168.0.3:22
[2] 17:48:35 [SUCCESS] 192.168.0.9:22

You can view the manual page for pscp, or run the pscp --help command, for further help.

Conclusion

This tool is worth trying if you manage multiple Linux systems, especially if you already have SSH key-based passwordless login set up.

Source

Learn The Basics of How Linux I/O (Input/Output) Redirection Works

One of the most important and interesting topics under Linux administration is I/O redirection. This feature of the command line enables you to redirect the input and/or output of commands from and/or to files, or join multiple commands together using pipes to form what is known as a “command pipeline”.

All the commands that we run fundamentally produce two kinds of output:

  1. the command result – data the program is designed to produce, and
  2. the program status and error messages, which inform the user about the program execution details.

In Linux and other Unix-like systems, there are three default files named below which are also identified by the shell using file descriptor numbers:

  1. stdin or 0 – it’s connected to the keyboard, most programs read input from this file.
  2. stdout or 1 – it’s attached to the screen, and all programs send their results to this file and
  3. stderr or 2 – programs send status/error messages to this file which is also attached to the screen.

Therefore, I/O redirection allows you to alter the input source of a command as well as where its output and error messages are sent to. And this is made possible by the “<” and “>” redirection operators.

How To Redirect Standard Output to File in Linux

You can redirect standard output as in the example below; here, we want to store the output of the top command for later inspection:

$ top -bn 5 >top.log

Where the flags:

  1. -b – enables top to run in batch mode, so that you can redirect its output to a file or another command.
  2. -n – specifies the number of iterations before the command terminates.

You can view the contents of top.log file using cat command as follows:

$ cat top.log

To append the output of a command, use the “>>” operator.

For instance, to append the output of the top command above to the top.log file, especially within a script (or on the command line), enter the line below:

$ top -bn 5 >>top.log

Note: Using the file descriptor number, the output redirect command above is the same as:

$ top -bn 5 1>top.log

How To Redirect Standard Error to File in Linux

To redirect the standard error of a command, you need to explicitly specify the file descriptor number, 2, so that the shell understands what you are trying to do.

For example the ls command below will produce an error when executed by a normal system user without root privileges:

$ ls -l /root/

You can redirect the standard error to a file as below:

$ ls -l /root/ 2>ls-error.log
$ cat ls-error.log 

Redirect Standard Error to File

In order to append the standard error, use the command below:

$ ls -l /root/ 2>>ls-error.log

How To Redirect Standard Output/Error To One File

It is also possible to capture all the output of a command (both standard output and standard error) into a single file. This can be done in two possible ways by specifying the file descriptor numbers:

1. The first is a relatively old method which works as follows:

$ ls -l /root/ >ls-error.log 2>&1

The command above means the shell will first send the output of the ls command to the file ls-error.log (using >ls-error.log), and then 2>&1 redirects file descriptor 2 (standard error) to the same place as file descriptor 1 (standard output), which at this point refers to ls-error.log. This implies that standard error is sent to the same file as standard output.

2. The second and direct method is:

$ ls -l /root/ &>ls-error.log

You can as well append standard output and standard error to a single file like so:

$ ls -l /root/ &>>ls-error.log
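Using the file descriptor number, the same combined append can also be written in the older style, where standard output is appended to the file and standard error is then pointed at the same place:

$ ls -l /root/ >>ls-error.log 2>&1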

How To Redirect Standard Input from a File in Linux

Most if not all commands get their input from standard input, and by default standard input is attached to the keyboard.

To redirect standard input from a file other than the keyboard, use the “<” operator as below:

$ cat <domains.list 

Redirect Standard Input to File

How To Redirect Standard Input/Output to File

You can perform standard input and standard output redirection at the same time, using the sort command as below:

$ sort <domains.list >sort.output

How to Use I/O Redirection Using Pipes

To redirect the output of one command as the input of another, you can use pipes; this is a powerful means of building useful command lines for complex operations.

For example, the command below will list the top five recently modified files.

$ ls -lt | head -n 5 

Here, the options:

  1. -l – enables long listing format
  2. -t – sorts by modification time, with the newest files shown first
  3. -n – specifies the number of lines for head to print

Important Commands for Building Pipelines

Here, we will briefly review two important commands for building command pipelines and they are:

xargs, which is used to build and execute command lines from standard input. Below is an example of a pipeline that uses xargs to copy a file into multiple directories in Linux:

$ echo /home/aaronkilik/test/ /home/aaronkilik/tmp | xargs -n 1 cp -v /home/aaronkilik/bin/sys_info.sh

Copy Files to Multiple Directories

And the options:

  1. -n 1 – instructs xargs to use at most one argument (here, one directory) per cp command line
  2. cp – copies the file
  3. -v – displays the progress of the cp command.

For more usage options and info, read through the xargs man page:

$ man xargs 

The tee command reads from standard input and writes to standard output and to files. We can demonstrate how tee works as follows:

$ echo "Testing how tee command works" | tee file1 

tee Command Example
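By default tee overwrites the files it is given; to append to them instead, add the -a option, for example:

$ echo "Testing how tee command works" | tee -a file1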

File or text filters are commonly used with pipes for effective Linux file operations, to process information in powerful ways such as restructuring command output (which can be vital for generating useful Linux reports), modifying text in files, plus several other Linux system administration tasks.

To learn more about Linux filters and pipes, read the article Find Top 10 IP Addresses Accessing Apache Server, which shows a useful example of using filters and pipes.

In this article, we explained the fundamentals of I/O redirection in Linux. Remember to share your thoughts via the feedback section below.

Source

How to Split Large ‘tar’ Archive into Multiple Files of Certain Size

Are you worried about transferring or uploading large files over a network? Then worry no more, because you can move your files in bits, to cope with slow network speeds, by splitting them into blocks of a given size.

In this how-to guide, we shall briefly explore the creation of archive files and splitting them into blocks of a selected size. We shall use tar, one of the most popular archiving utilities on Linux and also take advantage of the split utility to help us break our archive files into small bits.

Create and Split tar into Multiple Files or Parts in Linux

Before we move further, let us take note of how these utilities are used; the general syntax of the tar and split commands is as follows:

# tar options archive-name files 
# split options file "prefix"

Let us now delve into a few examples to illustrate the main concept of this article.

Example 1: We can first of all create an archive file as follows:

$ tar -cvjf home.tar.bz2 /home/aaronkilik/Documents/* 

Create Tar Archive File

To confirm that our archive file has been created and also check its size, we can use the ls command:

$ ls -lh home.tar.bz2

Then using the split utility, we can break the home.tar.bz2 archive file into small blocks each of size 10MB as follows:

$ split -b 10M home.tar.bz2 "home.tar.bz2.part"
$ ls -lh home.tar.bz2.parta*

Split Tar File into Parts in Linux

As you can see from the output of the commands above, the tar archive file has been split into four parts.

Note: In the split command above, the option -b is used to specify the size of each block and the "home.tar.bz2.part" is the prefix in the name of each block file created after splitting.

Example 2: Similar to the case above, here, we can create an archive file of a Linux Mint ISO image file.

$ tar -cvzf linux-mint-18.tar.gz linuxmint-18-cinnamon-64bit.iso 

Then follow the same steps in example 1 above to split the archive file into small bits of size 200MB.

$ ls -lh linux-mint-18.tar.gz 
$ split -b 200M linux-mint-18.tar.gz "ISO-archive.part"
$ ls -lh ISO-archive.parta*

Split Tar Archive File to Fixed Sizes

Example 3: In this instance, we can use a pipe to connect the output of the tar command to split as follows:

$ tar -cvzf - wget/* | split -b 150M - "downloads-part"

Create and Split Tar Archive File into Parts

Confirm the files:

$ ls -lh downloads-parta*

Check Parts of Tar Files

In this last example, as you may have noticed, we do not have to specify an archive file name; we simply use a - sign.

How to Join Tar Files After Splitting

After successfully splitting tar files or any large file in Linux, you can join the files using the cat command. Employing cat is the most efficient and reliable method of performing a joining operation.

To join back all the blocks or tar files, we issue the command below:

# cat home.tar.bz2.parta* >backup.tar.gz.joined

We can see that after running the cat command, it combines all the small blocks we created earlier back into the original tar archive file of the same size.
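If you want to confirm that the joined archive is intact before removing the parts, you can list its contents; a reasonably recent GNU tar detects the bzip2 compression automatically when reading:

# tar -tvf backup.tar.gz.joined | head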

Conclusion

The whole idea is simple: as we have illustrated above, you simply need to know and understand how to use the various options of the tar and split utilities.

You can refer to their manual pages to learn about other options and perform more complex operations, or you can go through the following article to learn more about the tar command.

Don’t Miss: 18 Useful ‘tar’ Command Examples

For any questions or further tips, you can share your thoughts via the comment section below.

Source

dutree – A CLI Tool to Analyze Disk Usage in Coloured Output

dutree is a free, open-source, fast command-line tool for analyzing disk usage, written in the Rust programming language. It is derived from the durep (disk usage reporter) and tree (list directory contents in a tree-like format) command-line tools; dutree therefore reports disk usage in a tree-like format.

Read Also: Agedu – A Useful Tool for Tracking Down Wasted Disk Space in Linux

It displays coloured output, depending on the values configured in the GNU LS_COLORS environment variable. This environment variable enables setting the colours of files based on extension, permissions, as well as file type.

dutree Features:

  • Show the file system tree.
  • Supports aggregating small files.
  • Allows comparing different directories.
  • Supports excluding files or directories.

How to Install dutree in Linux Systems

To install dutree in Linux distributions, you must have the Rust programming language installed on your system, as shown.

$ sudo curl https://sh.rustup.rs -sSf | sh

Once Rust is installed, you can run the following command to install dutree in Linux distributions as shown.

$ cargo install --git https://github.com/nachoparker/dutree.git

After installation, dutree colours its output according to the LS_COLORS environment variable, so it uses the same colours that the ls --color command on your distro is configured with.

$ ls --color

The simplest way of running dutree is without any arguments; this way it shows the filesystem tree.

$ dutree

Linux Filesystem Disk Usage

To display real disk usage instead of file size, use the -u flag.

$ dutree -u 

Show Linux Disk Usage

Show Directories in Depth

You can show directories up to a given depth (default 1), using the -d flag. The command below will show directories up to a depth of 3, under the current working directory.

For example, if the current working directory is ~/, it will display the size of ~/*/*/* as shown in the following sample screenshot.

$ dutree -d 3

Show Directories in Depth Disk Usage

Exclude Files or Directories in Output

To exclude files or directories matching a given name, use the -x flag.

$ dutree -x CentOS-7.0-1406-x86_64-DVD.iso 

Show Disk Usage with Exclude Filename

You can also get a quick local overview by skipping directories, using the -f option, like so.

$ dutree -f

Quick Overview by Skipping Directories

A full summary/overview can be generated using the -s flag as shown.

$ dutree -s

Linux Disk Usage Summary

Aggregate Small Files

It is possible to aggregate files smaller than a certain size; the default is 1M, as shown.

$ dutree -a 

Aggregate Small Files
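The help output shown further below suggests that -a also accepts an explicit threshold (N followed by K, M or G), so to aggregate everything smaller than, say, 10 MiB you could presumably run:

$ dutree -a 10M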

Exclude Hidden Files

The -H switch allows for excluding hidden files in the output.

$ dutree -H

The -b option is used to print sizes in bytes, instead of kilobytes (default).

$ dutree -b

To turn off colors, and only display ASCII characters, use the -A flag like so.

$ dutree -A

You can view the dutree help message using the -h option.

$ dutree -h

Usage: dutree [options]  [..]
 
Options:
    -d, --depth [DEPTH] show directories up to depth N (def 1)
    -a, --aggr [N[KMG]] aggregate smaller than N B/KiB/MiB/GiB (def 1M)
    -s, --summary       equivalent to -da, or -d1 -a1M
    -u, --usage         report real disk usage instead of file size
    -b, --bytes         print sizes in bytes
    -x, --exclude NAME  exclude matching files or directories
    -H, --no-hidden     exclude hidden files
    -A, --ascii         ASCII characters only, no colors
    -h, --help          show help
    -v, --version       print version number

dutree Github Repository: https://github.com/nachoparker/dutree

dutree is a simple yet powerful command-line tool to show file size and analyze disk usage in tree-like format, on Linux systems. Use the comment form below to share your thoughts or queries about it, with us.

Source

How to Clone a Partition or Hard drive in Linux

There are many reasons why you may want to clone a Linux partition or even hard drive, most of which are related to creating backups of your data. There are multiple ways you can achieve this in Linux by using some external tools such as partimage or Clonezilla.

However, in this tutorial we are going to review Linux disk cloning with a tool called dd, which is most commonly used to convert or copy files, and comes pre-installed in most Linux distributions.

How to Clone Linux Partition

With the dd command you can copy an entire hard drive or just a Linux partition. Let's start with cloning one of our partitions. In my case I have the following drives: /dev/sdb and /dev/sdc. I will clone /dev/sdb1 to /dev/sdc1.

Read Also: How to Clone Linux Partitions Using ‘cat’ Command

First, list these partitions using the fdisk command as shown.

# fdisk -l /dev/sdb1 /dev/sdc1

List Linux Partitions

Now clone the partition /dev/sdb1 to /dev/sdc1 using the following dd command.

# dd if=/dev/sdb1  of=/dev/sdc1 

The above command tells dd to use /dev/sdb1 as input file and write it to output file /dev/sdc1.

Clone Linux Partition with dd Command

After cloning Linux partition, you can then check both partitions with:

# fdisk -l /dev/sdb1 /dev/sdc1

Verify Linux Partition Cloning

How to Clone Linux Hard Drive

Cloning a Linux hard drive is similar to cloning a partition. However, instead of specifying the partition, you just use the entire drive. Note that in this case it is recommended that the target hard drive be the same size as (or bigger than) the source drive.

# dd if=/dev/sdb of=/dev/sdc

Clone Hard Drive in Linux

This should have copied the drive /dev/sdb, with its partitions, to the target hard drive /dev/sdc. You can verify the changes by listing both drives with the fdisk command.

# fdisk -l /dev/sdb /dev/sdc

Verify Linux Hard Drive Cloning
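A side note: dd prints nothing while it works. If your system has GNU coreutils 8.24 or later, you can add status=progress (and a larger block size) to watch the copy as it runs, for example:

# dd if=/dev/sdb of=/dev/sdc bs=64K status=progress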

How to Backup MBR in Linux

The dd command can also be used to back up your MBR, which is located in the first sector of the device, before the first partition. So if you want to create a backup of your MBR, simply run:

# dd if=/dev/sda of=/backup/mbr.img bs=512 count=1

The above command tells dd to copy /dev/sda to /backup/mbr.img with a block size of 512 bytes, and the count option tells it to copy only 1 block. In other words, you tell dd to copy the first 512 bytes from /dev/sda to the file you have provided.

Backup MBR in Linux
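Should you ever need to restore that boot sector, the same command works with the input and output swapped (double-check that of= points at the intended disk before running it):

# dd if=/backup/mbr.img of=/dev/sda bs=512 count=1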

That’s all! The dd command is a powerful Linux tool that should be used with caution when copying or cloning Linux partitions or drives.

Source

12 Practical Examples of Linux grep Command

Have you ever been confronted with the task of looking for a particular string or pattern in a file, yet have no idea where to start looking? Well then, here is grep to the rescue!

12 Grep Command Examples

grep is a powerful file pattern searcher that comes equipped on every distribution of Linux. If, for whatever reason, it is not installed on your system, you can easily install it via your package manager (apt-get on Debian/Ubuntu and yum on RHEL/CentOS/Fedora).

$ sudo apt-get install grep         #Debian/Ubuntu
$ sudo yum install grep             #RHEL/CentOS/Fedora

I have found that the easiest way to get your feet wet with grep is to just dive right in and use some real world examples.

1. Search and Find Files

Let’s say that you have just installed a fresh copy of the new Ubuntu on your machine, and that you are going to give Python scripting a shot. You have been scouring the web looking for tutorials, but you see that there are two different versions of Python in use, and you don’t know which one was installed on your system by the Ubuntu installer, or if it installed any modules. Simply run this command:

# dpkg -l | grep -i python
Sample Output
ii  python2.7                        2.7.3-0ubuntu3.4                    Interactive high-level object-oriented language (version 2.7)
ii  python2.7-minimal                2.7.3-0ubuntu3.4                    Minimal subset of the Python language (version 2.7)
ii  python-openssl                   0.12-1ubuntu2.1                     Python wrapper around the OpenSSL library
ii  python-pam                       0.4.2-12.2ubuntu4                   A Python interface to the PAM library

First, we ran dpkg -l, which lists installed *.deb packages on your system. Second, we piped that output to grep -i python, which simply states “go to grep and filter out and return everything with ‘python’ in it.” The -i option is there to ignore case, as grep is case-sensitive. Using the -i option is a good habit to get into, unless of course you are trying to nail down a more specific search.

2. Search and Filter Files

grep can also be used to search and filter within individual files or across multiple files. Let’s take this scenario:

You are having some trouble with your Apache Web Server, and you have reached out to one of the many awesome forums on the net asking for some help. The kind soul who replies to you has asked you to post the contents of your /etc/apache2/sites-available/default-ssl file. Wouldn’t it be easier for you, the guy helping you, and everyone reading it, if you could remove all of the commented lines? Well you can! Just run this:

# grep -v "#" /etc/apache2/sites-available/default-ssl

The -v option tells grep to invert its output, meaning that instead of printing matching lines, do the opposite and print all of the lines that don’t match the expression, in this case, the # commented lines.
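If you also want to drop the blank lines that are typically left behind, you could chain a second inverted match on an empty-line pattern, something like:

# grep -v "#" /etc/apache2/sites-available/default-ssl | grep -v "^$"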

3. Find all .mp3 Files Only

grep can be very useful for filtering stdout. For example, let’s say that you have an entire folder full of music files in a bunch of different formats. You want to find all of the *.mp3 files from the artist JayZ, but you don’t want any of the remixed tracks. Using a find command with a couple of grep pipes will do the trick:

# find . -name "*.mp3" | grep -i JayZ | grep -vi "remix"

In this example, we are using find to print all of the files with a *.mp3 extension, piping it to grep -i to filter out and print all files whose name contains “JayZ”, and then another pipe to grep -vi, which filters out and does not print any filenames containing the string “remix” (in any case).

Suggested Read: 35 Practical Examples of Linux Find Command

4. Display Number of Lines Before or After Search String

Another couple of options are the -A and -B switches, which display the matched line and the number of lines that come either after or before the search string. While the man page gives a more detailed explanation, I find it easiest to remember the options as -A = after, and -B = before:

# ifconfig | grep -A 4 eth0
# ifconfig | grep -B 2 UP

5. Prints Number of Lines Around Match

grep’s -C option is similar, but instead of printing the lines that come either before or after the string, it prints the lines in both directions:

# ifconfig | grep -C 2 lo

6. Count Number of Matches

Similar to piping grep output to a word count (the wc program), grep’s built-in -c option can count the matching lines for you:

# ifconfig | grep -c inet6

7. Search Files by Given String

The -n option for grep is very useful when debugging files during compile errors. It displays the line number in the file of the given search string:

# grep -n "main" setup.py

8. Search a string Recursively in all Directories

If you would like to search for a string in the current directory along with all of the subdirectories, you can specify the -r option to search recursively:

# grep -r "function" *

9. Search for the Entire Word

Passing the -w option to grep makes it search for the entire word rather than a pattern that merely appears inside a string. For example, using:

# ifconfig | grep -w "RUNNING"

Will print out the line containing the pattern in quotes. On the other hand, if you try:

# ifconfig | grep -w "RUN"

Nothing will be returned, as we are not searching for a pattern but for an entire word, and the word “RUN” does not appear on its own.

10. Search a string in Gzipped Files

Deserving some mention are grep’s derivatives. The first is zgrep, which, similar to zcat, is for use on gzipped files. It takes the same options as grep and is used in the same way:

# zgrep -i error /var/log/syslog.2.gz

11. Match Regular Expression in Files

egrep is another derivative, which stands for “Extended Global Regular Expression”. It recognizes additional expression meta-characters such as + ? | and ( ).

Suggested Read: What’s Difference Between Grep, Egrep and Fgrep in Linux?

egrep is very useful for searching source files, and other pieces of code, should the need arise. It can be invoked from regular grep by specifying the -E option.

# grep -E
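For instance, extended regular expressions make alternation straightforward, so you can match either of two strings in one pass; a small illustrative example (the pattern here is just an assumption for demonstration):

# ifconfig | grep -E "inet|ether"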

12. Search a Fixed Pattern String

fgrep searches a file or list of files for a fixed pattern string. It is the same as grep -F. A common way of using fgrep is to pass it a file of patterns:

# fgrep -f file_full_of_patterns.txt file_to_search.txt

This is just a starting point with grep, but as you are probably able to see, it is invaluable for a variety of purposes. Aside from the simple one line commands we have implemented, grep can be used to write powerful cron jobs, and robust shell scripts, for a start.

Suggested Read: 11 ‘Grep’ Commands on Character Classes and Bracket Expressions

Be creative, experiment with the options in the man page, and come up with grep expressions that serve your own purposes!

Source

18 Tar Command Examples in Linux

The Linux “tar” command stands for tape archive and is used by a large number of Linux/Unix system administrators to deal with tape drive backups. The tar command is used to pack a collection of files and directories into a highly compressed archive file, commonly called a tarball, or tar, gzip and bzip in Linux. tar is the most widely used command to create compressed archive files, which can be moved easily from one disk to another disk or from machine to machine.

Linux Tar Command Examples

In this article we are going to review and discuss various tar command examples, including how to create archive files using (tar, tar.gz and tar.bz2) compression, how to extract an archive file, extract a single file, view the contents of a file, verify a file, add files or directories to an archive file, estimate the size of a tar archive file, and so on.

The main purpose of this guide is to provide various tar command examples that might be helpful for you to understand and become an expert in tar archive manipulation.

1. Create tar Archive File

The example command below will create a tar archive file tecmint-14-09-12.tar for the directory /home/tecmint in the current working directory. See the example command in action.

# tar -cvf tecmint-14-09-12.tar /home/tecmint/

/home/tecmint/
/home/tecmint/cleanfiles.sh
/home/tecmint/openvpn-2.1.4.tar.gz
/home/tecmint/tecmint-14-09-12.tar
/home/tecmint/phpmyadmin-2.11.11.3-1.el5.rf.noarch.rpm
/home/tecmint/rpmforge-release-0.5.2-2.el5.rf.i386.rpm

Let’s discuss each option that we have used in the above command for creating a tar archive file.

  1. c – Creates a new .tar archive file.
  2. v – Verbosely show the .tar file progress.
  3. f – Specifies the archive file name.

2. Create tar.gz Archive File

To create a compressed gzip archive file, we use the z option. For example, the command below will create a compressed MyImages-14-09-12.tar.gz file for the directory /home/MyImages. (Note: tar.gz and tgz are similar.)

# tar cvzf MyImages-14-09-12.tar.gz /home/MyImages
OR
# tar cvzf MyImages-14-09-12.tgz /home/MyImages

/home/MyImages/
/home/MyImages/Sara-Khan-and-model-Priyanka-Shah.jpg
/home/MyImages/RobertKristenviolent101201.jpg
/home/MyImages/Justintimerlake101125.jpg
/home/MyImages/Mileyphoto101203.jpg
/home/MyImages/JenniferRobert101130.jpg
/home/MyImages/katrinabarbiedoll231110.jpg
/home/MyImages/the-japanese-wife-press-conference.jpg
/home/MyImages/ReesewitherspoonCIA101202.jpg
/home/MyImages/yanaguptabaresf231110.jpg

3. Create tar.bz2 Archive File

The bz2 feature compresses and creates an archive file smaller than gzip does. However, bz2 compression takes more time to compress and decompress files as compared to gzip. To create a highly compressed tar file, we use the j option. The following example command will create a Phpfiles-org.tar.bz2 file for the directory /home/php. (Note: tar.bz2, tbz and tb2 are similar.)

# tar cvfj Phpfiles-org.tar.bz2 /home/php
OR
# tar cvfj Phpfiles-org.tar.tbz /home/php
OR 
# tar cvfj Phpfiles-org.tar.tb2 /home/php

/home/php/
/home/php/iframe_ew.php
/home/php/videos_all.php
/home/php/rss.php
/home/php/index.php
/home/php/vendor.php
/home/php/video_title.php
/home/php/report.php
/home/php/object.html
/home/php/video.php

4. Untar tar Archive File

To untar or extract a tar file, just issue the following command using the x option (extract). For example, the command below will untar the file public_html-14-09-12.tar in the present working directory. If you want to untar into a different directory, then use the -C option (specified directory).

## Untar files in Current Directory ##
# tar -xvf public_html-14-09-12.tar

## Untar files in specified Directory ##
# tar -xvf public_html-14-09-12.tar -C /home/public_html/videos/

/home/public_html/videos/
/home/public_html/videos/views.php
/home/public_html/videos/index.php
/home/public_html/videos/logout.php
/home/public_html/videos/all_categories.php
/home/public_html/videos/feeds.xml

5. Uncompress tar.gz Archive File

To uncompress a tar.gz archive file, just run the following command. If you would like to untar into a different directory, just use the -C option and the path of the directory, as shown in the above example.

# tar -xvf thumbnails-14-09-12.tar.gz

/home/public_html/videos/thumbnails/
/home/public_html/videos/thumbnails/katdeepika231110.jpg
/home/public_html/videos/thumbnails/katrinabarbiedoll231110.jpg
/home/public_html/videos/thumbnails/onceuponatime101125.jpg
/home/public_html/videos/thumbnails/playbutton.png
/home/public_html/videos/thumbnails/ReesewitherspoonCIA101202.jpg
/home/public_html/videos/thumbnails/snagItNarration.jpg
/home/public_html/videos/thumbnails/Minissha-Lamba.jpg
/home/public_html/videos/thumbnails/Lindsaydance101201.jpg
/home/public_html/videos/thumbnails/Mileyphoto101203.jpg

6. Uncompress tar.bz2 Archive File

To uncompress a highly compressed tar.bz2 file, just use the following command. The example command below will untar all the .flv files from the archive file.

# tar -xvf videos-14-09-12.tar.bz2

/home/public_html/videos/flv/katrinabarbiedoll231110.flv
/home/public_html/videos/flv/BrookmuellerCIA101125.flv
/home/public_html/videos/flv/dollybackinbb4101125.flv
/home/public_html/videos/flv/JenniferRobert101130.flv
/home/public_html/videos/flv/JustinAwardmovie101125.flv
/home/public_html/videos/flv/Lakme-Fashion-Week.flv
/home/public_html/videos/flv/Mileyphoto101203.flv
/home/public_html/videos/flv/Minissha-Lamba.flv

7. List Content of tar Archive File

To list the contents of a tar archive file, just run the following command with the t option (list contents). The command below will list the contents of the uploadprogress.tar file.

# tar -tvf uploadprogress.tar

-rw-r--r-- chregu/staff   2276 2011-08-15 18:51:10 package2.xml
-rw-r--r-- chregu/staff   7877 2011-08-15 18:51:10 uploadprogress/examples/index.php
-rw-r--r-- chregu/staff   1685 2011-08-15 18:51:10 uploadprogress/examples/server.php
-rw-r--r-- chregu/staff   1697 2011-08-15 18:51:10 uploadprogress/examples/info.php
-rw-r--r-- chregu/staff    367 2011-08-15 18:51:10 uploadprogress/config.m4
-rw-r--r-- chregu/staff    303 2011-08-15 18:51:10 uploadprogress/config.w32
-rw-r--r-- chregu/staff   3563 2011-08-15 18:51:10 uploadprogress/php_uploadprogress.h
-rw-r--r-- chregu/staff  15433 2011-08-15 18:51:10 uploadprogress/uploadprogress.c
-rw-r--r-- chregu/staff   1433 2011-08-15 18:51:10 package.xml

8. List Content tar.gz Archive File

Use the following command to list the contents of a tar.gz file.

# tar -tvf staging.tecmint.com.tar.gz

-rw-r--r-- root/root         0 2012-08-30 04:03:57 staging.tecmint.com-access_log
-rw-r--r-- root/root       587 2012-08-29 18:35:12 staging.tecmint.com-access_log.1
-rw-r--r-- root/root       156 2012-01-21 07:17:56 staging.tecmint.com-access_log.2
-rw-r--r-- root/root       156 2011-12-21 11:30:56 staging.tecmint.com-access_log.3
-rw-r--r-- root/root       156 2011-11-20 17:28:24 staging.tecmint.com-access_log.4
-rw-r--r-- root/root         0 2012-08-30 04:03:57 staging.tecmint.com-error_log
-rw-r--r-- root/root      3981 2012-08-29 18:35:12 staging.tecmint.com-error_log.1
-rw-r--r-- root/root       211 2012-01-21 07:17:56 staging.tecmint.com-error_log.2
-rw-r--r-- root/root       211 2011-12-21 11:30:56 staging.tecmint.com-error_log.3
-rw-r--r-- root/root       211 2011-11-20 17:28:24 staging.tecmint.com-error_log.4

9. List Content tar.bz2 Archive File

To list the contents of a tar.bz2 file, issue the following command.

# tar -tvf Phpfiles-org.tar.bz2

drwxr-xr-x root/root         0 2012-09-15 03:06:08 /home/php/
-rw-r--r-- root/root      1751 2012-09-15 03:06:08 /home/php/iframe_ew.php
-rw-r--r-- root/root     11220 2012-09-15 03:06:08 /home/php/videos_all.php
-rw-r--r-- root/root      2152 2012-09-15 03:06:08 /home/php/rss.php
-rw-r--r-- root/root      3021 2012-09-15 03:06:08 /home/php/index.php
-rw-r--r-- root/root      2554 2012-09-15 03:06:08 /home/php/vendor.php
-rw-r--r-- root/root       406 2012-09-15 03:06:08 /home/php/video_title.php
-rw-r--r-- root/root      4116 2012-09-15 03:06:08 /home/php/report.php
-rw-r--r-- root/root      1273 2012-09-15 03:06:08 /home/php/object.html

10. Untar Single file from tar File

To extract a single file called cleanfiles.sh from cleanfiles.sh.tar use the following command.

# tar -xvf cleanfiles.sh.tar cleanfiles.sh
OR
# tar --extract --file=cleanfiles.sh.tar cleanfiles.sh

cleanfiles.sh

11. Untar Single file from tar.gz File

To extract a single file tecmintbackup.xml from tecmintbackup.tar.gz archive file, use the command as follows.

# tar -zxvf tecmintbackup.tar.gz tecmintbackup.xml
OR
# tar --extract --file=tecmintbackup.tar.gz tecmintbackup.xml

tecmintbackup.xml

12. Untar Single file from tar.bz2 File

To extract a single file called index.php from the file Phpfiles-org.tar.bz2 use the following option.

# tar -jxvf Phpfiles-org.tar.bz2 home/php/index.php
OR
# tar --extract --file=Phpfiles-org.tar.bz2 /home/php/index.php

/home/php/index.php

13. Untar Multiple files from tar, tar.gz and tar.bz2 File

To extract or untar multiple files from a tar, tar.gz or tar.bz2 archive file, list their names after the archive. For example, the command below will extract “file 1” and “file 2” from the archive files.

# tar -xvf tecmint-14-09-12.tar "file 1" "file 2" 

# tar -zxvf MyImages-14-09-12.tar.gz "file 1" "file 2" 

# tar -jxvf Phpfiles-org.tar.bz2 "file 1" "file 2"

14. Extract Group of Files using Wildcard

To extract a group of files, we use wildcard-based extraction. For example, to extract all files ending in .php from a tar, tar.gz or tar.bz2 archive file:

# tar -xvf Phpfiles-org.tar --wildcards '*.php'

# tar -zxvf Phpfiles-org.tar.gz --wildcards '*.php'

# tar -jxvf Phpfiles-org.tar.bz2 --wildcards '*.php'

/home/php/iframe_ew.php
/home/php/videos_all.php
/home/php/rss.php
/home/php/index.php
/home/php/vendor.php
/home/php/video_title.php
/home/php/report.php
/home/php/video.php

15. Add Files or Directories to tar Archive File

To add files or directories to an existing tar archive file, we use the r option (append). For example, we add the file xyz.txt and the directory php to the existing tecmint-14-09-12.tar archive file.

# tar -rvf tecmint-14-09-12.tar xyz.txt

# tar -rvf tecmint-14-09-12.tar php

drwxr-xr-x root/root         0 2012-09-15 02:24:21 home/tecmint/
-rw-r--r-- root/root  15740615 2012-09-15 02:23:42 home/tecmint/cleanfiles.sh
-rw-r--r-- root/root    863726 2012-09-15 02:23:41 home/tecmint/openvpn-2.1.4.tar.gz
-rw-r--r-- root/root  21063680 2012-09-15 02:24:21 home/tecmint/tecmint-14-09-12.tar
-rw-r--r-- root/root   4437600 2012-09-15 02:23:41 home/tecmint/phpmyadmin-2.11.11.3-1.el5.rf.noarch.rpm
-rw-r--r-- root/root     12680 2012-09-15 02:23:41 home/tecmint/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
-rw-r--r-- root/root 0 2012-08-18 19:11:04 xyz.txt
drwxr-xr-x root/root 0 2012-09-15 03:06:08 php/ 
-rw-r--r-- root/root 1751 2012-09-15 03:06:08 php/iframe_ew.php 
-rw-r--r-- root/root 11220 2012-09-15 03:06:08 php/videos_all.php 
-rw-r--r-- root/root 2152 2012-09-15 03:06:08 php/rss.php 
-rw-r--r-- root/root 3021 2012-09-15 03:06:08 php/index.php 
-rw-r--r-- root/root 2554 2012-09-15 03:06:08 php/vendor.php 
-rw-r--r-- root/root 406 2012-09-15 03:06:08 php/video_title.php

16. Add Files or Directories to tar.gz and tar.bz2 files

The tar command doesn’t have an option to add files or directories to an existing compressed tar.gz or tar.bz2 archive file. If we try, we will get the following error.

# tar -rvf MyImages-14-09-12.tar.gz xyz.txt

# tar -rvf Phpfiles-org.tar.bz2 xyz.txt

tar: This does not look like a tar archive
tar: Skipping to next header
xyz.txt
tar: Error exit delayed from previous errors
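A common workaround (this is not a tar feature, just a manual sequence) is to decompress the archive first, append to the plain .tar, and compress it again, for example:

# gunzip MyImages-14-09-12.tar.gz
# tar -rvf MyImages-14-09-12.tar xyz.txt
# gzip MyImages-14-09-12.tar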

17. How To Verify tar, tar.gz and tar.bz2 Archive File

To verify any tar or compressed archive file, we use the W option (verify). To do so, just use the following example command. (Note: You cannot do verification on a compressed (*.tar.gz, *.tar.bz2) archive file.)

# tar tvfW tecmint-14-09-12.tar

tar: This does not look like a tar archive
tar: Skipping to next header
tar: Archive contains obsolescent base-64 headers
tar: VERIFY FAILURE: 30740 invalid headers detected
Verify -rw-r--r-- root/root    863726 2012-09-15 02:23:41 /home/tecmint/openvpn-2.1.4.tar.gz
Verify -rw-r--r-- root/root  21063680 2012-09-15 02:24:21 /home/tecmint/tecmint-14-09-12.tar
tar: /home/tecmint/tecmint-14-09-12.tar: Warning: Cannot stat: No such file or directory
Verify -rw-r--r-- root/root   4437600 2012-09-15 02:23:41 home/tecmint/phpmyadmin-2.11.11.3-1.el5.rf.noarch.rpm
tar: /home/tecmint/phpmyadmin-2.11.11.3-1.el5.rf.noarch.rpm: Warning: Cannot stat: No such file or directory
Verify -rw-r--r-- root/root     12680 2012-09-15 02:23:41 home/tecmint/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
tar: /home/tecmint/rpmforge-release-0.5.2-2.el5.rf.i386.rpm: Warning: Cannot stat: No such file or directory
Verify -rw-r--r-- root/root         0 2012-08-18 19:11:04 xyz.txt
Verify drwxr-xr-x root/root         0 2012-09-15 03:06:08 php/

18. Check the Size of the tar, tar.gz and tar.bz2 Archive File

To check the size of any tar, tar.gz or tar.bz2 archive file, use the following command. For example, the command below will display the size of the archive file in bytes.

# tar -czf - tecmint-14-09-12.tar | wc -c
12820480

# tar -czf - MyImages-14-09-12.tar.gz | wc -c
112640

# tar -czf - Phpfiles-org.tar.bz2 | wc -c
20480
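Note that wc -c here counts bytes. For an archive that already exists on disk, a simpler way to check its size is plain ls -lh or du -h, for example:

# du -h tecmint-14-09-12.tar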

Tar Usage and Options

  1. c – create an archive file.
  2. x – extract an archive file.
  3. v – show the progress of the archive file.
  4. f – filename of the archive file.
  5. t – view the contents of the archive file.
  6. j – filter the archive through bzip2.
  7. z – filter the archive through gzip.
  8. r – append or update files or directories in an existing archive file.
  9. W – verify an archive file.
  10. wildcards – specify patterns to match in the tar command.

That’s it for now; hopefully the above tar command examples are enough for you to learn from. For more information, please refer to the tar man page (man tar).

If you are looking to split any large tar archive file into multiple parts or blocks, just go through this article:

Don’t Miss: Split Large ‘tar’ Archive into Multiple Files of Certain Size

If we’ve missed any examples, please do share them with us via the comment box, and please don’t forget to share this article with your friends. This is the best way to say thanks.

Source

Learn How to Use ‘dir’ Command with Different Options and Arguments in Linux

This article shows some examples of using the dir command to list the contents of a directory. The dir command is not commonly used in Linux, though it works more or less like the ls command, which most Linux users prefer. We’ll be discussing the dir command, looking at how to use its different options and arguments.

dir Command Usage in Linux

The general syntax of the dir command is as follows.

# dir [OPTION] [FILE]

dir Command Syntax

dir Command Usage with Examples

Simple output of the dir command

# dir /

dir Command Output

The output of the dir command for the /etc directory is as follows. As you can see from the output, not all files in the /etc directory are listed.

# dir /etc

List /etc Directory

To list one file per line, use the -1 option as follows.

# dir
# dir -1

List Files per Line

View all files in a directory including hidden files

To list all files in a directory including . (hidden) files, use the -a option. You can include the -l option to format output as a list.

# dir -a
# dir -al

List Hidden Files

Long List Hidden Files

View directory entries instead of content

When you need to list only directory entries instead of directory content, you can use the -d option. In the output below, the option -d lists entries for the /etc directory.

When you use -dl, it shows a long listing of the directory including owner, group owner, permissions.

# dir -d /etc
# dir -dl /etc

Long List /etc Directory

View index number of files

In case you want to view the index number of each file, use the -i option. From the output below, you can see that the first column shows numbers. These numbers are called inodes, which are sometimes referred to as index nodes or index numbers.

An inode in Linux systems is a data structure on a filesystem that stores information about a file, except for the filename and its actual data.

# dir -il

List Index Number of Files

List files and their allocated sizes in blocks

You can view file sizes using the -s option. If you need to sort the files according to size, then use the -S option.

In this case you also need to use the -h option to view the file sizes in a human-readable format.

# dir -shl

List Files with Sizes

In the output above, the first column shows the size of files in Kilobytes. The output below shows a sorted list of files according to their sizes by using the -S option.

# dir -ashlS /home/kone

Sort Files with Sizes

You can also sort by modification time, with the most recently modified file appearing first on the list. This can be done using the -t option.

# dir -ashlt /home/kone

Sort Files by Modification Time

List files without owner or group owner

To list files without their owners, you have to use the -g option, which works like the -l option except that it does not print the file owner. To list files without the group owner, use the -G option as follows.

# dir -ahgG /home/kone

List Files without Owner

As you can see from the output above, the names of the file owner and the group owner are not printed. You can also view the author of a file by using the --author flag as follows.

# dir -al --author /home/kone

View Author of Files

In the output above, the fifth column shows the name of the author of a file. The examples.desktop file is owned by user kone, belongs to group kili, and was authored by user kone.

List directories before other files

You may wish to view directories before all other files, and this can be done by using the --group-directories-first flag as follows.

# dir -l --group-directories-first

List Group Directory Files

When you observe the output above, you can see that all the directories are listed before the regular files. The letter d before the permissions indicates a directory, and a - (dash) indicates a regular file.

You can also view subdirectories recursively, meaning that you can list all other subdirectories in a directory using the -R option as follows.

# dir -R

List Directories Recursively

In the above output, the (.) sign means the current directory, and the home directory of user kone has three subdirectories, that is Backup, dir and Docs.

The Backup subdirectory has two further subdirectories, mariadb and mysql, which have no subdirectories of their own.

The dir subdirectory does not have any subdirectories, and the Docs subdirectory has two subdirectories, namely Books and Tuts, which do not have subdirectories either.

View user and group IDs instead of names

To view user and group IDs instead of names, you need to use the -n option. Let us observe the difference between the next two outputs.

Output without -n option.

# dir -l --author

List Files Without ID's

Output with -n option.

# dir -nl --author

List Files with ID's

View entries separated by commas

This can be achieved by using the -m option.

# dir -am

List Entries by Comma

To find help on using the dir command, use the --help flag, and to view version details of dir, use --version.

Conclusion

These are just examples of basic usage of the dir command; for many other options, see the manual entry for the dir command on your system.

Source

5 Interesting Command Line Tips and Tricks in Linux

Are you making the most out of Linux? There are lots of helpful features which appear as tips and tricks to many Linux users. Sometimes tips and tricks become a necessity: they help you get productive with the same set of commands, yet with enhanced functionality.

5 Command Line Tips and Tricks

Here we are starting a new series, where we will be writing some tips and tricks and will try to cover as many as we can in a short time.

1. To audit the commands we ran in the past, we use the history command. Here is a sample output of the history command.

# history

Linux history Command Usage

As is obvious from the output, the history command does not output the timestamp with the log of last executed commands. Any solution for this? Yeah! Run the command below.

# HISTTIMEFORMAT="%d/%m/%y %T "
# history

If you want to make this change permanent, add the line below to ~/.bashrc.

export HISTTIMEFORMAT="%d/%m/%y %T "

and then, from terminal run,

# source ~/.bashrc

Explanation of commands and switches.

  1. history – GNU History Library
  2. HISTTIMEFORMAT – Environment Variable
  3. %d – Day
  4. %m – Month
  5. %y – Year
  6. %T – Time Stamp
  7. source – in short, sends the contents of a file to the shell
  8. .bashrc – a shell script that BASH runs whenever it is started interactively.

history Command Logs

2. The next gem in the list is: how to check disk write speed? A one-liner dd command serves the purpose.

# dd if=/dev/zero of=/tmp/output.img bs=8k count=256k conv=fdatasync; rm -rf /tmp/output.img

dd Command Example

Explanation of commands and switches.

  1. dd – Convert and copy a file
  2. if=/dev/zero – Read from /dev/zero instead of stdin
  3. of=/tmp/output.img – Write to the given file instead of stdout
  4. bs – Read and write up to the given number of bytes (8k here) at a time
  5. count – Copy only N input blocks
  6. conv – Convert the file as per the comma-separated symbol list (fdatasync forces a physical write before dd finishes, so the timing reflects the disk)
  7. rm – Removes files and folders
  8. -rf – (-r) removes directories and their contents recursively and (-f) forces the removal without prompting.

3. How will you check the top six files that are eating up your space? A simple one-liner built from the du command, which is primarily used to estimate file space usage.

# du -hsx * | sort -rh | head -6

Check Disk Space Usage

Explanation of commands and switches.

  1. du – Estimate file space usage
  2. -hsx – (-h) Human-readable format, (-s) summarize output, (-x) stay on one filesystem, skipping directories on other filesystems
  3. sort – Sort lines of text files
  4. -rh – (-r) Reverse the result of the comparison, (-h) compare human-readable sizes
  5. head – Output the first n lines of a file.

4. The next tip involves viewing statistics of a file of any kind in the terminal. We can output the statistics related to a file with the help of the stat (display file or file system status) command.

# stat filename_ext  (e.g., stat abc.pdf)

Check File Statistics

5. The next, and last but not the least, one-liner is for newbies. If you are an experienced user you probably don’t need it, unless you want some fun out of it. Newbies tend to be Linux-command-line phobic, and the one-liner below will open a random man page. The benefit is that, as a newbie, you always get something to learn and never get bored.

# man $(ls /bin | shuf | head -1)

Generate Random Man Pages

Explanation of commands and switches.

  1. man – Linux man pages
  2. ls – Linux listing command
  3. /bin – system binary file location
  4. shuf – generate a random permutation
  5. head – output the first n lines of a file.

That’s all for now. If you know any such tips and tricks, you may share them with us and we will post them in your words on our Tecmint.com website.

If you want to share any tips and tricks that you cannot make into an article, you may share them at tecmint[dot]com[at]gmail[dot]com and we will include them in our articles. Don’t forget to provide us with your valuable feedback in the comments below. Stay connected. Like and share us and help us spread the word.

Don’t Miss:

  1. 10 Useful Commandline Tricks for Newbies – Part 2
  2. 5 Useful Commands to Manage Linux File Types and System Time – Part 3

Source
