How To Parse And Pretty Print JSON With Linux Commandline Tools

parse and pretty print json

JSON is a lightweight, language-independent data storage format that is easy to integrate with most programming languages and, when properly formatted, also easy for humans to understand. JSON stands for JavaScript Object Notation; though it started with JavaScript and is primarily used to exchange data between server and browser, it is now used in many fields, including embedded systems. Here we're going to parse and pretty print JSON with command line tools on Linux, which is extremely useful for handling or manipulating large JSON data in shell scripts.

What is pretty printing?

JSON data is structured to be somewhat human readable. However, in most cases JSON data is stored on a single line, often without even a line-ending character.

Obviously that’s not very convenient for reading and editing manually.

That's when pretty printing is useful. The name is quite self-explanatory: re-formatting the JSON text so it is more legible to humans. This is known as JSON pretty printing.

Parse And Pretty Print JSON With Linux Commandline Tools

JSON data can be parsed with command line text processors like awk, sed and grep; in fact, JSON.awk is an awk script that does exactly that. However, there are also dedicated tools for the same purpose:

  1. jq or jshon – JSON parsers for the shell; both are quite useful.

  2. Shell scripts like JSON.sh or jsonv.sh to parse JSON in the bash, zsh or dash shells.

  3. JSON.awk, an awk script for parsing JSON.

  4. Python modules like json.tool.

  5. underscore-cli, based on Node.js and JavaScript.

In this tutorial I'm focusing only on jq, which is a quite powerful JSON parser for shells with advanced filtering and scripting capability.

JSON pretty printing

JSON data is often stored on a single line, which makes it nearly illegible for humans; JSON pretty printing exists to make it somewhat readable.

Example: to fetch data from jsonip.com, which reports your external IP address in JSON format, use curl or wget like below.

$ wget -cq http://jsonip.com/ -O -

The actual data looks like this:

{"ip":"111.222.333.444","about":"/about","Pro!":"http://getjsonip.com"}

Now pretty print it with jq:

$ wget -cq http://jsonip.com/ -O - | jq '.'

After filtering the result with jq, it should look like this:

{
   "ip": "111.222.333.444",
   "about": "/about",
   "Pro!": "http://getjsonip.com"
}

The same thing can be done with Python's json.tool module. Here is an example:

$ cat anything.json | python -m json.tool

This Python based solution is fine for most users, but it's not useful where Python is not pre-installed or cannot be installed, such as on embedded systems.

However, the json.tool Python module has a distinct advantage: it's cross platform, so you can use it seamlessly on Windows, Linux or macOS.


How to parse JSON with jq

First, you need to install jq. It's packaged by most GNU/Linux distributions; install it with the respective package installer command.

On Arch Linux:

$ sudo pacman -S jq

On Debian, Ubuntu, Linux Mint:

$ sudo apt-get install jq

On Fedora:

$ sudo dnf install jq

On openSUSE:

$ sudo zypper install jq

For other OS or platforms, see the official installation instructions.

Basic filters and identifiers of jq

jq can read JSON data either from stdin or from a file; you'll use both depending on the situation.

The single symbol . is the most basic filter. These filters are also called object identifier-indexes. Using a single . along with jq basically pretty prints the input JSON file.

Single quotes – You don't always have to use single quotes, but if you're combining several filters in a single line, then you must use them.

Double quotes – You have to enclose any key containing special characters like @#$ within double quotes, as in this example: jq '.foo."@bar"'

Raw output – If you need only the final parsed data, not enclosed within double quotes, use the -r flag with the jq command, like this: jq -r .foo.bar
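
For example, with the jsonip.com response from earlier, compare the quoted and raw output of the same filter:

$ wget -cq http://jsonip.com/ -O - | jq '.about'
"/about"
$ wget -cq http://jsonip.com/ -O - | jq -r '.about'
/about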

Parsing specific data

To filter out a specific part of the JSON, you have to look at the pretty printed data's hierarchy.

An example of JSON data, from Wikipedia:

{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021"
  },
  "phoneNumber": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "fax",
      "number": "646 555-4567"
    }
  ],
  "gender": {
    "type": "male"
  }
}

I'm going to use this JSON data as the example in this tutorial, saved as sample.json.

Let's say I want to filter out the address from the sample.json file. The command looks like this:

$ jq .address sample.json

Sample output:

{
  "streetAddress": "21 2nd Street",
  "city": "New York",
  "state": "NY",
  "postalCode": "10021"
}

Again, let's say I want the postal code; then I have to add another object identifier-index, i.e. another filter.

$ cat sample.json | jq .address.postalCode

Also note that filters are case sensitive: you have to use the exact same string as the key to get meaningful output instead of null.
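
For instance, querying .Address instead of .address returns null, since the key in sample.json is all lowercase:

$ jq .Address sample.json
null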

Parsing elements from JSON array

Elements of a JSON array are enclosed within square brackets, which makes them quite versatile to use.

To parse elements from an array, use the [] identifier along with the other object identifier-indexes.

In this sample JSON data, the phone numbers are stored inside an array. To get all the contents of this array, you only need the brackets, as in this example.

$ jq .phoneNumber[] sample.json

Let's say you want just the first element of the array; then use the element index, starting from 0: for the first item use [0], and increment the index by one for each following item.

$ jq .phoneNumber[0] sample.json
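
You can also combine the [] identifier with an object identifier-index to pull a single field out of every element, for example:

$ jq '.phoneNumber[].number' sample.json
"212 555-1234"
"646 555-4567"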

Scripting examples

Let's say I want only the number for home, not the entire JSON array. This is where scripting within the jq command comes in handy.

$ cat sample.json | jq -r '.phoneNumber[] | select(.type == "home") | .number'

Here I'm first piping the results of one filter to another, then using the select function to pick a particular type of entry, and piping the result to yet another filter.
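
Filters can also reshape the selected data. As a quick sketch, jq's string interpolation syntax \(...) prints each type alongside its number:

$ jq -r '.phoneNumber[] | "\(.type): \(.number)"' sample.json
home: 212 555-1234
fax: 646 555-4567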

Explaining every type of jq filter and script is beyond the scope and purpose of this tutorial; for a deeper understanding, reading the jq manual is highly recommended.


How to Use Awk and Regular Expressions to Filter Text or String in Files

When we run certain commands in Unix/Linux to read or edit text from a string or file, we often want to filter the output down to a given section of interest. This is where using regular expressions comes in handy.

Read Also: 10 Useful Linux Chaining Operators with Practical Examples

What are Regular Expressions?

A regular expression can be defined as a string that represents a set of character sequences. One of the most important things about regular expressions is that they allow you to filter the output of a command or file, edit a section of a text or configuration file, and so on.

Features of Regular Expression

Regular expressions are made of:

  1. Ordinary characters such as space, underscore (_), A-Z, a-z, 0-9.
  2. Meta characters, which have a special meaning instead of standing for themselves. They include:
    1. (.) matches any single character except a newline.
    2. (*) matches zero or more occurrences of the character immediately preceding it.
    3. [ character(s) ] matches any one of the characters specified in character(s); one can also use a hyphen (-) to mean a range of characters, such as [a-f] or [1-5].
    4. ^ matches the beginning of a line in a file.
    5. $ matches the end of a line in a file.
    6. \ is an escape character.

In order to filter text, one has to use a text filtering tool such as awk. You can think of awk as a programming language of its own. But for the scope of this guide to using awk, we shall cover it as a simple command line filtering tool.

The general syntax of awk is:

# awk 'script' filename

Where 'script' is a set of commands that are understood by awk and executed against the file, filename.

awk works by reading a given line in the file, making a copy of the line, and then executing the script on it. This is repeated for all the lines in the file.

The 'script' is in the form '/pattern/ action' where pattern is a regular expression and the action is what awk will do when it finds the given pattern in a line.
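
As a quick illustration of the '/pattern/ action' form, the sketch below uses the standard /etc/passwd file (with ':' as the field separator) and prints the home directory, the sixth field, of every line beginning with root:

# awk -F':' '/^root/ {print $6}' /etc/passwd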

How to Use Awk Filtering Tool in Linux

In the following examples, we shall focus on the meta characters that we discussed above under the features of awk.

A simple example of using awk:

The example below prints all the lines in the file /etc/hosts since no pattern is given.

# awk '//{print}' /etc/hosts

Awk Prints all Lines in a File

Use Awk with Pattern:

In the example below, the pattern localhost has been given, so awk will match lines containing localhost in the /etc/hosts file.

# awk '/localhost/{print}' /etc/hosts 

Awk Print Given Matching Line in a File

Using Awk with (.) wild card in a Pattern

The (.) will match strings containing loc, localhost, localnet in the example below.

That is to say, the pattern matches l, then some single character, then c.

# awk '/l.c/{print}' /etc/hosts

Use Awk to Print Matching Strings in a File

Using Awk with (*) Character in a Pattern

It will match strings containing localhost, localnet, lines, capable, as in the example below:

# awk '/l*c/{print}' /etc/hosts

Use Awk to Match Strings in File

You will also realize that (*) tries to get you the longest match it can detect.

Let's look at a case that demonstrates this. Take the regular expression t.*t, which matches strings that start with the letter t and end with t (the dot stands for any single character and the star lets it repeat) in the line below:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint. 

You will get the following possibilities when you use the pattern /t.*t/:

this is t
this is tecmint
this is tecmint, where you get t
this is tecmint, where you get the best good t
this is tecmint, where you get the best good tutorials, how t
this is tecmint, where you get the best good tutorials, how tos, guides, t
this is tecmint, where you get the best good tutorials, how tos, guides, tecmint

And the greedy (*) in /t.*t/ makes awk choose the last (longest) option:

this is tecmint, where you get the best good tutorials, how to's, guides, tecmint

Using Awk with set [ character(s) ]

Take for example the set [al1], here awk will match all strings containing character a or l or 1 in a line in the file /etc/hosts.

# awk '/[al1]/{print}' /etc/hosts

Use Awk to Print Matching Character in File

The next example matches strings starting with either K or k followed by T:

# awk '/[Kk]T/{print}' /etc/hosts 

Use Awk to Print Matched String in File

Specifying Characters in a Range

Understand characters with awk:

  1. [0-9] means a single number
  2. [a-z] means match a single lower case letter
  3. [A-Z] means match a single upper case letter
  4. [a-zA-Z] means match a single letter
  5. [a-zA-Z0-9] means match a single letter or number

Let's look at an example below:

# awk '/[0-9]/{print}' /etc/hosts 

Use Awk To Print Matching Numbers in File

Every line in the file /etc/hosts contains at least one number [0-9] in the above example.

Use Awk with (^) Meta Character

It matches all the lines that start with the pattern provided as in the example below:

# awk '/^fe/{print}' /etc/hosts
# awk '/^ff/{print}' /etc/hosts

Use Awk to Print All Matching Lines with Pattern

Use Awk with ($) Meta Character

It matches all the lines that end with the pattern provided:

# awk '/ab$/{print}' /etc/hosts
# awk '/ost$/{print}' /etc/hosts
# awk '/rs$/{print}' /etc/hosts

Use Awk to Print Given Pattern String

Use Awk with (\) Escape Character

It allows you to take the character following it as a literal that is to say consider it just as it is.

In the example below, the first command prints all lines in the file; the second command prints nothing, because I want to match a line containing $25.00 but no escape character is used.

The third command is correct, since an escape character has been used to read $ as it is.

# awk '//{print}' deals.txt
# awk '/$25.00/{print}' deals.txt
# awk '/\$25.00/{print}' deals.txt

Use Awk with Escape Character

Summary

That is not all there is to the awk command line filtering tool; the examples above are the basic operations of awk. In the next parts we shall advance to the more complex features of awk. Thanks for reading through, and for any additions or clarifications, post a comment in the comments section.


NLTK Tutorial in Python – Linux Hint

The era of data is already here. The rate at which data is generated today is higher than ever, and it is always growing. Most of the time, the people who deal with data every day work mostly with unstructured textual data. Some of this data has associated elements like images, videos, audio etc. Some of the sources of this data are websites, daily blogs, news websites and many more. Analysing all of this data at a faster rate is necessary, and many times crucial too.

For example, a business might run a text analysis engine which processes tweets mentioning the company name and location, and analyses the emotion related to each tweet. Correct actions can be taken faster if that business gets to know about growing negative tweets in a particular location, saving itself from a blunder or anything else. Another common example is YouTube. YouTube admins and moderators get to know the effect of a video from the type of comments made on it or the video chat messages. This helps them find inappropriate content on the website much faster, because they have eradicated the manual work and employed automated smart text analysis bots.

In this lesson, we will study some of the concepts related to text analysis with the help of NLTK library in Python. Some of these concepts will involve:

  • Tokenization, how to break a piece of text into words, sentences
  • Avoiding stop words based on English language
  • Performing stemming and lemmatization on a piece of text
  • Identifying the tokens to be analysed

NLP is the main area of focus in this lesson, as it is applicable to enormous real-life scenarios where it can solve big and crucial problems. If you think this sounds complex, well, it does, but the concepts are equally easy to understand if you try the examples side by side. Let's jump into installing NLTK on your machine to get started with it.

Installing NLTK

Just a note before starting: you can use a virtual environment for this lesson, which can be made with the following command:

python -m virtualenv nltk
source nltk/bin/activate

Once the virtual environment is active, you can install the NLTK library within the virtual env so that the examples we create next can be executed:

pip install nltk

We will make use of Anaconda and Jupyter in this lesson. If you want to install it on your machine, look at the lesson which describes “How to Install Anaconda Python on Ubuntu 18.04 LTS” and share your feedback if you face any issues. To install NLTK with Anaconda, use the following command in the terminal from Anaconda:

conda install -c anaconda nltk

We see something like this when we execute the above command:

Once all of the packages needed are installed and done, we can get started with using the NLTK library with the following import statement:

import nltk

Let’s get started with basic NLTK examples now that we have the prerequisites packages installed.

Tokenization

We will start with Tokenization which is the first step in performing text analysis. A token can be any smaller part of a piece of text which can be analysed. There are two types of Tokenization which can be performed with NLTK:

  • Sentence Tokenization
  • Word Tokenization

You can guess what each type of tokenization does, so let's dive into the code examples.

Sentence Tokenization

As the name reflects, the sentence tokenizer breaks a piece of text into sentences. Let's try a simple code snippet using text picked from our Apache Kafka tutorial. We will perform the necessary imports:

import nltk
from nltk.tokenize import sent_tokenize

Please note that you might face an error due to a missing dependency for nltk called punkt. Add the following line right after the imports in the program to avoid any warnings:

nltk.download('punkt')

For me, it gave the following output:

Next, we make use of the sentence tokenizer we imported:

text = """A Topic in Kafka is something where a message is sent. The consumer
applications which are interested in that topic pulls the message inside that
topic and can do anything with that data. Up to a specific time, any number of
consumer applications can pull this message any number of times."""

sentences = sent_tokenize(text)
print(sentences)

We see something like this when we execute the above script:

As expected, the text was correctly organised into sentences.

Word Tokenization

As the name reflects, the word tokenizer breaks a piece of text into words. Let's try a simple code snippet with the same text as the previous example:

from nltk.tokenize import word_tokenize

words = word_tokenize(text)
print(words)

We see something like this when we execute the above script:

As expected, the text was correctly organised into words.

Frequency Distribution

Now that we have broken up the text, we can also calculate the frequency of each word in it. It is very simple to do with NLTK; here is the code snippet we use:

from nltk.probability import FreqDist

distribution = FreqDist(words)
print(distribution)

We see something like this when we execute the above script:

Next, we can find the most common words in the text with a simple method which accepts the number of words to show:

# Most common words
distribution.most_common(2)

We see something like this when we execute the above script:

Finally, we can make a frequency distribution plot to visualize the words and their counts in the given text and clearly understand the distribution of words.
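
One way to draw that plot is NLTK's built-in FreqDist.plot(), which assumes matplotlib is installed:

# Line chart of the 30 most frequent tokens (requires matplotlib)
distribution.plot(30, cumulative=False)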

Stopwords

Just like when we talk to another person over a call, there tends to be some noise on the line which is unwanted information. In the same manner, text from the real world also contains noise, which is termed Stopwords. Stopwords can vary from language to language, but they can be easily identified. Some of the stopwords in the English language are: is, are, a, the, an etc.

We can look at the words which are considered stopwords by NLTK for the English language with the following code snippet:

from nltk.corpus import stopwords
nltk.download('stopwords')

language = "english"
stop_words = set(stopwords.words(language))
print(stop_words)

As the set of stop words can be big, it is stored as a separate dataset which can be downloaded with NLTK, as shown above. We see something like this when we execute the above script:

These stop words should be removed from the text if you want to perform a precise text analysis for the piece of text provided. Let’s remove the stop words from our textual tokens:

filtered_words = []

for word in words:
    if word not in stop_words:
        filtered_words.append(word)

print(filtered_words)

We see something like this when we execute the above script:

Word Stemming

A stem of a word is the base of that word. For example, riding and rides both stem to ride.

We will perform stemming upon the filtered words from which we removed stop words in the last section. Let’s write a simple code snippet where we use NLTK’s stemmer to perform the operation:

from nltk.stem import PorterStemmer
ps = PorterStemmer()

stemmed_words = []
for word in filtered_words:
    stemmed_words.append(ps.stem(word))

print("Stemmed Sentence:", stemmed_words)

We see something like this when we execute the above script:

POS Tagging

The next step in textual analysis, after stemming, is to identify and group each word in terms of its role, i.e. whether each word is a noun, a verb or something else. This is termed Part of Speech (POS) tagging. Let's perform POS tagging now:

tokens = nltk.word_tokenize(sentences[0])
print(tokens)

We see something like this when we execute the above script:

Now, we can perform the tagging, for which we will have to download another dataset to identify the correct tags:

nltk.download('averaged_perceptron_tagger')
nltk.pos_tag(tokens)


Here is the output of the tagging:

Now that we have finally identified the tagged words, this is the dataset on which we can perform sentiment analysis to identify the emotions behind a sentence.

Conclusion

In this lesson, we looked at an excellent natural language package, NLTK, which allows us to work with unstructured textual data, identify stop words, and prepare a sharp dataset for deeper text analysis with libraries like sklearn.

Find all of the source code used in this lesson on Github.


10 Useful Tips for Writing Effective Bash Scripts in Linux

Shell scripting is the easiest form of programming you can learn/do in Linux. More so, it is a required skill for system administration: automating tasks, developing simple new utilities/tools, to mention just a few uses.

In this article, we will share 10 useful and practical tips for writing effective and reliable bash scripts and they include:

1. Always Use Comments in Scripts

This is a recommended practice that applies not only to shell scripting but to all other kinds of programming. Writing comments in a script helps you, or someone else going through your script, understand what the different parts of the script do.

For starters, comments are defined using the # sign.

#TecMint is the best site for all kind of Linux articles

2. Make a Script Exit When a Command Fails

Sometimes bash may continue to execute a script even when a certain command fails, affecting the rest of the script (and possibly resulting in logical errors). Use the line below to exit a script when a command fails:

#let script exit if a command fails
set -o errexit 
OR
set -e
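
As a small sketch of the effect (the path below is deliberately bogus): with errexit enabled the script stops at the failed cp, so the final echo never runs.

#!/bin/bash
set -o errexit

cp /nonexistent/file /tmp    #this command fails, so the script exits here
echo "This line is never reached"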

3. Make a Script Exit When Bash Uses an Undeclared Variable

Bash may also try to use an undeclared variable, which could cause a logical error. Therefore, use the following line to instruct bash to exit a script when it attempts to use an undeclared variable:

#let script exit if an unset variable is used
set -o nounset
OR
set -u

4. Use Double Quotes to Reference Variables

Using double quotes while referencing (using a value of a variable) helps to prevent word splitting (regarding whitespace) and unnecessary globbing (recognizing and expanding wildcards).

Check out the example below:

#!/bin/bash
#let script exit if a command fails
set -o errexit 

#let script exit if an unset variable is used
set -o nounset

echo "Names without double quotes" 
echo
names="Tecmint FOSSMint Linusay"
for name in $names; do
        echo "$name"
done
echo

echo "Names with double quotes" 
echo
for name in "$names"; do
        echo "$name"
done

exit 0

Save the file and exit, then run it as follows:

$ ./names.sh

Use Double Quotes in Scripts

5. Use functions in Scripts

Except for very small scripts (with a few lines of code), always remember to use functions to modularize your code and make scripts more readable and reusable.

The syntax for writing functions is as follows:

function check_root(){
	command1; 
	command2;
}

OR
check_root(){
	command1; 
	command2;
}

For single-line functions, use a terminating semicolon after each command, like this:

check_root(){ command1; command2; }
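
The check_root name above is just a placeholder; a minimal working body (one common convention, relying on bash's EUID variable) might look like this:

check_root(){
        if [ "$EUID" -ne 0 ]; then
                echo "This script must be run as root" >&2
                exit 1
        fi
}

check_root    #call the function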

6. Use = instead of == for String Comparisons

Note that within test brackets == is a synonym for =, so only use a single = for string comparisons, for instance:

value1="tecmint.com"
value2="fossmint.com"
if [ "$value1" = "$value2" ]; then
        echo "they match"
fi

7. Use $(command) instead of legacy `command` for Substitution

Command substitution replaces a command with its output. Use $(command) instead of backquotes `command` for command substitution.

This is recommended even by the shellcheck tool, which shows warnings and suggestions for shell scripts. For example:

user=`echo "$UID"`    #legacy backquotes
user=$(echo "$UID")   #recommended form

8. Use Read-only to Declare Static Variables

A static variable doesn’t change; its value can not be altered once it’s defined in a script:

readonly passwd_file="/etc/passwd"
readonly group_file="/etc/group"

9. Use Uppercase Names for ENVIRONMENT Variables and Lowercase for Custom Variables

All bash environment variables are named with uppercase letters, therefore use lowercase letters to name your custom variables to avoid variable name conflicts:

#define custom variables using lowercase and use uppercase for env variables
nikto_file="$HOME/Downloads/nikto-master/program/nikto.pl"
perl "$nikto_file" -h "$1"

10. Always Perform Debugging for Long Scripts

If you are writing bash scripts with thousands of lines of code, finding errors may become a nightmare. To easily fix things before executing a script, perform some debugging. Master this tip by reading through the guides provided below:

  1. How To Enable Shell Script Debugging Mode in Linux
  2. How to Perform Syntax Checking Debugging Mode in Shell Scripts
  3. How to Trace Execution of Commands in Shell Script with Shell Tracing

That’s all! Do you have any other best bash scripting practices to share? If yes, then use the comment form below to do that.


10 Practical Examples Using Wildcards to Match Filenames in Linux

Wildcards (also referred to as meta characters) are symbols or special characters that represent other characters. You can use them with any command, such as the ls command or rm command, to list or remove files matching given criteria, respectively.

Read Also: 10 Useful Practical Examples on Chaining Operators in Linux

These wildcards are interpreted by the shell and the results are returned to the command you run. There are three main wildcards in Linux:

  • An asterisk (*) – matches zero or more occurrences of any character.
  • Question mark (?) – represents or matches a single occurrence of any character.
  • Bracketed characters ([ ]) – matches any one of the characters enclosed in the square brackets. The set can mix different types of (alphanumeric) characters: numbers, letters, other special characters etc.

You need to carefully choose which wildcard to use to match correct filenames: it is also possible to combine all of them in one operation as explained in the examples below.

How to Match Filenames Using Wildcards in Linux

For the purpose of this article, we will use the following files to demonstrate each example.

createbackup.sh  list.sh  lspace.sh        speaker.sh
listopen.sh      lost.sh  rename-files.sh  topprocs.sh
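
To follow along, you can recreate an equivalent set of empty files with touch:

$ touch createbackup.sh listopen.sh list.sh lost.sh lspace.sh rename-files.sh speaker.sh topprocs.sh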

1. This command matches all files with names starting with l (which is the prefix) followed by zero or more occurrences of any character.

$ ls -l l*	

List Files with Character

2. This example shows another use of * to move all filenames prefixed with users-0 and ending with zero or more occurrences of any character.

$ mkdir -p users-info
$ ls users-0*
$ mv -v users-0* users-info/	# Option -v flag enables verbose output

List and Copy All Files

3. The following command matches all files with names beginning with l followed by any single character and ending with st.sh (which is the suffix).

$ ls l?st.sh	

Match File with Character Name

4. The command below matches all files with names starting with l followed by any of the characters in the square bracket but ending with st.sh.

$ ls l[abdcio]st.sh 

Matching Files with Names

How to Combine Wildcards to Match Filenames in Linux

You can combine wildcards to build a complex filename matching criteria as described in the following examples.

5. This command will match all filenames prefixed with any two characters, followed by st, and ending with zero or more occurrences of any character.

$ ls
$ ls ??st*

Match File Names with Prefix

6. This example matches filenames starting with any one of the characters [clst] and ending with zero or more occurrences of any character.

$ ls
$ ls [clst]*

Match Files with Characters

7. In this example, only filenames starting with one of the characters [clst], followed by one of [io], then any single character, then a t, and lastly zero or more occurrences of any character will be listed.

$ ls
$ ls [clst][io]?t*

List Files with Multiple Characters

8. Here, filenames containing the letters tar (preceded and followed by zero or more occurrences of any character) will be removed.

$ ls
$ rm *tar*
$ ls

Remove Files with Character Letters

How to Match Characters Set in Linux

9. Now let's look at how to specify a set of characters. Consider the filenames below, containing system user information.

$ ls

users-111.list  users-1AA.list  users-22A.list  users-2aB.txt   users-2ba.txt
users-111.txt   users-1AA.txt   users-22A.txt   users-2AB.txt   users-2bA.txt
users-11A.txt   users-1AB.list  users-2aA.txt   users-2ba.list
users-12A.txt   users-1AB.txt   users-2AB.list  users-2bA.list

This command will match all files whose names start with users-, followed by a number, then a lower case letter or number, then a number, and end with zero or more occurrences of any character.

$ ls users-[0-9][a-z0-9][0-9]*

The next command matches filenames beginning with users-, followed by a number, then a lower or upper case letter or number, then a number, and ending with zero or more occurrences of any character.

$ ls users-[0-9][a-zA-Z0-9][0-9]*

The command that follows will match all filenames beginning with users-, followed by a number, then a lower or upper case letter or number, then a lower or upper case letter, and ending with zero or more occurrences of any character.

$ ls users-[0-9][a-zA-Z0-9][a-zA-Z]*

Match Characters in Filenames

How to Negate a Set of Characters in Linux

10. You can as well negate a set of characters using the ! symbol. The following command lists all filenames starting with users-, followed by a number, then any valid file naming character apart from a number, then a lower or upper case letter, and ending with zero or more occurrences of any character.

$ ls users-[0-9][!0-9][a-zA-Z]*

That’s all for now! If you have tried out the above examples, you should now have a good understanding of how wildcards work to match filenames in Linux.

You might also like to read the following articles that show examples of using wildcards in Linux:

  1. How to Extract Tar Files to Specific or Different Directory in Linux
  2. 3 Ways to Delete All Files in a Directory Except One or Few Files with Extensions
  3. 10 Useful Tips for Writing Effective Bash Scripts in Linux
  4. How to Use Awk and Regular Expressions to Filter Text or String in Files

If you have anything to share or questions to ask, use the comment form below.


Real Time Interactive IP LAN Monitoring with IPTraf Tool

There are a number of monitoring tools available. I came across the IPTraf monitoring tool, which I find very useful: it's a simple tool to monitor inbound and outbound network traffic passing through an interface.

Install IPTraf LAN Monitoring

IPTraf is an ncurses-based (text-based) IP LAN monitoring tool with which we can monitor various connections like TCP, UDP, ICMP and non-IP counts, as well as Ethernet load information etc.

This article guides you on how to install the IPTraf monitoring tool using your distribution's package manager.

Installing IPTraf

IPTraf is packaged by most Linux distributions and can be installed on RHEL, CentOS and Fedora servers using the yum command from the terminal.

# yum install iptraf

Under Ubuntu, iptraf can be installed using the Ubuntu Software Center or the apt-get method. For example, use the apt-get command to install it.

$ sudo apt-get install iptraf

IPTraf Usage

Once IPTraf is installed, run the following command from the terminal to launch an ASCII-based menu interface that will allow you to view current IP traffic monitoring, general interface statistics, detailed interface statistics, statistical breakdowns and filters, and also provides some configuration options you can set as per your need.

[root@tecmint ~]# iptraf

IPTraf Startup Screen

The iptraf interactive screen displays a menu system with different options to choose from. Here are some screenshots showing real time IP traffic counts, interface statistics etc.

IPTraf System Menu

IP Traffic Monitor

IPTraf General interface statistics

IPTraf Detailed interface statistics

IPTraf Statistical breakdowns

IPTraf LAN station monitor

IPTraf Configure

IPTraf Options

Using "iptraf -i" will immediately start the IP traffic monitor on a particular interface. For example, the following command will start monitoring on interface eth0, the primary interface card attached to your system. You can also monitor the traffic on all your network interfaces using "iptraf -i all".

# iptraf -i eth0

IPTraf Eth0 Monitoring

Similarly, you can also monitor TCP/UDP traffic on a specific interface, using the following command.

# iptraf -s eth0

IPTraf TCP/UDP Monitoring
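
iptraf can also run unattended. Assuming your build supports these flags (check the man page), -t stops the facility after the given number of minutes, -B pushes iptraf into the background, and -L writes the figures to a log file:

# iptraf -i eth0 -t 5 -B -L /var/log/iptraf-eth0.log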

If you want to know more options and how to use them, check the iptraf man page or run 'iptraf -help' for more parameters. For more information, visit the official project page.


How to Find and Sort Files Based on Modification Date and Time in Linux

Usually, we are in the habit of saving a lot of information in the form of files on our system: some hidden, some kept in separate folders created for ease of understanding, and some left as they are. But this whole stuff fills our directories, usually the desktop, making them look like a mess. The real problem arises when we need to search this huge collection for a particular file modified at a particular date and time.

Find and Sort Files by Date and Time in Linux

People comfortable with GUIs can find files using a file manager, which lists files in long listing format, making it easy to figure out what we want. But those in the habit of black screens, or anyone working on servers devoid of a GUI, would want a simple command or set of commands that could ease their search.

The real beauty of Linux shows here, as Linux has a collection of commands which, used separately or together, can help you search for a file, or sort a collection of files by name, date of modification, time of creation, or any other filter you could think of applying to get your result.

Here, we will unveil the real strength of Linux by examining a set of commands which can help sorting a file or even a list of files by Date and Time.

Linux Utilities to Sort Files in Linux

Some basic Linux command line utilities are quite sufficient for sorting a directory based on date and time:

ls command

ls – Listing contents of directory, this utility can list the files and directories and can even list all the status information about them including: date and time of modification or access, permissions, size, owner, group etc.

We’ve already covered many articles on Linux ls command and sort command, you can find them below:

  1. Learn ls Command with 15 Basic Examples
  2. Learn 7 Advance ls Commands with Examples
  3. 15 Useful Interview Questions on ls Command in Linux

sort command

sort – This command can be used to sort the output of any search just by any field or any particular column of the field.

We’ve already covered two articles on Linux sort command, you can find them below:

  1. 14 Linux ‘sort’ Command Examples – Part 1
  2. 7 Useful Linux ‘sort’ Command Examples – Part 2

These commands are in themselves very powerful commands to master if you work on black screens and have to deal with lots of files, just to get the one you want.

Some Ways to Sort Files using Date and Time

Below are the list of commands to sort based on Date and Time.

1. List Files Based on Modification Time

The below command lists files in long listing format, and sorts files based on modification time, newest first. To sort in reverse order, use '-r' switch with this command.

# ls -lt

total 673768
-rwxr----- 1 tecmint tecmint  3312130 Jan 19 15:24 When You Are Gone.MP3
-rwxr----- 1 tecmint tecmint  4177212 Jan 19 15:24 When I Dream At Night - Marc Anthony-1.mp3
-rwxr----- 1 tecmint tecmint  4177212 Jan 19 15:24 When I Dream At Night - Marc Anthony.mp3
-rwxr----- 1 tecmint tecmint  6629090 Jan 19 15:24 Westlife_Tonight.MP3
-rwxr----- 1 tecmint tecmint  3448832 Jan 19 15:24 We Are The World by USA For Africa (Michael Jackson).mp3
-rwxr----- 1 tecmint tecmint  8580934 Jan 19 15:24 This Love.mp3
-rwxr----- 1 tecmint tecmint  2194832 Jan 19 15:24 The Cross Of Changes.mp3
-rwxr----- 1 tecmint tecmint  5087527 Jan 19 15:24 T.N.T. For The Brain 5.18.mp3
-rwxr----- 1 tecmint tecmint  3437100 Jan 19 15:24 Summer Of '69.MP3
-rwxr----- 1 tecmint tecmint  4360278 Jan 19 15:24 Smell Of Desire.4.32.mp3
-rwxr----- 1 tecmint tecmint  4582632 Jan 19 15:24 Silence Must Be Heard 4.46.mp3
-rwxr----- 1 tecmint tecmint  4147119 Jan 19 15:24 Shadows In Silence 4.19.mp3
-rwxr----- 1 tecmint tecmint  4189654 Jan 19 15:24 Sarah Brightman  & Enigma - Eden (remix).mp3
-rwxr----- 1 tecmint tecmint  4124421 Jan 19 15:24 Sade - Smooth Operator.mp3
-rwxr----- 1 tecmint tecmint  4771840 Jan 19 15:24 Sade - And I Miss You.mp3
-rwxr----- 1 tecmint tecmint  3749477 Jan 19 15:24 Run To You.MP3
-rwxr----- 1 tecmint tecmint  7573679 Jan 19 15:24 Roger Sanchez_Another Chance_Full_Mix.mp3
-rwxr----- 1 tecmint tecmint  3018211 Jan 19 15:24 Principal Of Lust.3.08.mp3
-rwxr----- 1 tecmint tecmint  5688390 Jan 19 15:24 Please Forgive Me.MP3
-rwxr----- 1 tecmint tecmint  3381827 Jan 19 15:24 Obvious.mp3
-rwxr----- 1 tecmint tecmint  5499073 Jan 19 15:24 Namstey-London-Viraaniya.mp3
-rwxr----- 1 tecmint tecmint  3129210 Jan 19 15:24 MOS-Enya - Only Time (Pop Radio mix).m

2. List Files Based on Last Access Time

Listing of files in directory based on last access time, i.e. based on time the file was last accessed, not modified.

# ls -ltu

total 3084272
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:24 Music
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 Linux-ISO
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 Music-Player
drwx------  3 tecmint tecmint       4096 Jan 19 15:22 tor-browser_en-US
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 bin
drwxr-xr-x 11 tecmint tecmint       4096 Jan 19 15:22 Android Games
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 Songs
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 renamefiles
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 katoolin-master
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 Tricks
drwxr-xr-x  3 tecmint tecmint       4096 Jan 19 15:22 Linux-Tricks
drwxr-xr-x  6 tecmint tecmint       4096 Jan 19 15:22 tuptime
drwxr-xr-x  4 tecmint tecmint       4096 Jan 19 15:22 xdm
drwxr-xr-x  2 tecmint tecmint      20480 Jan 19 15:22 ffmpeg usage
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:22 xdm-helper

3. List Files Based on Status Change Time

Listing of files in a directory based on the last change to the file's status information, the 'ctime'. This command lists first the file whose status information, such as owner, group, permissions or size, was most recently changed.

# ls -ltc

total 3084272
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 15:24 Music
drwxr-xr-x  2 tecmint tecmint       4096 Jan 19 13:05 img
-rw-------  1 tecmint tecmint     262191 Jan 19 12:15 tecmint.jpeg
drwxr-xr-x  5 tecmint tecmint       4096 Jan 19 10:57 Desktop
drwxr-xr-x  7 tecmint tecmint      12288 Jan 18 16:00 Downloads
drwxr-xr-x 13 tecmint tecmint       4096 Jan 18 15:36 VirtualBox VMs
-rwxr-xr-x  1 tecmint tecmint        691 Jan 13 14:57 special.sh
-rw-r--r--  1 tecmint tecmint     654325 Jan  4 16:55 powertop-2.7.tar.gz.save
-rw-r--r--  1 tecmint tecmint     654329 Jan  4 11:17 filename.tar.gz
drwxr-xr-x  3 tecmint tecmint       4096 Jan  4 11:04 powertop-2.7
-rw-r--r--  1 tecmint tecmint     447795 Dec 31 14:22 Happy-New-Year-2016.jpg
-rw-r--r--  1 tecmint tecmint         12 Dec 18 18:46 ravi
-rw-r--r--  1 tecmint tecmint       1823 Dec 16 12:45 setuid.txt
...

If the '-a' switch is used with the above commands, they also list and sort the hidden files in the current directory, and the '-r' switch lists the output in reverse order.
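
For example, combining the switches lists everything, oldest first, hidden files included:

# ls -ltra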

For more in-depth sorting, such as sorting the output of the find command, the 'sort' command proves more helpful, as the output may contain not only the file name but any fields the user desires.

The commands below show the usage of sort with the find command to sort the list of files based on date and time.

To learn more about find command, follow this link: 35 Practical Examples of ‘find’ Command in Linux

4. Sorting Files based on Month

Here, we use the find command to find all files in the root ('/') directory and then print, for each, the month in which the file was accessed followed by the filename. Of that complete result, we list the top 11 lines.

# find / -type f -printf "\n%Ab %p" | head -n 11

Dec /usr/lib/nvidia/pre-install
Dec /usr/lib/libcpufreq.so.0.0.0
Apr /usr/lib/libchromeXvMCPro.so.1.0.0
Apr /usr/lib/libt1.so.5.1.2
Apr /usr/lib/libchromeXvMC.so.1.0.0
Apr /usr/lib/libcdr-0.0.so.0.0.15
Dec /usr/lib/msttcorefonts/update-ms-fonts
Nov /usr/lib/ldscripts/elf32_x86_64.xr
Nov /usr/lib/ldscripts/elf_i386.xbn
Nov /usr/lib/ldscripts/i386linux.xn

The command below sorts the output on the first field, specified by '-k1', treating it as a month name, as specified by the 'M' appended to it.

# find / -type f -printf "\n%Ab %p" | head -n 11 | sort -k1M

Apr /usr/lib/libcdr-0.0.so.0.0.15
Apr /usr/lib/libchromeXvMCPro.so.1.0.0
Apr /usr/lib/libchromeXvMC.so.1.0.0
Apr /usr/lib/libt1.so.5.1.2
Nov /usr/lib/ldscripts/elf32_x86_64.xr
Nov /usr/lib/ldscripts/elf_i386.xbn
Nov /usr/lib/ldscripts/i386linux.xn
Dec /usr/lib/libcpufreq.so.0.0.0
Dec /usr/lib/msttcorefonts/update-ms-fonts
Dec /usr/lib/nvidia/pre-install

5. Sort Files Based on Date

Here again we use the find command to find all the files in the root directory, but now we print, for each, the last date the file was accessed, the last time it was accessed, and then the filename. Of that, we take the top 11 lines.

# find / -type f -printf "\n%AD %AT %p" | head -n 11

12/08/15 11:30:38.0000000000 /usr/lib/nvidia/pre-install
12/07/15 10:34:45.2694776230 /usr/lib/libcpufreq.so.0.0.0
04/11/15 06:08:34.9819910430 /usr/lib/libchromeXvMCPro.so.1.0.0
04/11/15 06:08:34.9939910430 /usr/lib/libt1.so.5.1.2
04/11/15 06:08:35.0099910420 /usr/lib/libchromeXvMC.so.1.0.0
04/11/15 06:08:35.0099910420 /usr/lib/libcdr-0.0.so.0.0.15
12/18/15 11:19:25.2656728990 /usr/lib/msttcorefonts/update-ms-fonts
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/elf32_x86_64.xr
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/elf_i386.xbn
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/i386linux.xn

The sort command below first sorts on the last digit of the year, then on the first digit of the month in reverse order, and finally on the first field as a whole. Here, '1.8' means the 8th character of the first field, the 'n' after it means numerical sort, and 'r' indicates reverse order sorting.

# find / -type f -printf "\n%AD %AT %p" | head -n 11 | sort -k1.8n -k1.1nr -k1

12/07/15 10:34:45.2694776230 /usr/lib/libcpufreq.so.0.0.0
12/08/15 11:30:38.0000000000 /usr/lib/nvidia/pre-install
12/18/15 11:19:25.2656728990 /usr/lib/msttcorefonts/update-ms-fonts
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/elf32_x86_64.xr
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/elf_i386.xbn
11/12/15 12:56:34.0000000000 /usr/lib/ldscripts/i386linux.xn
04/11/15 06:08:34.9819910430 /usr/lib/libchromeXvMCPro.so.1.0.0
04/11/15 06:08:34.9939910430 /usr/lib/libt1.so.5.1.2
04/11/15 06:08:35.0099910420 /usr/lib/libcdr-0.0.so.0.0.15
04/11/15 06:08:35.0099910420 /usr/lib/libchromeXvMC.so.1.0.0

6. Sorting Files Based on Time

Here again we use the find command to list the top 11 files in the root directory, printing for each the last time the file was accessed and then the filename.

# find / -type f -printf "\n%AT %p" | head -n 11

11:30:38.0000000000 /usr/lib/nvidia/pre-install
10:34:45.2694776230 /usr/lib/libcpufreq.so.0.0.0
06:08:34.9819910430 /usr/lib/libchromeXvMCPro.so.1.0.0
06:08:34.9939910430 /usr/lib/libt1.so.5.1.2
06:08:35.0099910420 /usr/lib/libchromeXvMC.so.1.0.0
06:08:35.0099910420 /usr/lib/libcdr-0.0.so.0.0.15
11:19:25.2656728990 /usr/lib/msttcorefonts/update-ms-fonts
12:56:34.0000000000 /usr/lib/ldscripts/elf32_x86_64.xr
12:56:34.0000000000 /usr/lib/ldscripts/elf_i386.xbn
12:56:34.0000000000 /usr/lib/ldscripts/i386linux.xn

The command below sorts the output based on the first character of the first field, which is the first digit of the hour.

# find / -type f -printf "\n%AT %p" | head -n 11 | sort -k1.1n

06:08:34.9819910430 /usr/lib/libchromeXvMCPro.so.1.0.0
06:08:34.9939910430 /usr/lib/libt1.so.5.1.2
06:08:35.0099910420 /usr/lib/libcdr-0.0.so.0.0.15
06:08:35.0099910420 /usr/lib/libchromeXvMC.so.1.0.0
10:34:45.2694776230 /usr/lib/libcpufreq.so.0.0.0
11:19:25.2656728990 /usr/lib/msttcorefonts/update-ms-fonts
11:30:38.0000000000 /usr/lib/nvidia/pre-install
12:56:34.0000000000 /usr/lib/ldscripts/elf32_x86_64.xr
12:56:34.0000000000 /usr/lib/ldscripts/elf_i386.xbn
12:56:34.0000000000 /usr/lib/ldscripts/i386linux.xn

7. Sorting Output of ls -l Based on Date

This command sorts the output of 'ls -l' based on the 6th field month-wise, then on the 7th field, which is the day of the month, numerically.

# ls -l | sort -k6M -k7n

total 116
-rw-r--r-- 1 root root     0 Oct  1 19:51 backup.tgz
drwxr-xr-x 2 root root  4096 Oct  7 15:27 Desktop
-rw-r--r-- 1 root root 15853 Oct  7 15:19 powertop_report.csv
-rw-r--r-- 1 root root 79112 Oct  7 15:25 powertop.html
-rw-r--r-- 1 root root     0 Oct 16 15:26 file3
-rw-r--r-- 1 root root    13 Oct 16 15:17 B
-rw-r--r-- 1 root root    21 Oct 16 15:16 A
-rw-r--r-- 1 root root    64 Oct 16 15:38 C

Conclusion

Likewise, with some knowledge of the sort command, you can sort almost any listing based on any field, and even any column, you desire. These were some tricks to help you sort files based on date or time; you can build your own tricks based on these. However, if you have any other interesting trick, you can always mention it in the comments.


20 practical Python libraries for every Python programmer

Web apps, web crawling, database access, GUI creation, parsing, image processing, and lots more—these handy tools have you covered.


Want a good reason for the smashing success of the Python programming language? Look no further than the massive collection of libraries available for Python, both native and third-party. With so many Python libraries out there, though, it's no surprise that some don't get all the attention they deserve. Plus, programmers who work exclusively in one domain don't always know about the goodies available to them for other kinds of work.

Here are 20 Python libraries you may have overlooked but are definitely worth your attention. These gems run the gamut of usefulness, simplifying everything from file system access, database programming, and working with cloud services to building lightweight web apps, creating GUIs, and working with images, ebooks, and Word files, and much more besides. Some are well-known, others lesser-known, but all of these Python libraries deserve a place in your toolbox.

Apache Libcloud

What Libcloud does: Access multiple cloud providers through a single, consistent, unified API.

Why use Libcloud: If the above description of Apache Libcloud doesn’t make you clap your hands for joy, then you haven’t tried working with multiple clouds. Cloud providers all love to do things their way, making a unified mechanism for dealing with dozens of providers a huge timesaver and headache-soother. APIs are available for compute, storage, load balancing, and DNS, with support for Python 2.x and Python 3.x as well as PyPy, the performance-boosting JIT compiler for Python.
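
As a taste of the API, here is a sketch using Libcloud's built-in dummy driver, which fakes a compute provider so the code runs without any cloud credentials:

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Swap Provider.DUMMY for a real provider once you have credentials
cls = get_driver(Provider.DUMMY)
driver = cls(0)  # the dummy driver takes a fake credentials argument
print(driver.list_nodes())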

Arrow

What Arrow does: Cleaner handling of dates and times in Python.

Why use Arrow: Dealing with time zones, date conversions, date formats, and all the rest is already a headache and a half. Throw in Python’s standard library for date/time work, and you get two headaches and a half.

Arrow provides four big advantages. One, Arrow is a drop-in replacement for Python’s datetime module, meaning that common function calls like .now() and .utcnow() work as expected. Two, Arrow provides methods for common needs like shifting and converting time zones. Three, Arrow provides “humanized” date/time information—such as being able to say something happened “an hour ago” or will happen “in two hours” without much effort. Four, Arrow can localize date/time information without breaking a sweat.
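
A few of those advantages in a short sketch:

import arrow

utc = arrow.utcnow()            # drop-in analog of datetime.utcnow()
local = utc.to('US/Pacific')    # time zone conversion
earlier = utc.shift(hours=-2)   # date/time shifting
print(earlier.humanize())       # "2 hours ago"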

Behold

What Behold does: Robust support for print-style debugging in Python.

Why use Behold: There is one simple way to debug in Python, or almost any programming language for that matter: Insert in-line print statements. But while print-debugging is a no-brainer in small programs, it’s not so easy to get useful results within large, sprawling, multi-module projects.

Behold provides a toolkit for contextual debugging via print statements. It allows you to impose a uniform look on the output, tag the results so they can be sorted via searches or filters, and provide contexts across modules so that functions that originate in one module can be debugged properly in another. Behold handles many common Python-specific scenarios like printing an object’s internal dictionary, unveiling nested attributes, and storing and reusing results for comparison at other points during the debugging process.

Bottle

What Bottle does: Lightweight and fast web apps.

Why use Bottle: When you want to throw together a quick RESTful API or use the bare bones of a web framework to build an app, capable yet tiny Bottle gives you no more than you need. Routing, templates, access to request and response data, support for multiple server types from plain old CGI on up, and support for more advanced features like WebSockets—it’s all here. The amount of work needed to get started is likewise minimal, and Bottle’s design is elegantly extensible when more advanced functions are needed. 
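
The canonical hello-world shows how little boilerplate Bottle needs:

from bottle import route, run

@route('/hello/<name>')
def hello(name):
    # The URL fragment is passed straight in as a function argument
    return 'Hello, {}!'.format(name)

run(host='localhost', port=8080)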

EbookLib

What EbookLib does: Read and write .epub files.

Why use EbookLib: Creating ebooks typically requires wrangling one command-line tool or another. EbookLib provides management tools and APIs that simplify the process. It works with EPUB 2 and EPUB 3 files, with Kindle support under development.

Provide the images and the text (the latter in HTML format), and EbookLib can assemble those pieces into an ebook complete with chapters, nested table of contents, images, HTML markup, and so on. Cover, spine, and stylesheet data are all supported, too. A plug-in system allows third parties to extend the library’s behaviors.
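
A minimal assembly sketch (the titles, file names and content here are invented for illustration):

from ebooklib import epub

book = epub.EpubBook()
book.set_title('Sample Book')
book.set_language('en')

chapter = epub.EpubHtml(title='Chapter 1', file_name='ch1.xhtml')
chapter.content = '<h1>Chapter 1</h1><p>Hello, ebook.</p>'
book.add_item(chapter)

# Navigation files and a reading order (spine) are needed for a valid EPUB
book.add_item(epub.EpubNcx())
book.add_item(epub.EpubNav())
book.spine = ['nav', chapter]

epub.write_epub('sample.epub', book)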

If you don’t need everything EbookLib has to offer, try Mkepub. Mkepub packs basic ebook assembly functionality in a library that is only a few kilobytes in size. One minor drawback of Mkepub is that it requires Jinja2, which in turn requires the MarkupSafe library.

Gooey

What Gooey does: Give a console-based Python program a platform-native GUI.

Why use Gooey: Presenting users, especially rank-and-file users, with a command-line interface is among the best ways to discourage use of your application. Few apart from the hardcore geek like figuring out what options to pass in and in what order. Gooey takes arguments expected by the argparse library and presents them to users as a GUI form, by way of the WxPython library. All options are labeled and displayed with appropriate controls (such as a drop-down for a multi-option argument). Very little additional coding—a single include and a single decorator—is needed to make it work, assuming you’re already using argparse.
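
If your script already uses argparse, the change really is one decorator; here is a sketch (the argument names are invented):

from argparse import ArgumentParser
from gooey import Gooey

@Gooey
def main():
    # Ordinary argparse code; Gooey renders it as a GUI form
    parser = ArgumentParser(description='Demo app')
    parser.add_argument('filename', help='file to process')
    parser.add_argument('--verbose', action='store_true')
    args = parser.parse_args()
    print(args.filename, args.verbose)

if __name__ == '__main__':
    main()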

Invoke

What Invoke does: "Pythonic remote execution" – i.e., perform admin tasks using a Python library.

Why use Invoke: Using Python as a replacement for common shell scripting tasks makes a world of sense. Invoke provides a high-level API for running shell commands and managing command-line tasks as if they were Python functions, allowing you to embed those tasks in your own code or elegantly build around them.
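
Tasks are ordinary decorated functions; a small sketch (the task names and commands are invented):

from invoke import task

@task
def clean(c):
    # c is the runtime context; c.run executes a shell command
    c.run('rm -rf build')

@task(pre=[clean])
def build(c):
    c.run('python setup.py sdist', echo=True)

Running invoke build from the same directory then executes clean first.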

Nuitka

What Nuitka does: Compile Python into self-contained C executables.

Why use Nuitka: Like Cython, Nuitka compiles Python into C. However, whereas Cython requires its own custom syntax for best results, and focuses mainly on math and statistics applications, Nuitka works with any Python program as-is, compiles it into C, and produces a single-file executable, applying optimizations where it can along the way. Nuitka is still in its early stages, and many of the planned optimizations are still to come. Nevertheless, it's a convenient way to turn a Python script into a speedy command-line app.
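
Typical usage is a single command; --follow-imports tells Nuitka to compile the modules your script imports as well (program.py is a placeholder):

python -m nuitka --follow-imports program.py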

Numba

What Numba does: Selectively speed up math-intensive functions.

Why use Numba: The Python world includes a whole subculture of packages for accelerating math operations. For example, NumPy works by wrapping high-speed C libraries in a Python interface, and Cython compiles Python to C with optional typing for accelerated performance. But Numba is easily the most convenient, as it allows Python functions to be selectively accelerated with nothing more than a decorator. For further speed boosts, you can use common Python idioms to parallelize workloads, or use SIMD or GPU instructions. Note that you can use NumPy with Numba, but in many cases Numba will outperform NumPy many times over.
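
The decorator in question is njit; a standard Monte Carlo sketch:

import random
from numba import njit

@njit
def monte_carlo_pi(n):
    # Count random points that land inside the unit quarter-circle
    acc = 0
    for _ in range(n):
        x = random.random()
        y = random.random()
        if x * x + y * y <= 1.0:
            acc += 1
    return 4.0 * acc / n

print(monte_carlo_pi(10_000_000))  # compiled on first call, fast afterwards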

Peewee

What Peewee does: A tiny ORM (object-relational mapper) that supports SQLite, MySQL, and PostgreSQL, with many extensions.

Why use Peewee: Not everyone loves an ORM; some would rather leave schema modeling on the database side and be done with it. But for developers who don't want to touch databases, a well-constructed, unobtrusive ORM can be a godsend. And for developers who don't want an ORM as full-blown as SQLAlchemy, Peewee is a great fit.

Peewee models are easy to construct, connect, and manipulate. Plus, many common query-manipulation functions, such as pagination, are built right in. More features are available as add-ons including extensions for other databases, testing tools, and a schema migration system—a feature even an ORM hater could learn to love. Note that the Peewee 3.x branch (the recommended edition) is not completely backward-compatible with previous versions of Peewee.
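
Defining and querying a model is compact; a sketch against a throwaway SQLite file (names invented):

from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase('people.db')

class Person(Model):
    name = CharField()
    age = IntegerField()

    class Meta:
        database = db

db.connect()
db.create_tables([Person])
Person.create(name='Ada', age=36)
for person in Person.select().where(Person.age > 30):
    print(person.name)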

Pillow

What Pillow does: Image processing without the pain.

Why use Pillow: Most Pythonistas who have performed image processing ought to be familiar with PIL (Python Imaging Library), but PIL is riddled with shortcomings and limitations, and it's updated infrequently. Pillow aims to be both easier to use and code-compatible with PIL via minimal changes. Extensions are included for talking to both native Windows imaging functions and Python's Tcl/Tk-backed Tkinter GUI package. Pillow is available through GitHub or the PyPI repository.
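
A thumbnail sketch (the file names are invented):

from PIL import Image

img = Image.open('photo.jpg')
img.thumbnail((128, 128))   # resize in place, preserving aspect ratio
img.save('photo-thumb.png')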

PyFilesystem

What PyFilesystem does: A Pythonic interface to any file system — any file system.

Why use PyFilesystem: The fundamental idea behind PyFilesystem couldn't be simpler: Just as Python's file objects abstract a single file, PyFilesystem's FS objects abstract an entire file system. This doesn't mean only on-disk file systems, either. PyFilesystem also supports FTP directories, in-memory file systems, file systems for locations defined by the OS (such as the user directory), and even combinations of the above overlaid onto each other.

In addition to making it easier to write cross-platform code that manipulates files, PyFilesystem obviates the need to cobble together scripts from disparate parts of the standard library, mainly os and io. It also provides utilities that one might otherwise need to create from scratch, like a tool for printing console-friendly tree views of a file system.
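
As a sketch of the idea, the same API works whether the file system lives on disk or in memory:

from fs import open_fs

mem_fs = open_fs('mem://')  # an in-memory file system
mem_fs.makedirs('project/docs')
mem_fs.writetext('project/docs/readme.txt', 'hello')
mem_fs.tree()  # print a console-friendly tree view

Swap 'mem://' for 'osfs://~/' or an FTP URL and the rest of the code stays the same.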

Pygame

What Pygame does: Create video games, or game-quality front-ends, in Python.

Why use Pygame: If you think anyone outside of the game development world would ever bother with such a framework, think again. Pygame is a handy way to work with many GUI-oriented behaviors that might otherwise demand a lot of heavy lifting: drawing canvas and sprite graphics, dealing with multichannel sound, handling windows and click events, detecting collisions, and so on. Not every app—or even every GUI app—will benefit from being built with Pygame, but you ought to take a close look at what Pygame provides. You might be surprised!
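
For reference, a skeletal Pygame program is little more than an event loop (this sketch just draws a blank window):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((30, 30, 30))   # dark gray background
    pygame.display.flip()       # push the frame to the screen
    clock.tick(60)              # cap at 60 frames per second

pygame.quit()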

Pyglet

What Pyglet does: Cross-platform multimedia and window graphics in pure Python.

Why use Pyglet: Pyglet provides handy access to items that are tedious to implement from scratch for a GUI application: window functions, OpenGL graphics, audio and video playback, keyboard and mouse handling, and working with image files. Note, though, that Pyglet doesn’t provide UI widgets like buttons, toolbars, or menus.

All of this is done through the native platform capabilities in Windows, OS X, or Linux, so there are no binary dependencies; Pyglet is pure Python. It’s also BSD-licensed, so it can be included in any commercial or open source project.
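
A minimal window with some text is a good illustration of how little boilerplate Pyglet demands (the caption and label are illustrative):

import pyglet

window = pyglet.window.Window(caption='Hello, Pyglet')
label = pyglet.text.Label('Hello, world',
                          x=window.width // 2, y=window.height // 2,
                          anchor_x='center', anchor_y='center')

@window.event
def on_draw():
    window.clear()
    label.draw()

pyglet.app.run()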

PyInstaller

What PyInstaller does: Package a Python script as a stand-alone executable.

Why use PyInstaller: A common complaint with Python is that it’s harder than it ought to be to distribute a script to other users. PyInstaller lets you package any Python script—even scripts that include complex third-party modules with binaries, like NumPy—and distribute it as a single-folder or single-file application. PyInstaller tends to pack more into that folder or file than is really needed, so the final results can be bulky. But that tendency can be overcome with practice, and the sheer convenience PyInstaller provides is hard to beat.
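
Basic usage is a one-liner; for instance, to bundle a script into a single-file executable (the script name is illustrative):

$ pyinstaller --onefile myscript.py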

PySimpleGUI

What PySimpleGUI does: Creating GUIs in Python with a minimum of fuss.

Why use PySimpleGUI: Python ships with the Tkinter library for creating GUIs, but Tkinter is not known for being easy to work with. PySimpleGUI wraps Tkinter with APIs that are far less exasperating. Many common effects, like a simple dialog box or pop-up menu, can be accomplished in a single line of code. The interfaces still have Tkinter’s trademark look, though. If you want a more sophisticated look and feel, you’ll need to look elsewhere.
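
To give a sense of how terse it is, here is a complete two-dialog program (a sketch, not a full application):

import PySimpleGUI as sg

name = sg.popup_get_text('What is your name?')  # one-line input dialog
sg.popup('Hello,', name)                        # one-line message box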

Python-docx

What Python-docx does: Programmatically manipulate Microsoft Word .docx files.

Why use Python-docx: In theory, it should be easy to write scripts that create and update XML-style Microsoft Word documents. In practice, it is far from simple, due to all of the internal complexities of the .docx format. Python-docx lets you do an end run around all of those complexities, by providing a high-level API for working with .docx files.

Python-docx lets you add or change text, images, tables, styles, document sections, and headers and footers. The library allows you to create new documents or change existing documents. Python-docx is a great way to pull raw text from Word files, or to avoid dealing with Word’s own built-in automation functions.
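
Here is a minimal sketch of building a document from scratch (the content and file name are illustrative):

from docx import Document

doc = Document()
doc.add_heading('Report', level=1)
doc.add_paragraph('Generated with python-docx.')
table = doc.add_table(rows=1, cols=2)
table.rows[0].cells[0].text = 'Key'
table.rows[0].cells[1].text = 'Value'
doc.save('report.docx')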

Scrapy

What Scrapy does: Screen scraping and web crawling.

Why use Scrapy: Scrapy makes scraping simple. Create a class that defines the items you want scraped and write some rules to extract that data from the page. The results can be exported as JSON, XML, CSV, or any number of other formats. The collected data can be saved raw or sanitized as it is imported.

Scrapy can be extended to handle many other tasks, such as logging into a website and handling session cookies. Images, too, can be scraped up by Scrapy and associated with the captured content. The latest versions add direct connections to cloud services for storing scraped data, re-usable proxy connections, and better handling of esoteric HTML and HTTP behaviors.
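
A complete spider can be remarkably short; this sketch targets Scrapy’s own practice site, and the CSS selectors are specific to that page:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Yield one item per quote block on the page
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }

Run it with scrapy runspider quotes_spider.py -o quotes.json and the results land in a JSON file.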

Sh

What Sh does: Call any external program, in a subprocess, and return the results to a Python program—using the same syntax as if the program in question were a native Python function.

Why use Sh: On any POSIX-compliant system, Sh is a godsend, allowing any command-line program available on that system to be used Pythonically. Not only are you freed from having to reinvent the wheel (why implement ping when it’s right there in the OS?), but you no longer have to struggle with adding that functionality elegantly to your application. However, be forewarned: Sh provides no sanitization of the parameters that are passed through. Be sure never to pass along raw user input.
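
Here is a small sketch of the idea (it assumes a POSIX system with these programs on the PATH):

import sh

print(sh.uname('-a'))              # runs: uname -a
for line in sh.ping('google.com', c=4, _iter=True):
    print(line, end='')            # keyword args become flags: -c 4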

Splinter

What Splinter does: Test web applications by automating browser actions.

Why use Splinter: Let’s face it, trying to automate web application testing is no one’s idea of fun. Splinter eliminates the low-level grunt work: invoking the browser, passing URLs, filling out forms, clicking buttons, and so on, automating the whole process from end to end.

Splinter provides drivers to work with Chrome and Firefox, and it can use Selenium Remote to control a browser running elsewhere. You can even manually execute JavaScript in the target browser.
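
A session looks like this sketch (the search-form field name and button ID are illustrative and site-specific):

from splinter import Browser

with Browser('firefox') as browser:   # requires the matching WebDriver
    browser.visit('https://duckduckgo.com')
    browser.fill('q', 'splinter python')
    browser.find_by_id('search_button_homepage').first.click()
    print(browser.title)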

Source

How to use the Linux timeout command

If you tend to issue commands and accidentally leave them running, you might want to employ the timeout command

This article shows you how to use a built-in Linux command to keep you from accidentally leaving your commands running for hours on end.

Linux admins are notorious for depending on the command line. With good reason. The command line is incredibly powerful. There is no end to what you can do with Linux commands.

However, there are times when you want to run a command but don’t want it to keep running after you’ve forgotten about it, gobbling up CPU cycles, filling up logs, or just generally churning away in the background.

How it works

Regardless of why you don’t want to allow a command to run forever, the how is quite simple, thanks to the timeout command. The timeout command should be installed by default and is very simple to use. Say you want to run a ping command on google.com for five seconds (because who hasn’t forgotten they’d run a ping command, only to come back hours later to see it still pinging the target address?).

To do this, log into your Ubuntu server or desktop, open a terminal window, and issue the command timeout 5 ping google.com. The ping command will do its thing for five seconds and stop. Or say you want to follow the syslog log file with tail for ten seconds. That command would be:

timeout 10 tail -f /var/log/syslog

After the configured 10 seconds, the tail command will end. And that’s how you can automatically stop your commands, without having to resort to the old [Ctrl]+[C] keyboard combination. If you tend to issue commands and accidentally leave them running, you might want to start employing the timeout command, before your IT manager puts you in a timeout.
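
For stubborn commands that ignore the TERM signal timeout sends by default, GNU timeout can escalate with the -k (kill-after) option. For example:

timeout -k 5 10 tail -f /var/log/syslog

This sends TERM after 10 seconds and, if tail is somehow still running 5 seconds later, follows up with KILL.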

Source

Install RPM packages on Ubuntu

The Ubuntu repositories contain thousands of deb packages, which can be installed from the Ubuntu Software Center or by using the apt command line utility. Deb is the installation package format used by all Debian-based distributions, including Ubuntu. Some packages are not available in the standard Ubuntu repositories, but they can easily be installed by enabling the appropriate source.

In most cases when the software vendor does not provide a repository they will have a download page from where you can download and install the deb package or download and compile the software from sources.

Although not so often, some software may be distributed only as an RPM package. RPM is a package format used by Red Hat and its derivatives such as CentOS. Luckily, there is a tool called alien that allows us to install an RPM file on Ubuntu or to convert an RPM package file into a Debian package file.

This is not the recommended way to install software packages in Ubuntu. Whenever possible you should prefer installing software from the Ubuntu repositories.

Not all RPM packages can be installed on Ubuntu. Installing RPM packages on Ubuntu may lead to package dependency conflicts.

You should never use this method to replace or update important system packages, like libc, systemd, or other services and libraries that are essential for the proper functioning of your system. Doing this may lead to errors and system instability.

Alien is a tool that supports conversion between Red Hat rpm, Debian deb, Stampede slp, Slackware tgz, and Solaris pkg file formats.

Before installing the alien package make sure the Universe repository is enabled on your system:

sudo add-apt-repository universe

Once the repository is enabled update the packages index and install the alien package with:

sudo apt update
sudo apt install alien

The command above will also install the necessary build tools.

To convert a package from RPM to DEB format use the alien command followed by the RPM package name:

sudo alien package_name.rpm

Depending on the package size the conversion may take some time. In most cases, you will see warning messages printed on your screen. If the package is successfully converted the output will indicate that the DEB package is generated:

package_name.deb generated

To install the deb package, you can use either the dpkg or the apt utility:

sudo dpkg -i package_name.deb

sudo apt install ./package_name.deb

The package should now be installed, assuming it’s compatible with your system and all dependencies are met.

You’ll need to be logged in as a user with sudo access to be able to install packages on your Ubuntu system.

Instead of converting and then installing the package, you can use the -i option, which tells alien to install the RPM package directly:

sudo alien -i package_name.rpm

The command above will automatically generate and install the package, then remove the package file once the installation is complete.

In this tutorial, you learned how to install RPM packages on Ubuntu.

If you have any questions or feedback, feel free to leave a comment.

Source
