Introduction to Linux Server Security Hardening

Securing Linux servers is a difficult and time-consuming task for system administrators, but hardening is necessary to keep a server safe from attackers and black-hat hackers. You can secure a server by configuring the system properly and installing as little software as possible. The following tips can help you protect your server against network and privilege-escalation attacks.

Upgrade your Kernel

An outdated kernel is prone to several network and privilege-escalation attacks, so keep it updated using apt on Debian-based distributions or dnf (formerly yum) on Fedora and RHEL-based distributions.

sudo apt-get update
sudo apt-get dist-upgrade

Disabling Root Cron Jobs

Cron jobs running as root or another high-privilege account can be abused by attackers to escalate privileges. You can list the scheduled cron jobs with:

ls /etc/cron*

Strict Firewall Rules

You should block any unnecessary inbound or outbound connections on uncommon ports. You can manage your firewall rules with iptables, a flexible and easy-to-use utility for blocking or allowing incoming and outgoing traffic. To install it, run:

sudo apt-get install iptables

Here's an example that blocks incoming FTP traffic using iptables:

iptables -A INPUT -p tcp --dport ftp -j DROP

Disable unnecessary Services

Stop any unwanted services and daemons running on your system. You can list running services using the following commands:

ubuntu@ubuntu:~$ service --status-all

[ + ]  acpid
[ - ]  alsa-utils
[ - ]  anacron
[ + ]  apache-htcacheclean
[ + ]  apache2
[ + ]  apparmor
[ + ]  apport
[ + ]  avahi-daemon
[ + ]  binfmt-support
[ + ]  bluetooth
[ - ]  cgroupfs-mount

…snip…

Or, on older SysV-init systems, use:

chkconfig --list | grep '3:on'

To stop a service, type

sudo service [SERVICE_NAME] stop

OR

sudo systemctl stop [SERVICE_NAME]

Check for Backdoors and Rootkits

Utilities like rkhunter and chkrootkit can be used to detect known backdoors and rootkits. They verify installed packages and configuration files to check the system's integrity. To install rkhunter, run:

ubuntu@ubuntu:~$ sudo apt-get install rkhunter -y

To scan your system, type

ubuntu@ubuntu:~$ sudo rkhunter --check

[ Rootkit Hunter version 1.4.6 ]

Checking system commands…

Performing 'strings' command checks
Checking 'strings' command                           [ OK ]

Performing 'shared libraries' checks
Checking for preloading variables                    [ None found ]
Checking for preloaded libraries                     [ None found ]
Checking LD_LIBRARY_PATH variable                    [ Not found ]

Performing file properties checks
Checking for prerequisites                           [ OK ]
/usr/sbin/adduser                                    [ OK ]
/usr/sbin/chroot                                     [ OK ]

…snip…

Check Listening Ports

You should check for listening ports that aren't needed and disable the services behind them. To list open ports, run:

azad@ubuntu:~$ sudo netstat -ulpnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address   State      PID/Program name

tcp        0    0 127.0.0.1:6379        0.0.0.0:*        LISTEN     2136/redis-server 1

tcp        0    0 0.0.0.0:111           0.0.0.0:*        LISTEN     1273/rpcbind

tcp        0    0 127.0.0.1:5939        0.0.0.0:*        LISTEN     2989/teamviewerd

tcp        0    0 127.0.0.53:53         0.0.0.0:*        LISTEN     1287/systemd-resolv

tcp        0    0 0.0.0.0:22            0.0.0.0:*        LISTEN     1939/sshd

tcp        0    0 127.0.0.1:631         0.0.0.0:*        LISTEN     20042/cupsd

tcp        0    0 127.0.0.1:5432        0.0.0.0:*        LISTEN     1887/postgres

tcp        0    0 0.0.0.0:25            0.0.0.0:*        LISTEN     31259/master

…snip…
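On many modern distributions netstat is deprecated in favor of ss from the iproute2 package; a rough equivalent of the command above (assuming ss is available, as it is on most current systems):

```shell
# List listening TCP/UDP sockets with numeric ports and owning processes
sudo ss -tulpn

# Without root you can still list listening sockets, minus most process names
ss -tuln
```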

Use an IDS (Intrusion Detection System)

Use an IDS to inspect network traffic and detect malicious activity. Snort is an open-source IDS available for Linux. You can install it as follows:

wget https://www.snort.org/downloads/snort/daq-2.0.6.tar.gz
wget https://www.snort.org/downloads/snort/snort-2.9.12.tar.gz
tar xvzf daq-2.0.6.tar.gz
cd daq-2.0.6
./configure && make && sudo make install
cd ..
tar xvzf snort-2.9.12.tar.gz
cd snort-2.9.12
./configure --enable-sourcefire && make && sudo make install

To monitor network traffic, type

ubuntu@ubuntu:~$ sudo snort

Running in packet dump mode
--== Initializing Snort ==--

Initializing Output Plugins!
pcap DAQ configured to passive.

Acquiring network traffic from "tun0".
Decoding Raw IP4

--== Initialization Complete ==--

…snip…

Disable Logging in as Root

The root user has full privileges and the power to do anything on the system. Instead of logging in as root, you should enforce using sudo to run administrative commands.
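One common way to enforce this is to forbid direct root logins over SSH. A minimal sketch of the relevant sshd_config directives (the file location is assumed to be the usual /etc/ssh/sshd_config; restart sshd after editing):

```
# /etc/ssh/sshd_config
PermitRootLogin no          # refuse direct root SSH logins
PasswordAuthentication no   # optional: allow key-based logins only
```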

Remove Unowned Files

Files owned by no user or group can be a security threat. You should search for these files and either remove them or assign them a proper user and group. To search for such files, run:

find /dir -xdev \( -nouser -o -nogroup \) -print

Use SSH and SFTP

For file transfers and remote administration, use SSH and SFTP instead of telnet and other insecure, unencrypted protocols. SFTP is provided by the OpenSSH server itself. To install, run:

sudo apt-get install vsftpd -y
sudo apt-get install openssh-server -y

Monitor Logs

Install and set up a log-analysis utility to review system logs and event data regularly and catch suspicious activity early. For example:

sudo apt-get install -y loganalyzer

Uninstall Unused Software

Install as little software as possible to keep the attack surface small: the more software you have, the more avenues of attack you expose. Remove any unneeded packages from your system. To inspect installed packages, run:

dpkg --list
apt list --installed

 

To remove a package:

sudo apt-get remove [PACKAGE_NAME] -y
sudo apt-get clean

Conclusion

Linux server security hardening is very important for enterprises and businesses, and it is a difficult and tiresome task for system administrators. Some of the work can be automated by utilities such as SELinux and similar software. Also, keeping installed software to a minimum and disabling unused services and ports reduces the attack surface.

Source

OpenStack Deployment using Devstack on CentOS 7 and RHEL7

Devstack is a collection of scripts that deploy the latest version of an OpenStack environment on a virtual machine, personal laptop, or desktop. As the name suggests, it is meant for development environments; it can be used for functional testing of OpenStack projects, and an environment deployed by devstack is sometimes also used for demonstrations and basic PoCs.

In this article I will demonstrate how to install OpenStack on a CentOS 7 / RHEL 7 system using Devstack. The minimum system requirements are:

  • Dual Core Processor
  • Minimum 8 GB RAM
  • 60 GB Hard Disk
  • Internet Connection

Following are the details of my lab setup for the OpenStack deployment using devstack:

  • Minimal Installed CentOS 7 / RHEL 7 (VM)
  • Hostname – devstack-linuxtechi
  • IP Address – 169.144.104.230
  • 10 vCPU
  • 14 GB RAM
  • 60 GB Hard disk

Let's start with the deployment steps. Log in to your CentOS 7 or RHEL 7 system.

Step:1) Update Your System and Set Hostname

Run the following yum command to apply the latest updates to the system, then reboot. After the reboot, set the hostname:

~]# yum update -y && reboot
~]# hostnamectl set-hostname "devstack-linuxtechi"
~]# exec bash

Step:2) Create a Stack user and assign sudo rights to it

All the installation steps are to be carried out by a user named "stack"; refer to the commands below to create the user and assign sudo rights.

[root@devstack-linuxtechi ~]# useradd -s /bin/bash -d /opt/stack -m stack
[root@devstack-linuxtechi ~]# echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
stack ALL=(ALL) NOPASSWD: ALL
[root@devstack-linuxtechi ~]#

Step:3) Install git and download devstack

Switch to the stack user and install the git package using yum:

[root@devstack-linuxtechi ~]# su - stack
[stack@devstack-linuxtechi ~]$ sudo yum install git -y

Download devstack using the git command below:

[stack@devstack-linuxtechi ~]$ git clone https://git.openstack.org/openstack-dev/devstack
Cloning into 'devstack'...
remote: Counting objects: 42729, done.
remote: Compressing objects: 100% (21438/21438), done.
remote: Total 42729 (delta 30283), reused 32549 (delta 20625)
Receiving objects: 100% (42729/42729), 8.93 MiB | 3.77 MiB/s, done.
Resolving deltas: 100% (30283/30283), done.
[stack@devstack-linuxtechi ~]$

Step:4) Create local.conf file and start openstack installation

To start the OpenStack installation using the devstack script (stack.sh), we first need to prepare a local.conf file that suits our setup.

Change to the devstack folder and create the local.conf file with the contents below:

[stack@devstack-linuxtechi ~]$ cd devstack/
[stack@devstack-linuxtechi devstack]$ vi local.conf
[[local|localrc]]
#Specify the IP Address of your VM / Server in front of HOST_IP Parameter
HOST_IP=169.144.104.230

#Specify the name of interface of your Server or VM in front of FLAT_INTERFACE
FLAT_INTERFACE=eth0

#Specify the Tenants Private Network and its Size
FIXED_RANGE=10.4.128.0/20
FIXED_NETWORK_SIZE=4096

#Specify the range of external IPs that will be used in Openstack for floating IPs
FLOATING_RANGE=172.24.10.0/24

#Number of hosts on which Openstack will be deployed
MULTI_HOST=1

#Installation Logs file
LOGFILE=/opt/stack/logs/stack.sh.log

#KeyStone Admin Password / Database / RabbitMQ / Service Password
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=db-secret
RABBIT_PASSWORD=rb-secret
SERVICE_PASSWORD=sr-secret

#Additionally installing Heat Service
enable_plugin heat https://git.openstack.org/openstack/heat master
enable_service h-eng h-api h-api-cfn h-api-cw

Save and exit the file.

Now start the deployment or installation by executing the script (stack.sh)

[stack@devstack-linuxtechi devstack]$ ./stack.sh

It will take between 30 to 45 minutes depending upon your internet connection.

While running the above command, if you get the errors below:

+functions-common:git_timed:607            timeout -s SIGINT 0 git clone git://git.openstack.org/openstack/requirements.git /opt/stack/requirements --branch master
fatal: unable to connect to git.openstack.org:
git.openstack.org[0: 104.130.246.85]: errno=Connection timed out
git.openstack.org[1: 2001:4800:7819:103:be76:4eff:fe04:77e6]: errno=Network is unreachable
Cloning into '/opt/stack/requirements'...
+functions-common:git_timed:610            [[ 128 -ne 124 ]]
+functions-common:git_timed:611            die 611 'git call failed: [git clone' git://git.openstack.org/openstack/requirements.git /opt/stack/requirements --branch 'master]'
+functions-common:die:195                  local exitcode=0
[Call Trace]
./stack.sh:758:git_clone
/opt/stack/devstack/functions-common:547:git_timed
/opt/stack/devstack/functions-common:611:die
[ERROR] /opt/stack/devstack/functions-common:611 git call failed: [git clone git://git.openstack.org/openstack/requirements.git /opt/stack/requirements --branch master]
Error on exit
/bin/sh: brctl: command not found
[stack@devstack-linuxtechi devstack]$

To resolve these errors, perform the following steps.

Install the bridge-utils package (which provides the brctl command) and, in the stackrc file, change the parameter "GIT_BASE=${GIT_BASE:-git://git.openstack.org}" to "GIT_BASE=${GIT_BASE:-https://www.github.com}":

[stack@devstack-linuxtechi devstack]$ sudo yum install bridge-utils -y
[stack@devstack-linuxtechi devstack]$ vi stackrc
……
#GIT_BASE=${GIT_BASE:-git://git.openstack.org}
GIT_BASE=${GIT_BASE:-https://www.github.com}
……

Now re-run the stack.sh script,

[stack@devstack-linuxtechi devstack]$ ./stack.sh

Once the script has executed successfully, we will get output something like below:

[Screenshot: stack.sh command output on CentOS 7]

This confirms that OpenStack has been deployed successfully.

Step:5) Access OpenStack via the OpenStack CLI or the Horizon Dashboard

If you want to perform any task from the OpenStack CLI, you first have to source the openrc file, which contains the admin credentials.

[stack@devstack-linuxtechi devstack]$ source openrc
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
[stack@devstack-linuxtechi devstack]$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID                                   | Name    | Subnets                                                                    |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 5ae5a9e3-01ac-4cd2-86e3-83d079753457 | private | 9caa54cc-f5a4-4763-a79e-6927999db1a1, a5028df6-4208-45f3-8044-a7476c6cf3e7 |
| f9354f80-4d38-42fc-a51e-d3e6386b0c4c | public  | 0202c2f3-f6fd-4eae-8aa6-9bd784f7b27d, 18050a8c-41e5-4bae-8ab8-b500bc694f0c |
+--------------------------------------+---------+----------------------------------------------------------------------------+
[stack@devstack-linuxtechi devstack]$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID                                   | Name                     | Status |
+--------------------------------------+--------------------------+--------+
| 5197ed8e-39d2-4eca-b36a-d38381b57adc | cirros-0.3.6-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
[stack@devstack-linuxtechi devstack]$

Now try accessing the Horizon dashboard; the URL and credentials are in the stack.sh command output.

http://{Your-Server-IP-Address}/dashboard

[Screenshots: Horizon dashboard login page and overview]

Remove / Uninstall OpenStack Using the Devstack Scripts

If you are done with testing and demonstration and want to remove OpenStack from your system, run the following scripts as the stack user.

[stack@devstack-linuxtechi ~]$ cd devstack
[stack@devstack-linuxtechi devstack]$ ./clean.sh
[stack@devstack-linuxtechi devstack]$ ./unstack.sh
[stack@devstack-linuxtechi devstack]$ rm -rf /opt/stack/
[stack@devstack-linuxtechi ~]$ sudo rm -rf devstack
[stack@devstack-linuxtechi ~]$ sudo rm -rf /usr/local/bin/

Note that the last command wipes everything in /usr/local/bin, including any binaries unrelated to devstack, so double-check before running it.

That's all from this tutorial. If you found the steps useful, please share your feedback and comments.

Source

C++ in the Linux kernel? – OSnews

OOP doesn’t imply using function pointers.

The essence of OOP is polymorphism, which you can achieve in C through function pointers.

What about just having a plain structure and associating functions with it? This is OO. In C, it forces you to adopt conventions like prefixing all your function names with the class name,…

I assume you're referring to encapsulation, and I don't see how having to adopt a convention of prefixing your functions with an object name (you're not forced to) is a big deal unless you hunt and peck.

…and explicitly dereference all the member variables whenever you access them.

I assume you’re referring to having to pass in a pointer to structure and then using -> to dereference within a function. I don’t see how that’s a problem.

What about access rights? There's no way in C to forbid access to certain members of a structure. You have to document them in some way, by adding a comment next to the members you consider private. And it's not enforced by the compiler, so errors can creep in.

Programmers should be using the public API. Having public, protected, and private is no silver bullet.

Yes, programming against interfaces is useful, but there are lots of cases where objects with zero runtime overhead, which implicitly do various housekeeping, are very useful. Or even classes that exist only to keep your code from filling up with long, messy functions, but amount to nothing at runtime.

Interfaces with virtual functions have a runtime cost, and these functions can’t be inlined. They’re not a silver bullet.

You don’t pay for what you don’t use in C also…unlike Java where you’re relying on the runtime to be smart about all methods being virtual.

And what about error management anyway? You have to think about freeing resources everywhere, and C has no mechanism whatsoever to help you do it. I don't think that's much better than an exception system, even if its pitfalls are not so hard to avoid.

Exceptions are useful, even though it has been pointed out that C++ exceptions have some major warts.

I understand your point, but I don't think it's really C++'s fault. It's more that since it allows you to do these things more easily, it's tempting to over-complicate stuff. I myself prefer straightforward code where I know what each class actually does, rather than a vast number of layers of abstraction I don't need.

Yes, you can do everything in C that you do in C++, like object programming and stuff.

The purpose of C++ isn’t to let you do things that can’t be done in C, it’s to simplify your life. Indeed, it can complicate it instead, but not if used properly.

Pretty much agree on these points. Shallow hierarchies and favoring composition are the way to go.

I've written lots of C++ code in my lifetime (more than any other language), and I think I somewhat grok proper OO. I've got Stroustrup's 3rd edition and the GoF Design Patterns book, and I think patterns are useful.

I guess my biggest problems with C++ are the syntax and the overcomplication of the language that seems to stem from its legacy of needing to be C-backward-compatible; I can't stand C++ streams, and I think much of the standard library API is bad.

Source

Linux Today – 5 Useful Ways to Do Arithmetic in Linux Terminal

In this article, we will show you various useful ways of doing arithmetic in the Linux terminal. By the end, you will know several practical ways of doing mathematical calculations on the command line.

Let’s get started!

1. Using Bash Shell

The first and easiest way to do basic math on the Linux CLI is using double parentheses. Here are some examples where we use values stored in variables:

$ ADD=$(( 1 + 2 ))
$ echo $ADD
$ MUL=$(( $ADD * 5 ))
$ echo $MUL
$ SUB=$(( $MUL - 5 ))
$ echo $SUB
$ DIV=$(( $SUB / 2 ))
$ echo $DIV
$ MOD=$(( $DIV % 2 ))
$ echo $MOD
Arithmetic in Linux Bash Shell
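Double parentheses also support exponentiation and compound expressions, for example:

```shell
POW=$(( 2 ** 10 ))    # exponentiation
echo $POW             # 1024
POW=$(( POW + 24 ))   # variables can be reused without the $ inside (( ))
echo $POW             # 1048
```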

2. Using expr Command

The expr command evaluates expressions and prints the value of the provided expression to standard output. We will look at different ways of using expr for doing simple math, making comparisons, incrementing the value of a variable, and finding the length of a string.

The following are some examples of simple calculations using the expr command. Note that many operators need to be escaped or quoted for the shell, for instance the * operator (we will look at more of these under comparison of expressions).

$ expr 3 + 5
$ expr 15 % 3
$ expr 5 \* 3
$ expr 5 - 3
$ expr 20 / 4
Basic Arithmetic Using expr Command in Linux

Next, we will cover comparisons. When an expression evaluates to false, expr prints 0; otherwise it prints 1.

Let’s look at some examples:

$ expr 5 = 3
$ expr 5 = 5
$ expr 8 != 5
$ expr 8 \> 5
$ expr 8 \< 5
$ expr 8 \<= 5
Comparing Arithmetic Expressions in Linux

You can also use the expr command to increment the value of a variable. Take a look at the following example (in the same way, you can also decrease the value of a variable).

$ NUM=$(( 1 + 2))
$ echo $NUM
$ NUM=$(expr $NUM + 2)
$ echo $NUM
Increment Value of a Variable

Let's also look at how to find the length of a string:

$ expr length "This is Tecmint.com"
Find Length of a String

For more information, especially on the meaning of the above operators, see the expr man page:

$ man expr
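Besides arithmetic, GNU expr also offers a few string operations (substr and index are GNU extensions, so they may be missing on non-GNU systems):

```shell
$ expr substr "Tecmint" 1 3   # first 3 characters: Tec
$ expr index "linux" "u"      # position of the first 'u': 4
```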

3. Using bc Command

bc (Basic Calculator) is a command-line utility that provides all the features you would expect from a simple scientific or financial calculator. It is especially useful for floating-point math.

If the bc command is not installed, you can install it using:

$ sudo apt install bc   #Debian/Ubuntu
$ sudo yum install bc   #RHEL/CentOS
$ sudo dnf install bc   #Fedora 22+

Once installed, you can run it in interactive mode, or non-interactively by passing arguments to it; we will look at both cases. To run it interactively, type bc at the command prompt and start doing some math, as shown.

$ bc 
Start bc in Interactive Mode

The following examples show how to use bc non-interactively on the command-line.

$ echo '3+5' | bc
$ echo '15 % 2' | bc
$ echo '15 / 2' | bc
$ echo '(6 * 2) - 5' | bc
Do Math Using bc in Linux

The -l flag sets the default scale (digits after the decimal point) to 20, for example:

$ echo '12/5' | bc
$ echo '12/5' | bc -l
Do Math with Floating Numbers

4. Using Awk Command

Awk is one of the most prominent text-processing programs in GNU/Linux. It supports the addition, subtraction, multiplication, division, and modulus arithmetic operators, and it is also useful for floating-point math.

You can use it to do basic math as shown.

$ awk 'BEGIN { a = 6; b = 2; print "(a + b) = ", (a + b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a - b) = ", (a - b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a *  b) = ", (a * b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a / b) = ", (a / b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a % b) = ", (a % b) }'
Do Basic Math Using Awk Command
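awk can also format its results; printf gives you control over the number of decimal places:

```shell
$ awk 'BEGIN { printf "%.2f\n", 10/3 }'   # prints 3.33
```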

If you are new to Awk, we have a complete series of guides to get you started with learning it: Learn Awk Text Processing Tool.

5. Using factor Command

The factor command is used to decompose an integer into its prime factors. For example:

$ factor 10
$ factor 127
$ factor 222
$ factor 110  
Factor a Number in Linux

That's all! In this article, we have explained various useful ways of doing arithmetic in the Linux terminal.

Source

Linux Today – Best Linux Multi-Core Compression Tools

Data compression is the process of storing data in a format that uses less space than the original representation would use. Compressing data can be very useful particularly in the field of communications as it enables devices to transmit or store data in fewer bits. Besides reducing transmission bandwidth, compression increases the amount of information that can be stored on a hard disk drive or other storage device.

There are two main types of compression. Lossy compression is a data encoding method which reduces a file by discarding certain information. When the file is uncompressed, not all of the original information will be recovered. Lossy compression is typically used to compress video, audio and images, as well as internet telephony. The fact that information is lost during compression will often be unnoticeable to most users. Lossy compression techniques are used in all DVDs, Blu-ray discs, and most multimedia available on the internet.

However, lossy compression is unsuitable where the original and the decompressed data must be identical. In that situation, you need lossless compression. This type of compression is employed in compressing software applications, files, and text articles. Lossless compression is also popular for archiving music. This article focuses on lossless compression tools.

Popular lossless compression tools include gzip, bzip2, and xz. When compressing and decompressing files, these tools use a single core. But these days most machines have multi-core processors, and with the traditional tools you won't see the speed advantage modern processors offer. Enter modern compression tools that use all the cores on your system when compressing files, offering massive speed advantages.

Some of the tools covered in this article don’t provide significant acceleration when decompressing compressed files. The ones that do offer significant improvement, using multiple cores, when decompressing files are pbzip2, lbzip2, plzip, and lrzip.

Let’s check out the multi-core compression tools. See our time and size charts. And at the end of each page, there’s a table with links to a dedicated page for each of the multi-core tools setting out, in detail, their respective features.

Multi-Core Compression Tools

  • pigz - parallel implementation of gzip; a fully functional replacement for gzip
  • pbzip2 - parallel implementation of the bzip2 block-sorting file compressor
  • pxz - runs LZMA compression on multiple cores and processors
  • lbzip2 - parallel bzip2 compression utility, suited for serial and parallel processing
  • plzip - massively parallel (multi-threaded) lossless data compressor based on lzlib
  • lrzip - compression utility that excels at compressing large files
  • pixz - parallel indexing xz compression, fully compatible with xz; uses LZMA and LZMA2
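As a quick illustration of how these tools are used (assuming pigz is installed, e.g. via `sudo apt install pigz`; the filename is just an example): pigz is a drop-in replacement for gzip, and its -p flag sets the number of threads:

```shell
# Compress a tarball on 4 threads; replaces archive.tar with archive.tar.gz
pigz -p 4 archive.tar

# The output is ordinary gzip data, so pigz -d (or plain gunzip) decompresses it
pigz -d archive.tar.gz
```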

With Default Compression

Default compression refers to running the compression tool without any compression flag being applied.

Multi-core compression

pigz compressed our 537MB tarball on our quad-core machine in the quickest time of all the tools, completing the test in a swift 4 seconds. To put the result into some context, we also ran the same test using gzip, which compressed the file in 14.7 seconds. pigz therefore fell a bit short of being 4x quicker.

You’ll notice lbzip2 and pbzip2 bars are colored red. This is because these tools use the best compression as their default.

Multi-core compression

pigz compressed the 537MB tarball down to 110MB. lrzip offers the best compression ratio for the tarball, squeezing it down to a frugal 64MB, although there isn’t much difference between lrzip, pxz, pixz, or plzip.

Again lbzip2 and pbzip2 bars are colored red. This is because these tools use the best compression as their default.

Methodology used for the tests

We took a 537MB tarball of a popular source package. The tarball was copied to RAM (/dev/shm), and the tests ran in RAM on a quad-core CPU without hyper-threading (Core i5-2500K), with no X server running, and under negligible load.

Each test was run three times with the latest version (at the time of writing) of each multi-core compression tool. The average results are recorded in the charts above. The tests show the relative difference between the multi-core compression tools. They are for indicative purposes only.

With Fastest Compression

Most of the tools provide a flag to set the level of compression on a scale from 1 to 9; pxz, plzip, and pixz scale from 0 to 9. This test uses the lowest available compression option.

All of the multi-core tools made fairly light work of compressing the tarball with their fastest compression option.

Multi-core compression

If you need to compress large files on a low-powered multi-core machine, the fastest compression option might be suitable. pigz compressed the 537MB tarball to 134MB in a whisker under 1.7 seconds. Most of the other tools shaved the tarball to around 100MB, and lrzip compressed the file to a mere 90MB.

Multi-core compression


With Best Compression

The time taken to complete this test using the best compression option varies significantly within our group of tools. The fastest tools are lbzip2 and pigz, both completing the task in under 9 seconds.

plzip is the slowest of the group taking nearly 100 seconds.

Multi-core compression

Multi-core compression

If space is paramount, pxz, plzip, pixz and lrzip offer impressive compression ratios. But pxz and pixz are quicker to complete.

Recent versions of pigz include the Zopfli engine which achieves higher compression than other zlib implementations but takes much longer to complete the compression. pigz uses Zopfli with the -11 flag.

Using Zopfli, pigz takes a whopping 14 minutes 25 seconds to complete the test. And while the compression ratio is better, the compressed file weighs in at 104MB (as opposed to 109MB with the -9 flag). That's still larger than the output from most of the multi-core tools with their fastest compression option.

pxz also has an extreme option which is triggered with the -e flag. Using the extreme option is designed to improve the compression ratio by using more CPU time. But compression ratio wasn’t improved in our tests. With the -9 flag, the tarball is 62MB. Yet, using the -e flag, the tarball was 65MB. We’ll need to run more tests to determine if this is just an anomaly.


Long Range Zip (Lrzip)

Lrzip uses an extended version of rzip, which does a first pass long distance redundancy reduction. It uniquely offers a good range of compression methods:

  • LZMA (the default algorithm) – this is the Lempel–Ziv–Markov chain algorithm.
  • ZPAQ – designed for user-level backups.
  • LZO – Lempel–Ziv–Oberhumer.
  • gzip – based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding.
  • bzip2 – compression program that uses the Burrows–Wheeler algorithm.

Multi-core compression

When it comes to the size of the compressed tarball, zpaq offers the best compression.

Multi-core compression


Source

Python NumPy Tutorial – Linux Hint

In this lesson on the Python NumPy library, we will look at how it lets us manage powerful N-dimensional array objects, with sophisticated functions for manipulating and operating on these arrays. To make this lesson complete, we will cover the following sections:

  • What is Python NumPy package?
  • NumPy arrays
  • Different operations which can be done over NumPy arrays
  • Some more special functions

What is Python NumPy package?

Simply put, NumPy stands for 'Numerical Python', and that is what it aims to deliver: allowing complex numerical operations on N-dimensional array objects to be performed easily and intuitively. It is the core library used in scientific computing, with functions for linear algebra and statistical operations.

One of the most fundamental (and attractive) concepts in NumPy is its N-dimensional array object. You can think of such an array as just a collection of rows and columns, like an MS-Excel sheet. It is possible to convert a Python list into a NumPy array and apply functions over it.

NumPy Array representation

Just a note before starting, we use a virtual environment for this lesson which we made with the following command:

python -m virtualenv numpy
source numpy/bin/activate

Once the virtual environment is active, we can install the NumPy library within the virtual env so that the examples we create next can be executed:

pip install numpy

Let’s quickly test if the NumPy package has been installed correctly with the following short code snippet:

import numpy as np
a = np.array([1, 2, 3])
print(a)

Once you run the above program, you should see the following output:

[1 2 3]

We can also have multi-dimensional arrays with NumPy:

multi_dimension = np.array([(1, 2, 3), (4, 5, 6)])
print(multi_dimension)

This will produce an output like:

[[1 2 3]
[4 5 6]]
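Once a multi-dimensional array exists, its shape and individual items can be inspected with standard [row, column] indexing. A quick sketch:

```python
import numpy as np

multi_dimension = np.array([(1, 2, 3), (4, 5, 6)])

# shape gives (rows, columns); items are addressed as [row, column]
print(multi_dimension.shape)   # (2, 3)
print(multi_dimension[1, 2])   # 6
print(multi_dimension[:, 0])   # first column: [1 4]
```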

You can use Anaconda as well to run these examples, which is easier, and that is what we have used above. If you want to install it on your machine, look at the lesson which describes “How to Install Anaconda Python on Ubuntu 18.04 LTS” and share your feedback. Now, let us move forward to the various types of operations that can be performed with Python NumPy arrays.

Using NumPy arrays over Python lists

It is fair to ask: when Python already has a sophisticated list data structure to hold multiple items, why do we need NumPy arrays at all? NumPy arrays are preferred over Python lists for the following reasons:

  • They are convenient for mathematical and compute-intensive operations due to the presence of compatible NumPy functions
  • They are much faster due to the way they store data internally
  • They occupy less memory

Let us prove that NumPy arrays occupy less memory. This can be done by writing a very simple Python program:

import numpy as np

import time
import sys

python_list = range(500)
print(sys.getsizeof(1) * len(python_list))

numpy_arr = np.arange(500)
print(numpy_arr.size * numpy_arr.itemsize)

When we run the above program, we will get the following output:

14000
4000

This shows that a Python list occupies more than three times the memory of a NumPy array holding the same number of items.
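The speed advantage comes from the same internal layout: an operation over a NumPy array runs in compiled code instead of a Python-level loop. A small sketch comparing the two styles:

```python
import numpy as np

n = 500
python_list = list(range(n))
numpy_arr = np.arange(n)

# Element-wise arithmetic: a list needs an explicit loop or comprehension,
# while NumPy applies the operation to the whole array at once
list_squares = [x * x for x in python_list]
numpy_squares = numpy_arr ** 2

# Both approaches produce the same values
assert list_squares == numpy_squares.tolist()
```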

Performing NumPy operations

In this section, let us quickly glance over the operations that can be performed on NumPy arrays.

Finding dimensions in array

As a NumPy array can hold data in any number of dimensions, we can find the dimension of an array with the following code snippet:

import numpy as np

numpy_arr = np.array([(1,2,3),(4,5,6)])
print(numpy_arr.ndim)

We will see the output as “2” as this is a 2-dimensional array.

Finding datatype of items in array

We can use NumPy arrays to hold any data type. Let’s now find out the data type of the data an array contains:

other_arr = np.array([('awe', 'b', 'cat')])
print(other_arr.dtype)

numpy_arr = np.array([(1,2,3),(4,5,6)])
print(numpy_arr.dtype)

We used different types of elements in the above code snippet. Here is the output this script will show:

<U3
int64

This happens because the strings are stored as Unicode characters (‘<U3’ means Unicode strings of up to 3 characters), while the second array holds 64-bit integers.
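Rather than letting NumPy infer the dtype, it can also be set explicitly when the array is created, for example:

```python
import numpy as np

# Passing dtype explicitly stores the integer literals as 64-bit floats
floats = np.array([(1, 2, 3), (4, 5, 6)], dtype=np.float64)

print(floats.dtype)  # float64
print(floats)        # values are stored as 1., 2., ...
```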

Reshape items of an array

If a NumPy array consists of 2 rows and 4 columns, it can be reshaped to contain 4 rows and 2 columns. Let’s write a simple code snippet for the same:

original = np.array([('1', 'b', 'c', '4'), ('5', 'f', 'g', '8')])
print(original)
reshaped = original.reshape(4, 2)
print(reshaped)

Once we run the above code snippet, we will get the following output with both arrays printed to the screen:

[['1' 'b' 'c' '4']
['5' 'f' 'g' '8']]

[['1' 'b']
['c' '4']
['5' 'f']
['g' '8']]

Note how NumPy took care of shifting and associating the elements to new rows.
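As a convenience, one dimension passed to reshape can be given as -1 and NumPy will infer it from the total number of items, for example:

```python
import numpy as np

original = np.array([(1, 2, 3, 4), (5, 6, 7, 8)])

# -1 asks NumPy to work out that dimension from the item count
by_rows = original.reshape(4, -1)
flat = original.reshape(-1)

print(by_rows.shape)  # (4, 2)
print(flat)           # [1 2 3 4 5 6 7 8]
```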

Mathematical operations on items of an array

Performing mathematical operations on the items of an array is very simple. We will start by writing a simple code snippet to find the maximum, minimum, sum, mean, square root and standard deviation of the items of an array. Here is the code snippet:

numpy_arr = np.array([(1, 2, 3, 4, 5)])
print(numpy_arr.max())
print(numpy_arr.min())
print(numpy_arr.sum())
print(numpy_arr.mean())
print(np.sqrt(numpy_arr))
print(np.std(numpy_arr))

In the last 2 operations above, we also calculated the square root of each array item and the standard deviation of the whole array. The above snippet will provide the following output:

5
1
15
3.0
[[1.   1.41421356   1.73205081   2.   2.23606798]]
1.4142135623730951
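On multi-dimensional arrays, the same aggregate functions also accept an axis argument to operate per column or per row, for example:

```python
import numpy as np

grid = np.array([(1, 2, 3), (4, 5, 6)])

# axis=0 aggregates down the columns, axis=1 across the rows
print(grid.sum(axis=0))  # [5 7 9]
print(grid.sum(axis=1))  # [ 6 15]
print(grid.max(axis=0))  # [4 5 6]
```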

Converting Python lists to NumPy arrays

Even if you have been using Python lists in your existing programs and you don’t want to change all of that code but still want to make use of NumPy arrays in your new code, it is good to know that we can easily convert a Python list to a NumPy array. Here is an example:

# Create 2 new lists height and weight
height = [2.37,  2.87, 1.52, 1.51, 1.70, 2.05]
weight = [91.65, 97.52, 68.25, 88.98, 86.18, 88.45]

# Create 2 numpy arrays from height and weight
np_height = np.array(height)
np_weight = np.array(weight)

Just to check, we can now print out the type of one of the variables:

print(type(np_height))

And this will show:

<class 'numpy.ndarray'>

We can now perform mathematical operations over all the items at once. Let’s see how we can calculate the BMI of these people:

# Calculate bmi
bmi = np_weight / np_height ** 2

# Print the result
print(bmi)

This will show the BMI of all the people calculated element-wise:

[16.31682957 11.8394056  29.54033934 39.02460418 29.8200692  21.04699584]

Isn’t that easy and handy? We can even filter data easily with a condition in place of an index inside square brackets:

bmi[bmi > 25]

This will give:

array([29.54033934, 39.02460418, 29.8200692 ])
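Conditions can also be combined with the & and | operators, with each side wrapped in parentheses. A short sketch using the BMI values above as sample data:

```python
import numpy as np

bmi = np.array([16.31, 11.83, 29.54, 39.02, 29.82, 21.04])

# Select values in a range: each condition in parentheses, joined with &
moderate = bmi[(bmi > 18.5) & (bmi < 30)]
print(moderate)  # [29.54 29.82 21.04]
```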

Create random sequences & repetitions with NumPy

With many features present in NumPy to create random data and arrange it in a required form, NumPy arrays are often used to generate test datasets, including for debugging and testing purposes. For example, if you want to create an array of numbers from 0 up to (but not including) n, we can use arange (note the single ‘r’) as in the given snippet:

print(np.arange(5))

This will return the output as:

[0 1 2 3 4]

The same function can be used to provide a lower value so that the array starts from other numbers than 0:

print(np.arange(4, 12))

This will return the output as:

[ 4  5  6  7  8  9 10 11]

The numbers need not be continuous; they can skip a fixed step, like:

print(np.arange(4, 14, 2))

This will return the output as:

[ 4 6 8 10 12]

We can also get the numbers in a decreasing order with a negative skip value:

print(np.arange(14, 4, -1))

This will return the output as:

[14 13 12 11 10 9 8 7 6 5]

It is possible to find n equally spaced numbers between x and y with the linspace method; here is the code snippet for the same:

np.linspace(start=10, stop=70, num=10, dtype=int)

This will return the output as:

array([10, 16, 23, 30, 36, 43, 50, 56, 63, 70])

Please note that the output items are not exactly equally spaced. Because we asked for an integer dtype, the fractional values are truncated, so you should not rely on exact spacing in this case.
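With the default float dtype, by contrast, linspace produces exactly equal spacing, for example:

```python
import numpy as np

# No dtype argument: the default float dtype keeps the spacing exact
points = np.linspace(start=10, stop=70, num=7)
print(points)  # [10. 20. 30. 40. 50. 60. 70.]
```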

Finally, let us look at how we can generate a set of random numbers with NumPy, one of the most used features for testing purposes. We will pass a range of numbers to NumPy to be used as the lower and upper bound for the random numbers:

print(np.random.randint(0, 10, size=[2,2]))

The above snippet creates a 2×2 NumPy array which will contain random numbers between 0 (inclusive) and 10 (exclusive). Here is the sample output:

[[0 4]
[8 3]]

Please note that as the numbers are random, the output can differ between two runs even on the same machine.
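When reproducible output is needed (for example, in automated tests), the random generator can be seeded so repeated runs produce identical numbers:

```python
import numpy as np

# The same seed before each call yields the same "random" numbers
np.random.seed(42)
first = np.random.randint(0, 10, size=[2, 2])
np.random.seed(42)
second = np.random.randint(0, 10, size=[2, 2])

print(np.array_equal(first, second))  # True
```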

Conclusion

In this lesson, we looked at various aspects of this computing library, which we can use with Python to solve simple as well as complex mathematical problems that arise in various use cases. NumPy is one of the most important computation libraries when it comes to data engineering and working with numerical data, and is definitely a skill we need to have under our belt.

Source

Weekend Reading: All Things Bash

Bash is a shell and command language. It is distributed widely as the default login shell for most Linux distributions. We’ve rounded up some of the most popular Bash-related articles for your weekend reading.

Writing More Compact Bash Code

By Mitch Frazier

In most programming languages, non-scripting ones at least, you want to avoid uninitialized variables. In bash, using uninitialized variables can often simplify your code.

Normalizing Filenames and Data with Bash

By Dave Taylor

URLify: convert letter sequences into safe URLs with hex equivalents.

Roman Numerals and Bash

By Dave Taylor

Fun with retro-coding a Roman numeral converter—Dave heads back to his college years and solves homework anew!

Also read Dave’s followup article, More Roman Numerals and Bash.

Create Dynamic Wallpaper with a Bash Script

By Patrick Wheelan

Harness the power of bash and learn how to scrape websites for exciting new images every morning.

Developing Console Applications with Bash

By Andy Carlson

Bring the power of the Linux command line into your application development process.

Parsing an RSS News Feed with a Bash Script

By Jim Hall

I can automate an hourly job to retrieve a copy of an RSS feed, parse it, and save the news items to a local file that the website can incorporate. That reduces complexity on the website, with only a little extra work by parsing the RSS news feed with a Bash script.

Hacking a Safe with Bash

By Adam Kosmin

Being a minimalist, I have little interest in dealing with GUI applications that slow down my work flow or application-specific solutions (such as browser password vaults) that are applicable only toward a subset of my sensitive data. Working with text files affords greater flexibility over how my data is structured and provides the ability to leverage standard tools I can expect to find most anywhere.

Graph Any Data with Cacti!

By Shawn Powers

Cacti is not a new program. It’s been around for a long time, and in its own way, it’s a complicated beast itself. I finally really took the time to figure it out, however, and I realized that it’s not too difficult to use. The cool part is that Cacti makes RRDtool manipulation incredibly convenient. It did take me the better part of a day to understand Cacti fully, so hopefully this article will save you some time.

Reading Web Comics via Bash Script

By Jim Hall

I follow several Web comics. I used to open my Web browser and check out each comic’s Web site. That method was fine when I read only a few Web comics, but it became a pain to stay current when I followed more than about ten comics. These days, I read around 20 Web comics. It takes a lot of time to open each Web site separately just to read a Web comic. I could bookmark the Web comics, but I figured there had to be a better way—a simpler way for me to read all of my Web comics at once.

My Favorite bash Tips and Tricks

By Prentice Bisbal

Save a lot of typing with these handy bash features you won’t find in an old-fashioned UNIX shell.

Note: This article was originally published March 2018 and updated with additional and more current articles January 2019.

Source

Hacking math education with Python

Mathematics instruction has a bad reputation, especially with people (like me) who’ve had trouble with the traditional approach, which emphasizes rote memorization and theory that seems far removed from students’ real world.

While teaching a student who was baffled by his math lessons, Peter Farrell, a Python developer and mathematics teacher, decided to try using Python to teach the boy the math concepts he was having trouble learning.

Peter was inspired by the work of Seymour Papert, the father of the Logo programming language, which lives on in Python’s Turtle module. The Turtle metaphor hooked Peter on Python and using it to teach math, much like I was drawn to Python.

Peter shares his approach in his new book, Math Adventures with Python: An Illustrated Guide to Exploring Math with Code. And, I recently interviewed him to learn more about it.

Don Watkins: What is your background?

Peter Farrell: I was a math teacher for eight years, and I tutored math for 10 years after that. When I was a teacher, I read Papert’s Mindstorms and was inspired to introduce all my math classes to Logo and Turtles.

DW: Why did you start using Python?

PF: I was working with a homeschooled boy on a very dry, textbook-driven math curriculum, which at the time seemed like a curse to me. But I found ways to sneak in the Logo Turtles, and he was a programming fan, so he liked that. Once we got into functions and real programming, he asked if we could continue in Python. I didn’t know any Python but it didn’t seem that different from Logo, so I agreed. And I never looked back!

I was also looking for a 3D graphics package I could use to model a solar system and lead students through making planets move and get pulled by the force of attraction between the bodies, according to Newton’s formula. Many graphics packages required programming in C or something hard, but I found an excellent package called Visual Python that was very easy to use. I used VPython for years after that.

So, I was introduced to Python in the context of working with a student on math. For some time after that, he was my programming tutor while I was his math tutor!

DW: What got you interested in math?

PF: I learned it the old-fashioned way: by hand, on paper and blackboards. I was good at manipulating symbols, so algebra was never a problem, and I liked drawing and graphing, so geometry and trig could be fun, too. I did some programming in BASIC and Fortran in college, but it never inspired me. Later on, programming inspired me greatly! I’m still tickled by the way programming makes easy work of the laborious stuff you have to do in math class, freeing you up to do the more fun of exploring, graphing, tweaking, and discovering.

DW: What inspired you to consider your Python approach to math?

PF: When I was teaching the homeschooled student, I was amazed at what we could do by writing a simple function and then calling it a bunch of times with different values using a loop. That would take a half an hour by hand, but the computer spit it out instantly! Then we could look for patterns (which is what a math student should be doing), express the pattern as a function, and extend it further.

DW: How does your approach to teaching help students—especially those who struggle with math? How does it make math more relevant?

PF: Students, especially high-schoolers, question the need to be doing all this calculating, graphing, and solving by hand in the 21st century, and I don’t disagree with them. Learning to use Excel, for example, to crunch numbers should be seen as a basic necessity to work in an office. Learning to code, in any language, is becoming a very valuable skill to companies. So, there’s a real-world appeal to me.

But the idea of making art with code can revolutionize math class. Just putting a shape on a screen requires math—the position (x-y coordinates), the dimensions, and even the color are all numbers. If you want something to move or change, you’ll need to use variables, and not the “guess what x equals” kind of variable. You’ll vary the position using a variable or, more efficiently, using a vector. [This makes] math topics like vectors and matrices seen as helpful tools you can use, rather than required information you’ll never use.

Students who struggle with math might just be turned off to “school math,” which is heavy on memorization and following rules and light on creativity and real applications. They might find they’re actually good at math, just not the way it was taught in school. I’ve had parents see the cool graphics their kids have created with code and say, “I never knew that’s what sines and cosines were used for!”

DW: How do you see your approach to math and programming encouraging STEM in schools?

PF: I love the idea of combining previously separated topics into an idea like STEM or STEAM! Unfortunately for us math folks, the “M” is very often neglected. I see lots of fun projects being done in STEM labs, even by very young children, and they’re obviously getting an education in technology, engineering, and science. But I see precious little math material in the projects. STEM/mechatronics teacher extraordinaire Ken Hawthorn and I are creating projects to try to remedy that.

Hopefully, my book helps encourage students, girls and boys, to get creative with technology, real and virtual. There are a lot of beautiful graphics in the book, which I hope will inspire people to go through the coding adventure and make them. All the software I use (Python Processing) is available for free and can be easily installed, or is already installed, on the Raspberry Pi. Entry into the STEM world should not be cost-prohibitive to schools or individuals.

DW: What would you like to share with other math teachers?

PF: If the math establishment is really serious about teaching students the standards they have agreed upon, like numerical reasoning, logic, analysis, modeling, geometry, interpreting data, and so on, they’re going to have to admit that coding can help with every single one of those goals. My approach was born, as I said before, from just trying to enrich a dry, traditional approach, and I think any teacher can do that. They just need somebody who can show them how to do everything they’re already doing, just using code to automate the laborious stuff.

My graphics-heavy approach is made possible by the availability of free graphics software. Folks might need to be shown where to find these packages and how to get started. But a math teacher can soon be leading students through solving problems using 21st-century technology and visualizing progress or results and finding more patterns to pursue.

Source

CentOS Reboot – Linux Hint

Rebooting is an essential part of running any system. A reboot essentially turns the computer off completely and then starts the system from scratch. In certain situations, rebooting is a must, for example, after a kernel update or other critical updates/patches on a Linux system. In short, rebooting is a very important thing to do in today’s modern computing age.

Are you on CentOS? CentOS is the playground for new RHEL users, as it offers the same experience and feel as the enterprise environment, where there is a lot of work going on every single second. On such a busy system, rebooting sometimes becomes a must. In this tutorial, we’ll be checking out the reboot methods for your CentOS.

  • Reboot

The simplest thing to do is fire up the terminal and run the following command –

sudo reboot

This command will reboot the entire system. It can take some time for the reboot to begin, as there might be other users and processes running, and the system will wait for them to terminate.

If you’re in need of a forced reboot, then add the “-f” flag –

sudo reboot -f

  • Shutdown

Rebooting is also possible using the “shutdown” command. For that purpose, use the “-r” flag with “shutdown” –

sudo shutdown -r +10 "Restart in 10 minutes"

Note – the command requires “root” privilege to run.

Here, you’ll notice a couple of additional parts in the command. Let’s discuss them.

  • +10 : Gives the system users 10 minutes to finish any pending actions; the system will restart itself 10 minutes after the command is run.
  • “Restart in 10 minutes” : The quoted string is a warning message broadcast to all logged-in users.

Source

Excuse me, sir. You can’t store your things there. Those 7 gigabytes are reserved for Windows 10 • The Register

Buying a new PC in 2019? You may have a bit less disk space than you were expecting

Microsoft has announced that it is formalising the arrangement whereby Windows 10 inexplicably swipes a chunk of disk space for its own purposes in the form of Reserved Storage.

The theory goes like this – temporary files get generated all the time in Windows, either by the OS or apps running on the thing. As a user’s disk fills up, things start getting sticky as space for this flighty data becomes short and reliability suffers.

Microsoft has tried a few ways over the years to help users manage disk space – Windows will start to whinge as disks reach capacity and built-in tools exist to clear unwanted files. The latest, Storage Sense, will quietly “dehydrate” OneDrive files to free up space.

It appears that such tools aren’t enough.

In 2019, Microsoft is throwing in the towel. New installs of Windows 10 1903 (currently with Windows Insiders in the form of 19H1 and expected in the hands of users sometime in April) will feature “Reserved Storage”.

Reserved Storage effectively blocks out a chunk of disk for temporary files generated by the likes of apps or OS updates. The gang at Redmond reckons that 7GB of sacrificial space will be a good starting point, but the total might vary over time. You can also shrink the reservation, but never remove it from the OS entirely.

Never mind the Reserved Storage, how has Microsoft got system files down to 5.5GB? Pic: Microsoft

Users can then cheerfully use their PCs without worrying about their free space suddenly disappearing as a colossal Windows update gets silently downloaded in the background. That space has been pre-nabbed ready for all those temporary(ish) files.

Unless, of course, the Reserved Storage fills up. In which case it will be business as normal as Windows temporarily consumes space outside the reservation, thus somewhat defeating the point of the thing.

Slightly ominously, Microsoft also said: “We may adjust the size of reserved storage in the future based on diagnostic data or feedback.”

Windows Insiders will be able to experience the functionality for themselves if they are willing to fiddle with a Registry setting before the next build drops.

So, if you’re buying a PC in 2019 and considering disk space, remember that as well as all the recovery and system partitions which will adorn your system, Windows 10 will want its own piece of the action as well. ®

Source
