Industry-Scale Collaboration at The Linux Foundation

 

Learn about the principles required to achieve a successful industry pivot to open source.

Linux and open source have changed the computer industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers who work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities I work in, I’ve come to recognize the following as the common thread.

An Industry Pivot

Software has become the most important component in many industries, and it is needed in very large quantities. When an entire industry needs to make a technology “pivot,” it does as much of that as possible in software. For example, the telecommunications industry must make such a pivot in order to support 5G, the next generation of mobile phone network. Not only will bandwidth and throughput increase with 5G, but an entirely new set of services will be enabled, including autonomous cars, billions of Internet-connected sensors and other devices (aka IoT), and more. To do that, telecom operators need to entirely redo their networks, distributing millions of compute and storage instances very, very close to those devices and users.

Given the drastically changing usage of the network, operators need to be able to deploy, move, and tear down services near-instantaneously, running them on those far-flung compute resources and routing the network traffic to and through those service applications in a fully automated fashion. That’s a tremendous amount of software. In the “old” model of complete competition, each vendor would build its solution to this customer need from the ground up and sell it to its telecom operator customers. It would take forever, cost a huge amount of money, and the customers would be nearly assured that one vendor’s system wouldn’t interoperate with another vendor’s solution. The market demands solutions that don’t take that long or cost that much, and if they don’t work together, their value to the customer is much less.

So, instead, all the members of the telecom industry, both vendors and customers, are collaborating to build a large portion of the foundational platform software together, just once. Then, each vendor and operator will take that foundation of code and add whatever functionality they feel differentiates them for their customers, test it, harden it, and turn it into a full solution. This way, everyone gets to a solution much more quickly and with much less expense than would otherwise be possible. The mutual benefit is obvious. But how can they work together? How can they ensure that each participant in this community can get out of it what they need to be successful? These companies have never worked together before. Worse yet, they are fierce, lifelong competitors whose only prior goal was to put each other out of business.

A Level Playing Field

This is what my team does at The Linux Foundation. We create and maintain that level playing field. We are both referee and janitor. We teach what we have learned from the long-term success of the Linux project, among others. Stay tuned for more blog posts detailing those principles and my experiences living those principles both as a participant in open source projects and as the referee.

So, the task at hand is bringing dozens of very large, fierce competitors, both vendors and customers, together and seeding the development effort with several million lines of code that usually come from only one or two companies. That has never been done before by anyone. The set of projects under the Linux Foundation Networking umbrella is one large experiment in corporate collaborative development. Take ONAP as an example; its successful outcome is not assured in any way. Don’t get me wrong. The project has had an excellent start with three releases under its belt, and in general, things are going very well. However, there is much work to do and many ways for this community, and the organizations behind it, to become more efficient and reach our end goal faster. Again, such a huge industry pivot has never before been done as an open source collaboration. To get there, we are applying the principles of fairness, technical excellence, and transparency that are the cornerstones of truly collaborative open source development ecosystems. As such, I am optimistic that we will succeed.

This industry-wide technology pivot is not isolated to the telecom sector; we are seeing it in many others. My goal in writing these articles on open source collaborative development principles, best practices, and experiences is to better explain to those new to this model how it works, why these principles are in place, and what to expect when things are working well and when they are not. There are a variety of non-obvious behaviors that organizational leaders need to adopt and instill in their workforce to be successful in one of these open source efforts. I hope these articles will give you the tools and insight to help you facilitate this culture shift within your organization.
Source

DNS (Domain Name Service): A Detailed, High-level Overview | Linux.com

DNS (Domain Name Service): A Detailed, High-level Overview

How’s that for a confusing title? In a recent email discussion, a colleague compared the Decentralized Identifier framework to DNS, suggesting they were similar. I cautiously tended to agree but felt I had an overly simplistic understanding of DNS at a protocol level. That email discussion led me to learn more about the deeper details of how DNS actually works – and hence, this article.

On the surface, I think most people understand DNS to be a service that you can pass a domain name to and have it resolved to an IP address (in the familiar nnn.ooo.ppp.qqq format).

domain name => nnn.ooo.ppp.qqq

Examples:

  1. If you click on Google DNS Query for microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate domain name microsoft.com.
  2. If you click on Google DNS Query for www.microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate web site www.microsoft.com.

NOTE: The Google DNS Query page returns the DNS results in JSON format. This isn’t particular or specific to DNS. It’s just how the Google DNS Query page chooses to format and display the query results.
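In code, that basic name-to-address mapping is a single call. Here is a minimal sketch using Python’s standard library (the hostname is just an example):

import socket

# Resolve a hostname to one of its IPv4 addresses (an A record lookup)
print(socket.gethostbyname("microsoft.com"))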

DNS is actually much more than a domain name to IP address mapping.  Read on…

DNS Resource Records

There is more to the DNS Service database than these simple (default) IP addresses.  The DNS database stores, and is able to return, many different types of service-specific records for a particular domain. These are called DNS Resource Records. A partial list can be found at http://dns-record-viewer.online-domain-tools.com.

Most APIs only support the retrieval of one Resource Record type at a time (a query that may return multiple records of that type). Some APIs default to returning A records, while some APIs will only return A records. Caveat emptor.
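For example, with the third-party dnspython library (an assumption on my part – pip install dnspython; any DNS client library behaves similarly), you issue one query per record type:

import dns.resolver  # third-party: pip install dnspython

# One query per Resource Record type; each may return several records
for rtype in ("A", "MX", "TXT"):
    for rr in dns.resolver.resolve("microsoft.com", rtype):
        print(rtype, rr.to_text())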

To see a complete set of DNS Resource Records for microsoft.com, click on DNSQuery.org query results for microsoft.com and scroll down to the bottom of the results page to see the complete response (aka the authoritative result). It will look something like this:


Figure 1. DNS Resource Records for microsoft.com: Authoritative Result

NOTE: The Resource Record type is listed in the fourth column: TXT, SOA, NS, MX, A, AAAA, etc.

DNS Protocol

The most interesting new learning is about the DNS protocol itself. It’s request/response – nothing new here. It’s entirely binary – to be expected given its age and the state of technology at that time. Given how frequently DNS is used by every computer on the planet, the efficiency of a binary protocol also makes sense. The IETF published the original specifications in RFC 882 and RFC 883 in November 1983.

The new part (for me) is that an API typically doesn’t “download” the entire authoritative set of DNS Resource Records all at once for a particular domain; instead, the most common approach is to request the list of IP addresses (or relevant data) for a particular Resource Record type for a particular domain.

The format of a sample DNS request is illustrated in the following figure:

Figure 2. Sample DNS Request [CODEPROJECT]

It’s binary. The QTYPE (purple cells on the right side) defines the type of query. In this case, 0x0F is a request for an MX record; hence, this is a request for the data that describes microsoft.com’s external email server interface.
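To make the binary format concrete, here is a minimal sketch that hand-builds such a request with Python’s standard library and sends it over UDP. The resolver address 8.8.8.8 (Google’s public resolver) is my choice for illustration; any recursive resolver works:

import socket
import struct

def build_query(domain, qtype=15):  # QTYPE 15 (0x0F) = MX
    # 12-byte header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split(".")) + b"\x00"
    # QTYPE and QCLASS (1 = IN, the Internet class)
    return header + qname + struct.pack(">HH", qtype, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("microsoft.com"), ("8.8.8.8", 53))
response, _ = sock.recvfrom(512)
# ANCOUNT (bytes 6-7 of the response header) is the number of answer records
print("answers:", struct.unpack(">H", response[6:8])[0])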

NOTE: The “relevant data” isn’t always an IP address or a list of IP addresses. For example, a response may include another domain name, a subdomain name, or, in some cases, simply some unstructured text (as far as the DNS specification is concerned).

Here is a typical response for the above sample request:

Figure 3. Sample DNS Response [CODEPROJECT]

The response, in turn, is also binary. In this case, DNS has responded with 3 answers; that is, 3 subdomain names: mailc, maila, and mailb – each with a numerical preference (weight).

The ANY Resource Record Type

There is also a “meta” Resource Record type called ANY that, as you might guess, requests a collection of all of the different Resource Record types. This is illustrated in Figure 1 above.

Source

Want to Learn Python – Starter Pack (AIXpert Blog)

Want to Learn Python – Starter Pack

I am not going to cover actual Python coding here (well, maybe a little at the end) but rather the good and bad places to start and things to avoid.

First used at the Hollywood, Florida and Rome, Italy IBM Technical University conferences – we call them TechU

Alternatives

  • You could just search Google, YouTube, and many other places and find 10 billion hits
  • You will quickly get totally swamped with options
  • This is Nigel’s starter pack for a quick start
  • This is what I found very useful – you, of course, may be different!!!

What is Python good for?

  • Data Scientist jobs & serious mega-bucks – you can double your already large salary!
  • New technology areas like PowerAI, Artificial Intelligence, Machine Learning, Deep Learning, etc.
  • Data manipulation – fixing a file format and restructuring the data
  • Web 2.0 web pages + REST APIs

How to develop code & run Python

  1. Edit file and run file
  2. IDE (integrated development environment)
  • Initially an IDE is a pain in the backside
    • As you have to learn both the IDE and the language together
    • This sets you back a month!
    • But good for a full-time developer
  • I recommend edit-and-run, but you can also run Python in console (interactive) mode to try things out.
  • Having programmed in Python for about a year, I think I am ready to try an IDE for slicker editing and debugging.
    • Probably the PyCharm Community Edition IDE for a start.

Environments

  • Windows = yuck!
  • Tablet – you can run the PyCharm IDE, but get yourself a keyboard for typing.
  • OSX = if you really have to! Sorry, I never really got on with the Mac
  • Linux = this is the natural home of Python.
    • I am using a 160 CPU, 256 GB RAM, POWER8 S922LC – rather overkill, but it is fast 🙂
    • I also use a Raspberry Pi – that is pretty quick too, provided the data files are not more than about half a GB. The Raspberry Pi memory is limited.
  • AIX
    • It is in the AIX Open Source toolbox for downloading
    • Take care with exotic modules, as you might have to use git & compile them yourself

How does Python actually run?

  • Compiled – no, not like, say, C
  • Interpreted – yes, but the bytecode is optimised and cached. I have had some code that finished so fast I assumed it had crashed, but it had actually worked.

Which Python version 2.7 or 3.x ?

  • 3.<latest> – at the time of writing, 3.5 to 3.7 depending on how current your OS is!
  • No one is writing new Python 2.7 code any more
  • There is lots of 2.7 in use today, but it is declining over time
  • Not a massive difference, but it is best to learn Python 3

Quick Assumption: You have in the past done at least some of these?

  • C, Korn or bash shell script writing – excellent
  • C programming – brilliant
  • JavaScript programs – very good
  • Python Programming – why are you reading this???

Then you have already done the heavy lifting

Everyone can write a simple program!

A=42
print "The number is " $A

if [[ $A == 42 ]]
then
        print "eureka"
fi

Plus For loop & Functions

What is this? Well, it works OK in my Korn shell on AIX.

Mega Tip 1:  If you know any of the languages above then Python is going to be very simple

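For comparison, a sketch of the same logic in Python (my own translation, not from the original post):

a = 42
print("The number is", a)

if a == 42:
    print("eureka")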

  1. Data types:
  • strings,
  • integers & floats,
  • tuples,
  • lists,
  • dictionaries
  2. Converting between them
  3. Conditionals:  if, then, else
  4. Loops:  for, while
  5. Functions
  6. User input
  7. String manipulation
  8. File I/O: read and write
  9. Classes and objects
  10. Inheritance            <– IMHO very advanced and for class module developers
  11. Polymorphism       <– IMHO very advanced and for class module developers

Mega Tip 2: Socratica videos on YouTube

I looked at many training courses, online content and YouTube series, and these are by far the best – and absolutely free.

  • Python Programming Tutorials (Computer Science)
  • Concise with dry humour and some computer jokes – see recursion
  • Mostly with worked examples
  • Excellent style
  • Caltech grads
  • 33 videos (Don’t watch the two or three for Python2)
  • Most ~8 minutes
  • Total 3.5 hours
  • 15 million views
  • YouTube Socratica Playlist Videos
  • A geek told me Socratica is the female form of Socrates – I think the creators are female. They also cover maths.
  • I have watched all of these twice – about 6 months apart
  • They are short, but to consolidate what you learn, try to have a quick go yourself at each topic

Mega Tip 3:  python.org = This is the Python Mother Ship!!


  • Also, if you are stuck for the syntax of a statement or the details of some module or function, then Google: python3 <your question spelt out in full>
  • Often you get http://Python3.org hits, but a http://stackoverflow.com answer with worked examples is very good; scan down the answers a bit (the first might not be the best answer or exactly what you want)

Mega Tip 4: Get yourself a project to force you to code and work through problems and new features

  • Something simple
  • Something you are interested in
  • Especially web-focused
  • Python is strong at
    • Website interaction
    • REST API to an online service
    • Data manipulation/transformation
    • File conversion / filtering

Mega Idiot: My first project was the REST API to an HMC to extract Temp, Watts + performance stats for SSP, server & VM

  • It was a BIG mistake
  • The bad news was that the API was so badly documented it was actually impossible to use!
  • With totally unnecessarily complicated XML – using features that are very rarely used by anyone.
  • I had to interview the developers in the end to work out the hidden details of the REST API
  • In simple terms, it was the “REST API from Hell!”
  • But I learnt a lot
  • In the end, I wrote a Python class module to hide the horrible REST API from Python programmers – it’s 1,100 lines of code.
  • It returns simple-to-use Python data structures
  • So it takes simply ~40 lines of Python to extract, manipulate & save the data as:
    • a CSV file,
    • .html with GoogleChart graphs
    • rows inserted into an InfluxDB database

Mega Tip 5: JSON format files are exactly the same as the Python native data type called Dictionaries

  • So when learning Python, concentrate on dictionaries
  • These are (very simply)   { “some label”: data, more here }
  • and the data can be
    • “Strings” in double or single quotes
    • Integers like 12345 or -42
    • Floating point numbers like 123.456 (note the decimal point)
  • Often we have a list of dictionaries – lists look like [ item, item, item, . . . ]
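A tiny sketch of that equivalence using the standard json module (my example, not from the original post):

import json

# A Python dictionary and its JSON text form are near mirror images
sample = {"label": "demo", "count": 42, "ratio": 1.5}
text = json.dumps(sample)            # dictionary -> JSON string
print(text)
print(json.loads(text) == sample)    # JSON string -> dictionary; prints True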

JSON file example of stats called “mydata.json”:

[               # list of samples
{               # 1st sample = Python dictionary
"datetime": "2018-04-16T00:06:32",
"cpus_active": 32,
"mhz": 3521,
"cpus_description": "PowerPC_POWER9",
"cpu_util": {
          "user": 50.4,
          "sys": 9.0,
          "idle": 40.4,
          "wait": 0.2
          }
},              # end of 1st sample
{ . . . }       # 2nd sample = Python dictionary
]

Python program to load the data file above –  NEW  fixed a few typos here, due to cut’n’paste issues, i.e. double quotes became full stops.

# Read the file as plain text

f = open("mydata.json","r")
text = f.read()
f.close()

# convert to Dictionary
import json         #module to handle JSON format
jdata = json.loads(text)
  • That json.loads() function converts a string (text) to the dictionary called jdata at tens of MBs of JSON a second.
  • Now let’s extract certain fields using a natural Python syntax
# get the MHz from the first record (record zero)

print("MHz=%d"%(jdata[0]["mhz"]))

# Loop through all the records, pulling out the MHz numbers and the CPU utilisation user-mode percent (it's in a sub-dictionary called cpu_util)

for sample in jdata:
    print("MHz=%d"%(sample["mhz"]))
    print("User percent=%d"%(sample["cpu_util"]["user"]))

Latest project using Python is njmon for AIX and Linux – the new turbo nmon.

  • The J is for JSON, and we use Python to make data handling very easy

  • For AIX it uses the libperfstat C library – if you want details, see: man libperfstat on AIX or vi /usr/include/libperfstat.h
    • Or find the worked example C code in the KnowledgeCenter
  • Status: quirky but usable for an expert C programmer
  • Vast quantity of perf stats, running into 1,000 stats for AIX and VIOS (if you have many disks or networks, or ask for process stats, then that grows rapidly)
  • And as a bonus, libperfstat gives us the current CPU MHz
  • Similar for Linux
  • njmon is written in C, using C functions into the UNIX kernel, and generates JSON data. Then we use Python to accept the data and inject it live into a time-series database for graphing in real time

Stand by for something strange

  • Well-known programming problem = swapping the values of two variables a and b. The classic solution uses a temporary variable.
temp = a
a = b
b = temp
  • But can you do that without the temp variable?
  • Not in C – I have known this for 40 years!!
  • Python answer
a,b = b,a
  • It is using a native data structure called a tuple. As it’s a common programming task, they built it into the language.
  • Warning, weirdness next:
  • How about this?
  • a = a + b
    b = a - b
    a = a - b
  • Wow! I thought it was impossible! (It works because after a = a + b, a holds the sum; b = a - b then recovers the original a, and a = a - b recovers the original b.)
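Here is a quick, self-contained demonstration of both swaps (my own sketch):

a, b = 3, 7

# Tuple swap: the right-hand side builds the tuple (b, a), then unpacks it
a, b = b, a
print(a, b)   # 7 3

# Arithmetic swap: numbers only, no temporary variable needed
a = a + b     # a now holds the sum
b = a - b     # b gets the original a
a = a - b     # a gets the original b
print(a, b)   # 3 7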

Next, a tiny web-grabbing Python example

  • Lots of websites and web services keep stats that you can download with your browser.
  • I have used sourceforge.net (used below) and youtube.com for examples.
  • They are most often in JSON, and Python has a requests module that makes “talking” to websites very simple
  • As an example, bung this in your browser (NOT Internet Explorer)
  • https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-31&os_by_country=false
  • And you should get a load of JSON data back, which Firefox and Chrome will organise and make pretty.
  • Using Python, the requests module and a little graphing code, we can draw the downloads from the nmon project on SourceForge over time
  • We also need to change the date format, which shows off some of Python’s simple data manipulation, from
  • ['2018-09-17 00:00:00', 2]
  • to
  • ,['Date(2018,09,17,00,00,00)', 2]
  • Below is the source code – with many extra print lines and comments, so if you run it you will see the data structures.
  •  NEW  Changed the code here to NOT rely on my nchart Python module
  • Green bits are debug output but useful if you run it to see the data
  • Red bits are the webpage preamble and postamble to set up the Google Charts library graph.
#!/usr/bin/python3
#--------------------------------- Get the data using REST API from sourceforge.net
import requests
URL='https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-04&os_by_country=false'
ret = requests.get(URL)
print(ret.status_code)
#print("return code was %d"%(ret.status_code))
#print("characters returned %d"%(len(ret.text)))
#---------------------------------- Create dictionary
import json
jdata = json.loads(ret.text)
#print(jdata)
months=0
count=0
for row in jdata['downloads']:
#    print(row)
    months=months+1
    count=count+row[1]
print("months=%d"%(months))
print("count =%d"%(count))
#---------------------------------- Create web page+graph using Googlechart library
file = open("downloads.html","w")
file.write('<html>\n'
'  <head>\n'
'    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>\n'
'    <script type="text/javascript">\n'
'      google.charts.load("current", {"packages":["corechart"]});\n'
'      google.charts.setOnLoadCallback(drawChart);\n'
'      function drawChart() {\n'
'        var data = google.visualization.arrayToDataTable([\n'
'[{type: "datetime", label: "Date"},"Files"]\n' )

for row in jdata['downloads']:
    datestr = row[0]                     # e.g. '2018-09-17 00:00:00'
    datestr = datestr.replace("-",",")   # reshape into Date(yyyy,mm,dd,hh,mm,ss) form
    datestr = datestr.replace(" ",",")
    datestr = datestr.replace(":",",")
    file.write(",['Date(%s)',%d]\n"%(datestr,row[1]))

file.write('        ]);\n'
'        var options = {title: "nmon Downloads", vAxis: {minValue: 0}};\n'
'        var chart = new google.visualization.AreaChart(document.getElementById("chart_div"));\n'
'        chart.draw(data, options);\n'
'      }\n'
'    </script>\n'
'  </head>\n'
'  <body>\n'
'    <div id="chart_div" style="width: 100%; height: 500px;"></div>\n'
'  </body>\n'
'</html>\n')
file.close()
  • The output – skipping the dump of the JSON and the 105 rows of monthly stats looks like this
['2018-05-01 00:00:00', 14153]
['2018-06-01 00:00:00', 12794]
['2018-07-01 00:00:00', 12422]
['2018-08-01 00:00:00', 13127]
['2018-09-01 00:00:00', 11872]
['2018-10-01 00:00:00', 13628]
['2018-11-01 00:00:00', 12805]
['2018-12-01 00:00:00', 15611]
months=114
count =686634

  • So that was captured in Jan 2019 and, so far, there have been 686,634 downloads of nmon and its tools.
  •  NEW  The generated downloads.html file has the following contents – note I removed a few hundred lines of data in the middle. Colours are from the vim editor – see the later comments.
  • [screenshot: downloads.html source in vim, with syntax colouring]
  •  NEW  Simpler graph – the generated monthly downloads chart looks like this:
  • [chart: monthly nmon downloads over time]

C Programmers be aware:

I keep making the same mistakes in writing Python.

  1. On Linux, with the right export TERM=linux setting and using vi (actually vim), you have syntax highlighting, which reduces errors a lot – go for a white background, or comments in dark blue are unreadable. See the picture below – I have not done that colouring – it is all vim.
  2. vim also helps with auto-indentation.
  3. if, for and while statements have a “:” at the end of the line.
  4. In Python it is print and in C it is printf – I had to teach my fingers to miss out the final “f”.
  5. Those maddening 4-space indentations have to be exactly right!
  6. Anything I missed?

[screenshot: Python source in vim with syntax colouring]
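As a quick illustration of points 3–5 (my own sketch), this is the shape Python expects:

# if/for/while headers end with ":" and the body is indented 4 spaces
for i in range(3):
    if i == 2:
        print("last one")   # print, not printf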

– – – The End – – –

Source

Deploy Citrix Virtual Apps and Desktops Service on AWS with New Quick Start

Posted On: Jan 7, 2019

This Quick Start automatically deploys Citrix Virtual Apps and Desktops on the Amazon Web Services (AWS) Cloud in about 90 minutes. The deployment includes a hosted shared desktop and two sample published applications.

Using the Citrix Virtual Apps and Desktops service, you can deliver secure virtual apps and desktops to any device, and leave most of the product installation, setup, configuration, upgrades, and monitoring to Citrix. You maintain complete control over applications, policies, and users while delivering a high-quality user experience.

The Quick Start is intended for users who want to set up a trial deployment or want to accelerate a production implementation by automating the foundation setup.

To get started:

For additional AWS Quick Start reference deployments, see our complete catalog.

Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices.

This Quick Start was built in collaboration with Citrix Systems, Inc., an AWS Partner Network (APN) Partner.

Source

Linux Today – Using the SSH Config File

If you regularly connect to multiple remote systems over SSH, you’ll find that remembering all of the remote IP addresses, different usernames, non-standard ports and various command line options is difficult, if not impossible.

One option would be to create a bash alias for each remote server connection. However, there is another, much better and simpler solution to this problem. OpenSSH allows you to set up a per-user configuration file where you can store different SSH options for each remote machine you connect to.

This guide covers the basics of the SSH client configuration file and explains some of the most common configuration options.

We are assuming that you are using a Linux or a macOS system with OpenSSH client installed.

The OpenSSH client-side configuration file is named config, and it is stored in the .ssh directory under the user’s home directory. The ~/.ssh directory is automatically created when the user runs the ssh command for the first time.

If you have never run the ssh command, you’ll first need to create the directory using:

mkdir -p ~/.ssh && chmod 700 ~/.ssh

By default, the SSH configuration file may not exist, so you may need to create it using the touch command:

touch ~/.ssh/config && chmod 600 ~/.ssh/config

The chmod 600 ensures that the file is readable and writable only by the user and not accessible by others.

The SSH Config File takes the following structure:

Host hostname1
    SSH_OPTION value
    SSH_OPTION value

Host hostname2
    SSH_OPTION value

Host *
    SSH_OPTION value

The contents of the SSH client config file are organized into stanzas (sections). Each stanza starts with the Host directive and contains the SSH options that are used when establishing a connection with the remote SSH server.

Indentation is not required but is recommended, since it makes the file easier to read.

The Host directive can contain one pattern or a whitespace-separated list of patterns. Each pattern can contain zero or more non-whitespace characters or one of the following pattern specifiers:

  • * – matches zero or more characters. For example, Host * will match all hosts, while 192.168.0.* will match all hosts in the 192.168.0.0/24 subnet.
  • ? – matches exactly one character. The pattern Host 10.10.0.? will match all hosts in the 10.10.0.[0-9] range.
  • ! – at the start of a pattern will negate its match. For example, Host 10.10.0.* !10.10.0.5 will match any host in the 10.10.0.0/24 subnet except 10.10.0.5.

The SSH client reads the configuration file stanza by stanza, and if more than one pattern matches, the options from the first matching stanza take precedence. Therefore, more host-specific declarations should be given at the beginning of the file, and more general overrides at the end of the file.

You can find a full list of available ssh options by typing man ssh_config in your terminal or by visiting the ssh_config man page.

The SSH config file is also read by other programs such as scp, sftp and rsync.
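The same file can also be parsed programmatically. Here is a minimal sketch using the third-party Python library paramiko (an assumption on my part – pip install paramiko; the dev alias refers to the example further below):

import os
from paramiko.config import SSHConfig

# Parse the per-user OpenSSH client config file
config = SSHConfig()
with open(os.path.expanduser("~/.ssh/config")) as f:
    config.parse(f)

# Look up the effective options for a host alias defined in the file
print(config.lookup("dev"))  # e.g. {'hostname': 'dev.example.com', 'user': 'john', 'port': '2322'}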

Now that we’ve covered the basics of the SSH configuration file, let’s look at the following example.

Usually, when you connect to a remote server via SSH, you would specify the remote user name, hostname and port. For example, to connect as a user named john to a host called dev.example.com on port 2322 from the command line, you would type:

ssh john@dev.example.com -p 2322

If you would like to connect to the server using the same options as provided in the command above simply by typing ssh dev, you’ll need to add the following lines to your ~/.ssh/config file:

~/.ssh/config
Host dev
    HostName dev.example.com
    User john
    Port 2322

Now if you type:

ssh dev

the ssh client will read the configuration file and use the connection details that are specified for the dev host.

This example gives more detailed information about the host patterns and option precedence.

Let’s take the following example file:

Host targaryen
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key

Host tyrell
    HostName 192.168.10.20

Host martell
    HostName 192.168.10.50

Host *ell
    user oberyn

Host * !martell
    LogLevel INFO

Host *
    User root
    Compression yes
  • If you type ssh targaryen, the ssh client will read the file and apply the options from the first match, which is Host targaryen. Then it will check the next stanzas one by one for a matching pattern. The next matching one is Host * !martell (meaning all hosts except martell), and it will apply the connection options from this stanza. Finally, the last definition Host * also matches, but the ssh client will take only the Compression option because the User option is already defined in the Host targaryen stanza. The full list of options used in this case is as follows:
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key
    LogLevel INFO
    Compression yes
  • When running ssh tyrell, the matching host patterns are: Host tyrell, Host *ell, Host * !martell and Host *. The options used in this case are:
    HostName 192.168.10.20
    User oberyn
    LogLevel INFO
    Compression yes
  • If you run ssh martell, the matching host patterns are: Host martell, Host *ell and Host *. The options used in this case are:
    HostName 192.168.10.50
    User oberyn
    Compression yes
  • For all other connections, the options specified in the Host * !martell and Host * sections will be used.

The ssh client receives its configuration in the following precedence order:

  1. Options specified from the command line
  2. Options defined in the ~/.ssh/config
  3. Options defined in the /etc/ssh/ssh_config

If you want to override a single option you can specify it on the command line. For example if you have the following definition:

Host dev
    HostName dev.example.com
    User john
    Port 2322

and you want to use all the other options but connect as user root instead of john, simply specify the user on the command line:

ssh -o "User=root" dev

The -F (configfile) switch allows you to specify an alternative per-user configuration file.

If you want your ssh client to ignore all of the options specified in your ssh configuration file, you can use:

ssh -F /dev/null user@example.com

You have learned how to configure your per-user ssh config file. You may also want to set up SSH key-based authentication and connect to your Linux servers without entering a password.

Source

Best Linux Distros 2019 | Linux Distros Introduction

Here you will find the best Linux distros.

2019 is finally here, folks! And what better way to start it than to shed light on some of the best Linux distributions at your disposal? Even though there are hundreds of distributions, we have created a list based on popularity, features and ease of use.

In this article, we shall focus on the best Linux distributions for 2019. But remember, each distro has its unique features, and you should select one based on your requirements.

Best Linux Distribution 2019 for Desktop/Laptops

# Ubuntu


Codenamed Cosmic Cuttlefish, Ubuntu 18.10 takes over from Ubuntu 18.04 Bionic Beaver LTS, whose long-term support has now been extended to 10 years. On the other hand, Ubuntu 18.10 will only have 9 months of support, lasting up to July 2019. You can also look forward to Ubuntu 19.04 (named ‘Disco Dingo’), whose release is scheduled for April 18, 2019.

Nonetheless, Ubuntu 18.10 comes packed with an array of new features that improve the user experience. Among the new features are:

  • GNOME 3.30
  • Improved Battery life for laptops
  • Fingerprint Scanner support
  • Linux Kernel 4.18
  • Faster installation and booting times

Before proceeding to install Ubuntu 18.10, ensure that your system meets the following requirements:

  • 2 GB RAM
  • 2 GHz dual-core processor
  • 25 GB of free hard disk space
  • 1024×768 screen resolution
  • A DVD drive or USB port for connecting the installer media

Read Also: How to Install Ubuntu 18.04 Dual Boot with Windows 10

# Elementary


If you have been a long-term Linux user, ElementaryOS should top the favorites list. The latest version gives the user a vibrant feel, with an intuitive, smartphone-like interface. The OS (Elementary 5.0) is codenamed “Juno” and offers its most refined desktop version yet. Here are some of its specific features:

  • Built-in Night Light
  • Picture-in-Picture mode
  • Image Resizing Made Easy
  • Easier App Funding
  • Simple App Launcher
  • Adaptive Panel
  • Easily Available Keyboard Shortcut
  • Bold Use of Color
  • Easy Web Integration
  • Transparent Readable Updates

Minimum Requirements for Installation

  • Intel Core i3 or compatible dual core 64-bit processor
  • 4GB RAM
  • At least 15GB of SSD free space
  • Internet access
  • 1024 x 768
  • CD/ DVD/ USB drive for installation
Based on: Ubuntu (Ubuntu in turn based on Debian)
Desktop Environment: Pantheon (built on top of GNOME)
Package Management: dpkg, Eddy GUI tool
General Purpose: Desktop
Download Link: https://elementary.io/

Read Also: How to Install Elementary OS 5.0 Juno with Windows 10

# Solus

Solus is one of the newcomers to the scene and is already making serious breakthroughs. The distribution will give you a clean and polished experience. The robust repositories include almost any software you can imagine, and packages get updates with every release.

The desktop environment, known as ‘Budgie’, is attractive, simple and clean, and offers a similar experience to Chrome OS without the need to purchase a Chromebook. You will get all the software that you need from the GNOME-based desktop environment, which is light and fast.

Budgie is clean and has a visually appealing user interface, giving a wonderful and spectacular user experience. A single button opens the main menu, as in a typical Windows 10 environment.

Minimum Requirements for Installation

  • A dual-core processor of at least 2 GHz
  • 4GB RAM
  • Direct X11 / GeForce 460 or higher
  • 10GB available disk space
Based on: A distribution built from scratch
Desktop Environment: Budgie (uses GNOME technologies)
Package Management: PiSi package manager, maintained as eopkg
General Purpose: Desktop
Download Link: https://getsol.us/download/

Read Also: How to Install Latest Solus from USB

# Fedora


Fedora 29 is the distro that debuts new technologies, integrating them into the Operating System and resulting in some of the most innovative features of any distribution. The only downside is the short support cycle: each release is maintained until about a month after the release of the next-but-one version, roughly 13 months in total.

  • Gnome 3.30
  • Fedora Silverblue
  • TLS 1.3
  • Python 3.7
  • Perl 5.28
  • ZRAM support for ARM images
  • New notification area

Minimum Requirements for Installation

  • 6GB free hard disk space
  • 2GB RAM
  • Intel 64-bit processor
  • AMD processor with AMD-V and AMD64 extensions
Based on: Red Hat
Desktop Environment: GNOME (default)
Package Management: RPM, with ‘dnf’ built on top of it
General Purpose: Desktop
Download Link: https://getfedora.org/

# Mint


If you are moving from a Windows or macOS platform, you may want to use the simple Linux Mint as you find your way into the world of Linux. Mint comes fully packed with the software you need to get back on track. Mint gives you a choice of desktop environments, with Cinnamon being the closest to the Windows environment; however, MATE is still a popular choice because it is light on resources and loads faster using minimal memory.

Mint is always synchronized with the latest Ubuntu LTS releases, meaning that once you are running Mint, you keep receiving security updates for years.

The default theme on Linux Mint is Mint-Y, a successor to Mint-X. Mint-Y is available in three flavors: light, dark, and a mixed variant with dark panels.

Linux Mint has two software managers, Synaptic and the Software Manager. Both are front ends to APT. Synaptic’s interface is plainer, unlike the Software Manager, which is a more polished, store-like GUI.

Minimum Requirements for Installation

  • 64-bit x86 Processors
  • 2GB of RAM
  • 10GB of free hard disk space
  • A graphics card that supports at least 1024×768 resolution
  • CD/DVD/USB facilities
Based on: Debian and Ubuntu
Desktop Environment: Cinnamon (default), MATE and Xfce
Package Management: dpkg
General Purpose: Desktop
Download Link: https://linuxmint.com/download.php

# Arch Linux


The Linux gaming platform has been a hot topic for many years, and gamers still cannot agree on whether Linux is a robust gaming platform or not. Arch makes a strong contender: it has many customization options that give a gamer the opportunity to free up system resources for a gaming application while still tuning general system performance.

The Operating System ships with a package management tool aptly named Pacman, whose packages are simple tar archives. Pacman handles binary system packages and works with the Arch Build System, which covers the official Arch repositories and your own builds.

You only need to run pacman -Syu to update all packages, and to install a package group that comes with a suite of software, run a command such as pacman -S gnome.

Every rolling-release update brings a large number of binary updates to the repositories. The timely releases make sure you never need to re-install your OS; instead, regular system updates keep you on the latest Arch software.

Minimum Requirements for Installation

  • An i686 or x86-64 based processor
  • 2GB RAM (you can increase for better graphical performance)
  • 10GB hard disk free space
Based on: Independent distribution; relies on its own build system and repositories
Desktop Environment: Cinnamon, GNOME, Budgie and more
Package Management: pacman
General Purpose: Desktop and Multipurpose
Download Link: https://www.archlinux.org/download/

Read Also: Beginners Guide For Arch Linux Installation

# Antergos


The Antergos OS is one of the most underrated distributions in the Linux family. Antergos adheres to the Arch principles of simplicity, modernity, versatility, centrality, and practicality. There is the option of using a GUI installer to make the task simple.

All the major desktop environments, such as GNOME, Cinnamon, KDE, Openbox, Xfce, and MATE, are supported. The icons on the interfaces provide a superior look that matches the beautiful theme.

Antergos is 100% functional straight from the box, but with a limited number of packages. Additional packages are installed via the Pacman package manager, which pulls new updates straight from the repos.

Minimum Requirements for Installation

  • An i686 or x86-64 based processor
  • 2GB RAM (you can increase for better graphical performance)
  • 10GB hard disk free space

Read Also: How to Install Antergos Latest Version

# Manjaro


Manjaro is an easy and user-friendly Operating System based on Arch Linux. Key features of this distro include an intuitive installation process, automatic hardware detection, stable updates with every release, special Bash scripts for managing graphics drivers, and plenty of options for supported desktop configurations.

Manjaro comes with different desktop flavors such as GNOME 3.26, Xfce 4.12, KDE 5.11, MATE 1.18, Cinnamon 3.6, and Budgie 10.4.

Manjaro comes packed with software such as Firefox, LibreOffice, and Cantata for all your music and library activities. Right-click on the desktop to access several widgets that you can use to add icons to the desktop panel.
Use the Manjaro Settings Manager to select the kernel version that you want to use, as well as to install language packs and third-party drivers for specific hardware. The Manjaro settings are accessible via the M icon in the system tray, under the Settings menu.

Cantata is the default music app; other players like Clementine are also available.

The Octopi package manager organizes packages and is easy to use.

Minimum Requirements for Installation

  • Intel-based i686 or i386 processor
  • 1 GB RAM
  • 8 GB free space on the hard disk
Based on: Arch Linux
Desktop Environment: Cinnamon, GNOME, KDE Plasma 5, Xfce, Budgie, Deepin, Architect, and MATE
Package Management: pacman
General Purpose: Desktop and Multipurpose
Download Link: https://manjaro.org/download/

# Pop Linux from System 76


Pop!_OS is a new Linux distro designed to have minimal clutter on the desktop. Its creators, System76, specialize in building custom Linux PCs, and they have tweaked Pop with the necessary improvements to the graphical interface, such as switching between integrated Intel graphics and dedicated NVIDIA graphics with a single mouse click. You can also install the NVIDIA drivers during first-time installation instead of using the open source Nouveau drivers that are present in most distributions.

Pop!_OS does not support true hybrid graphics the way Windows does, but switching between the Intel and NVIDIA graphics solutions is easy when you compare it with other Linux distributions. Pop!_OS works on any PC and with the functionality expected from a Linux distro. Forbes earlier suggested that Pop OS gives a good desktop experience on the Lenovo ThinkPad X1 laptop.

Pop OS is still emerging as a convenient option for managing dual-graphics setups.

Prominent Features

  • Ubuntu based
  • Built from scratch
  • Customized GNOME 3 as the preferred Desktop Environment
  • Better Support
  • Runs well on System 76 Laptops

Pop! Shop


This is an AppCenter, otherwise known as the Pop!_Shop, based on a project developed by the elementary OS team. The main purpose of this center is to organize apps and enable an easy search experience.

Minimum Requirements for Installation

  • 2GB RAM, though 4GB is recommended
  • Minimum 16GB storage; 20GB is recommended
  • 64-bit processor
Based on: Ubuntu
Desktop Environment: GNOME (default), Budgie and more
Package Management: dpkg
General Purpose: Desktop and Multipurpose
Download Link: https://system76.com/pop

Read Also: How to Install Pop!_OS from System76

Best Linux Distro 2019 for Security

Online privacy is a big issue in this era of mass surveillance by both the state and online marketers. If you are keen on keeping these surveillance operations at bay, you need an operating system that has been created from the ground up with one key thing in mind: security and privacy.

Therefore, with this in mind, here are the distros that will work for hackers, pen-testers, and the terminally paranoid.

# Kali Linux


Kali is becoming ever more popular in the cyber-security community as hackers’ number one choice. Kali has more than 300 tools applicable in different areas, such as key-loggers, Wi-Fi scanners, scanning and exploiting targets, password crackers, probing and many other uses.

From the word go, this is not a beginner-friendly Operating System. Courses on how to use it effectively are taught online, and it is the preferred choice of ethical hackers, black hats, and penetration testers. Kali is notable for its realism and attention to detail. Kali Linux is a Debian-based OS, which means all the software you need can be installed with the usual Debian commands.

The only user available on Kali by default is root, and all work within the OS runs under this identity at all times. You can still add another account without root privileges, but doing so works against the logic of using Kali for security work.

Kali has many penetration-testing tools, available as GUI or CLI tools. Testing these applications means being aware that some commands may not work with your system or may cause further problems on the network. When it comes to security applications, ignorance is not an excuse.

If the software you want is not in the Debian packages within Kali, you can install it, but with the stern warning that such additions can compromise system stability.

Minimum Requirements for Installation

  • 128 MB RAM
  • 2GB free disk space
  • CPU that supports AMD64, i386, armel, arm64, and armhf

With the Desktop environment

  • 2GB RAM
  • 20 GB disk space
  • CPU that supports AMD64, i386, armel, arm64, and armhf
Based on: Debian
Desktop Environment: GNOME (default)
Package Management: dpkg
General Purpose: Penetration testing / Cyber security
Download Link: https://www.kali.org/downloads/

# Tails


For anyone looking for the best online privacy, Tails should be able to provide it. Tails is another Debian-based OS built with privacy in mind; Tails does not store any data by default, which is why most developers refer to it as the amnesic distribution.

Tails routes all network connectivity through Tor, and the OS can run from a flash disk and can disguise itself to look like Windows in public. Everything on Tails is encrypted: messaging, emails, and files.

The OS uses GNOME 3 classic as its window manager. Tails ships with multiple default applications, including an “unsafe browser” that you can use to access the internet without anonymity.

Once you boot into the system, a dialog box will pop up asking whether you want more options. NO means that you will log in with no administrative privileges; YES gives you the choice of setting up a password that allows changing network settings. MAC spoofing hides your hardware identity on the network you just joined, and the password also gives you root access.

You can use Onion Circuits to confirm connection details the moment you join the internet. The Tor Browser is what anyone obsessed with privacy uses; the version based on Firefox 45.0.3 has extensions that block annoying ads.

KeePassX saves all passwords encrypted; they are unlocked only with the master key. Tails can run from a USB drive, or it can be installed on the computer and made bootable.

Minimum Requirements for Installation

  • CD/DVD/USB ports
  • 64 bit x86-64 processor
  • 2GB RAM

Best Lightweight distro 2019

Why risk running the old, insecure, unsupported Windows XP when there are tons of secure Linux distributions that are lightweight and will work on machines of that era? Outdated hardware configurations should not box you into running an unpredictable and insecure Operating System. These Linux distributions are more than light; they are fast and secure.

# Lubuntu

Lubuntu makes it to the list of lightweight Linux distributions that work well on netbooks and older PCs. Lubuntu is an official Ubuntu flavor, giving its users access to the same software in the official Ubuntu software store.

Starting from Lubuntu 18.10, 32-bit images will no longer get support from its developers, and therefore anyone with old 32-bit hardware will eventually have to move to 64-bit processors.

Lubuntu is a fast Operating System for old desktops that uses the LXQt desktop environment alongside a selection of light applications. The switch from the previous LXDE desktop to the current LXQt started with Lubuntu 18.10. Comparing the two, LXQt is the more modern, born of the merger of LXDE and Razor-qt.

All the necessary software you need ships with the OS. Lubuntu is even better for anyone who is familiar with Ubuntu and wants to upgrade an old laptop or PC.

Minimum Requirements for Installation

  • Pentium II or more
  • 256MB RAM
  • 5GB free disk space
Based on: Ubuntu
Desktop Environment: LXQt, LXDE
Package Management: dpkg
General Purpose: Desktop and Multipurpose
Download Link: https://lubuntu.net/downloads/

# Linux Lite


The growth of Linux Lite has been quite rapid in the recent past because beginners find it easy to use, it is attractive, and of course it is lightweight. It is another Ubuntu-based Linux Operating System built on Long-Term Support (LTS) releases and carrying powerful and popular applications.

Using Linux Lite means you get a functional Linux desktop experience, with a menu interface more or less similar to Windows XP’s. The Xfce desktop environment makes things comfortable for a Linux newbie.
Linux Lite will handle with ease, even with its lightweight structure, what other distributions struggle with. Linux Lite offers all the tools that promise the best performance. The latest Linux Lite at the moment is Linux Lite 4.2, which has an auto screen-adjustment feature, Redshift, that adjusts the screen temperature by night and day.

Minimum Requirements for Installation

  • 700MHz Processor
  • 512MB RAM
  • Screen Resolution of about 1024 x 768

# TinyCore


TinyCore is an incredibly compact distribution that is available in three different sizes. The barebones Core is by far the tiniest of Linux distros and allows users to build their own variations.

The lightest, Core, is about 11MB and has no graphical interface, with the option of adding one after installation. An alternative is TinyCore version 9.0, which is 16MB in size and offers the option of the FLTK or FLWM desktop environments. The third option is CorePlus, which is more than 106MB and has a choice of lightweight window managers such as IceWM and Fluxbox. CorePlus also has support for Wi-Fi and non-US keyboards.

TinyCore saves on storage space, but you need a wired network during initial installation; it does not come with many applications beyond a terminal, a basic text editor, and a network connection manager.

Use the TinyCore Control Panel for quick access to different system configurations, and use the graphical package manager to install more software such as multimedia codecs.

Minimum Requirements for Installation

  • 128MB RAM
  • 32-bit and 64-bit processors; other builds such as piCore (for the Raspberry Pi) are also available
  • 5GB disk space
Based on: BusyBox, flwm, Tiny X, FLTK
Desktop Environment: GNOME, i3, IceWM, HackedBox
Package Management: tce-load
General Purpose: Desktop
Download Link: http://tinycorelinux.net/downloads.html

# Puppy Linux


Puppy Linux is a veteran in the world of lightweight Linux, and it boasts a vast range of applications across its different versions. The current release uses the Xenial Pup edition, which works with the Ubuntu repositories.

Being one of the oldest lightweight distributions in the market, its project developers have been working to keep it slim and light for more than a decade. The different versions are Slacko Puppy 6.3.2, based on Slackware, and XenialPup 7.5, based on Ubuntu 16.04 LTS.

Puppy Linux is full of apps, including some unusual ones like HomeBank, which helps with financial management, and GWhere, which manages disk catalogs. A graphical tool is also available for managing Samba shares and the firewall.

XenialPup uses the QuickPet utility to manage the installation of the most popular apps.

Minimum Requirements for Installation

  • 128MB RAM
  • 32-bit and 64-bit processors
  • 5GB disk space

Best Enterprise Server Distros 2019

In the server Operating System arena, Linux enjoys the bigger share because of things like stability, freedom, security, and hardware support. Linux servers suit expert users and System Administrators, as well as special users like programmers, gamers, and ethical hackers.

These Operating Systems have special tools and enjoy long-term support. They give the user the best uptime, security, efficiency, and optimum performance. Let us look at two of the most used Linux server options.

# RedHat


Red Hat Enterprise Linux (RHEL) Server enjoys the same position held by Ubuntu in the world of desktop Linux. Red Hat, the maker of RHEL, is a player that has been in the industry for a very long time, and it has refined this server Operating System to ensure that most software packages and hardware get its certification support.

In addition, RHEL has long-term ongoing support, which counts for a lot in Linux server Operating Systems.
The latest version comes on three disks and needs only 38 minutes to install using the graphical interface. There is the option of installing the Desktop version for new users who want to try out the features; installing the server version will only install software that supports the server edition.

The Red Hat Server edition has two desktop environments, KDE 3.0 and GNOME 1.4. The Nautilus file manager and Ximian Evolution will help you manage the system. You can comfortably recreate an Office/Outlook environment with the presence of similar applications that support email, calendar, contacts, and Palm OS integration.

The professional (paid) version offers the Sun Microsystems Star Office 5.2.

Minimum Requirements for Installation

  • Any Pentium class processor (X86, POWER Architecture, z/Architecture, S/390)
  • 5GB hard disk space
  • 32MB RAM – no graphics, 64MB RAM – with graphics
  • Network interface

# Suse Server


SUSE Linux Enterprise Server (SLES) is an Operating System that opens new avenues for transformation in the software-defined era. SLES makes IT infrastructure efficient by engaging system developers to help solve critical workloads in the organization.

It bridges software-defined infrastructure by providing a common base that enables easy migration of applications, improves system management, and makes it easy to adopt containers.

SUSE automatically installs the minimal required server packages; use the YaST Control Centre to configure the network and most of the system settings. The Zypper package manager is good for downloading and installing essential server software such as Postfix.

Minimum Requirements for Installation

  • 4GB RAM
  • 16GB disk space
  • Network interface
Based on: Independent distribution (uses RPM packages)
Desktop Environment: GNOME (default), GNOME Classic, IceWM, SLE Classic
Package Management: Zypper
General Purpose: Desktop and Server
Download Link: https://www.suse.com/products/

Best Linux Distribution 2019 for Programmers

Most developers use Linux-based Operating Systems to get their work done or to create something new. Programmers, more than anybody else, are concerned with the power, stability, compatibility, and flexibility of an OS.

Ubuntu and Debian seem to be the leading contenders, but there are several others; today we will focus on Debian.

# Debian


The Debian GNU/Linux distribution is the foundation many other Linux distros build on. The latest version, otherwise known as the Stretch edition, also claims its position as a preferred programmers’ choice.

The huge number of packages, designed to be stable, comes with self-help tutorials that will help you solve issues as you work through your project. The Debian project has a testing branch where all the new software packages are staged; the testing branch is a good fit for advanced programmers and system administrators.

Linux beginners will not find Debian friendly, as its focus is on advanced programmers and users. The reasons to consider Debian for your programming tasks are the ready availability of resources and the mature .deb package management.

Version 9 of Debian uses GNOME 3.22.2 and KDE 5.8. Other interfaces include Budgie 10.2, Cinnamon, MATE, and LXQt. All are available from the software package manager.

Minimum Requirements for Installation

  • 512MB RAM
  • 2GB free Hard disk space
Based on: Independent (Debian is itself the base for many other distros)
Desktop Environment: GNOME 3, KDE 5.8, Cinnamon, MATE, Budgie 10.2
Package Management: dpkg
General Purpose: Desktop and Multipurpose
Download Link: https://www.debian.org/distrib

Our thoughts

The effort to look for the best Linux distribution gave us a few excellent options in different categories. We tried and experimented with most of them and listed what we saw as the most appropriate in each category in this article. While doing this, we were alert to the fact that changes take place and new things always pop up, so we encourage your participation: add what you think was left out to the list through the comment section below.
Please do not forget to tell us which ones you like or find to be the better distro.

Source

Bash 5.0 Released with New Features

Last updated January 9, 2019

The mailing list recently confirmed the release of Bash 5.0, and it is exciting to know that it comes baked with new features and variables.

Well, if you’ve been using Bash 4.4.XX, you will definitely love the fifth major release of Bash.

The fifth major release focuses on new shell variables and a host of major bug fixes. It also introduces a couple of new features along with some incompatible changes between bash-4.4 and bash-5.0.

Bash logo

What about the new features?

The mailing list explains the bugs fixed in this new release:

This release fixes several outstanding bugs in bash-4.4 and introduces several new features. The most significant bug fixes are an overhaul of how nameref variables resolve and a number of potential out-of-bounds memory errors discovered via fuzzing. There are a number of changes to the expansion of $@ and $* in various contexts where word splitting is not performed to conform to a Posix standard interpretation, and additional changes to resolve corner cases for Posix conformance.

It also introduces some new features. As per the release note, the most notable are several new shell variables:

BASH_ARGV0, EPOCHSECONDS, and EPOCHREALTIME. The ‘history’ builtin can remove ranges of history entries and understands negative arguments as offsets from the end of the history list. There is an option to allow local variables to inherit the value of a variable with the same name at a preceding scope. There is a new shell option that, when enabled, causes the shell to attempt to expand associative array subscripts only once (this is an issue when they are used in arithmetic expressions). The ‘globasciiranges’ shell option is now enabled by default; it can be set to off by default at configuration time.
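
A quick sketch of the new variables on a bash-5.0 shell:

$ echo $EPOCHSECONDS     # seconds since the Unix epoch
$ echo $EPOCHREALTIME    # epoch time with microsecond precision
$ BASH_ARGV0=myname      # assigning to BASH_ARGV0 also sets $0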

What about the changes between Bash-4.4 and Bash-5.0?

The update log mentions the incompatible changes and the supported Readline versions. Here's what it says:

There are a few incompatible changes between bash-4.4 and bash-5.0. The changes to how nameref variables are resolved means that some uses of namerefs will behave differently, though I have tried to minimize the compatibility issues. By default, the shell only sets BASH_ARGC and BASH_ARGV at startup if extended debugging mode is enabled; it was an oversight that it was set unconditionally and caused performance issues when scripts were passed large numbers of arguments.

Bash can be linked against an already-installed Readline library rather than the private version in lib/readline if desired. Only readline-8.0 and later versions are able to provide all of the symbols that bash-5.0 requires; earlier versions of the Readline library will not work correctly.

I believe some of the features/variables added are very useful. Some of my favorites are:

  • There is a new (disabled by default, undocumented) shell option to enable and disable sending history to syslog at runtime.
  • The shell doesn’t automatically set BASH_ARGC and BASH_ARGV at startup unless it’s in debugging mode, as the documentation has always said, but will dynamically create them if a script references them at the top level without having enabled debugging mode.
  • The ‘history’ builtin can now delete ranges of history entries using ‘-d start-end’ (see the sketch after this list).
  • If a non-interactive shell with job control enabled detects that a foreground job died due to SIGINT, it acts as if it received the SIGINT.
  • BASH_ARGV0: a new variable that expands to $0 and sets $0 on assignment.
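
A short sketch of the new history behavior, assuming a bash-5.0 shell:

$ history -d 100-105    # delete history entries 100 through 105
$ history -d -1         # a negative offset counts back from the end of the list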

To check the complete list of changes and features, you should refer to the mailing list post.

Wrapping Up

You can check your current Bash version using this command:

$ bash --version

It’s more likely that you’ll have Bash 4.4 installed. If you want to get the new version, I would advise waiting for your distribution to provide it.

With Bash-5.0 available, what do you think about it? Are you using any alternative to bash? If so, would this update change your mind?

Let us know your thoughts in the comments below.


Source

5 Useful Ways to Do Arithmetic in Linux Terminal

In this article, we will show you various useful ways of doing arithmetic in the Linux terminal. By the end of this article, you will have learned several practical ways of doing mathematical calculations on the command line.

Let’s get started!

1. Using Bash Shell

The first and easiest way to do basic math on the Linux CLI is using double parentheses. Here are some examples where we use values stored in variables:

$ ADD=$(( 1 + 2 ))
$ echo $ADD
$ MUL=$(( $ADD * 5 ))
$ echo $MUL
$ SUB=$(( $MUL - 5 ))
$ echo $SUB
$ DIV=$(( $SUB / 2 ))
$ echo $DIV
$ MOD=$(( $DIV % 2 ))
$ echo $MOD
Arithmetic in Linux Bash Shell
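
Note that $(( )) arithmetic is integer-only; division discards the fractional part. For example:

$ echo $(( 7 / 2 ))    # prints 3, not 3.5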

2. Using expr Command

The expr command evaluates expressions and prints the value of the provided expression to standard output. We will look at different ways of using expr for doing simple math, making comparisons, incrementing the value of a variable, and finding the length of a string.

The following are some examples of doing simple calculations using the expr command. Note that many operators need to be escaped or quoted in shells, for instance the * operator (we will look at more examples under comparison of expressions).

$ expr 3 + 5
$ expr 15 % 3
$ expr 5 \* 3
$ expr 5 - 3
$ expr 20 / 4
Basic Arithmetic Using expr Command in Linux

Next, we will cover how to make comparisons. When an expression evaluates to false, expr will print a value of 0, otherwise it prints 1.

Let’s look at some examples:

$ expr 5 = 3
$ expr 5 = 5
$ expr 8 != 5
$ expr 8 \> 5
$ expr 8 \< 5
$ expr 8 \<= 5
Comparing Arithmetic Expressions in Linux

You can also use the expr command to increment the value of a variable. Take a look at the following example (in the same way, you can also decrease the value of a variable).

$ NUM=$(( 1 + 2))
$ echo $NUM
$ NUM=$(expr $NUM + 2)
$ echo $NUM
Increment Value of a Variable

Let's also look at how to find the length of a string:

$ expr length "This is Tecmint.com"
Find Length of a String

For more information especially on the meaning of the above operators, see the expr man page:

$ man expr

3. Using bc Command

bc (Basic Calculator) is a command-line utility that provides all the features you expect from a simple scientific or financial calculator. It is especially useful for doing floating-point math.

If the bc command is not installed, you can install it using:

$ sudo apt install bc   #Debian/Ubuntu
$ sudo yum install bc   #RHEL/CentOS
$ sudo dnf install bc   #Fedora 22+

Once installed, you can run it in interactive mode or non-interactively by passing arguments to it; we will look at both cases. To run it interactively, type bc at the command prompt and start doing some math, as shown.

$ bc 
Start bc in Interactive Mode
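
In interactive mode, the scale variable controls how many digits appear after the decimal point. For example:

$ bc
scale=3
12/5
2.400
quit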

The following examples show how to use bc non-interactively on the command-line.

$ echo '3+5' | bc
$ echo '15 % 2' | bc
$ echo '15 / 2' | bc
$ echo '(6 * 2) - 5' | bc
Do Math Using bc in Linux

The -l flag loads the math library and sets the default scale (digits after the decimal point) to 20, for example:

$ echo '12/5' | bc
$ echo '12/5' | bc -l
Do Math with Floating Numbers

4. Using Awk Command

Awk is one of the most prominent text-processing programs in GNU/Linux. It supports the addition, subtraction, multiplication, division, and modulus arithmetic operators. It is also useful for doing floating point math.

You can use it to do basic math as shown.

$ awk 'BEGIN { a = 6; b = 2; print "(a + b) = ", (a + b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a - b) = ", (a - b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a * b) = ", (a * b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a / b) = ", (a / b) }'
$ awk 'BEGIN { a = 6; b = 2; print "(a % b) = ", (a % b) }'
Do Basic Math Using Awk Command
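
Since awk does floating-point arithmetic natively, you can also control the output precision with printf. For example:

$ awk 'BEGIN { printf "%.3f\n", 10 / 3 }'    # prints 3.333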

If you are new to Awk, we have a complete series of guides to get you started with learning it: Learn Awk Text Processing Tool.

5. Using factor Command

The factor command is used to decompose an integer into its prime factors. For example:

$ factor 10
$ factor 127
$ factor 222
$ factor 110  
Factor a Number in Linux
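
Each prime factor is printed with its multiplicity. For instance:

$ factor 24
24: 2 2 2 3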

That's all! In this article, we have explained various useful ways of doing arithmetic in the Linux terminal. Feel free to ask any questions or share any thoughts about this article via the feedback form below.

Source

Backup and Restore Ubuntu Applications using Aptik

How can Aptik Help?

With Aptik, you can back up the following with just a click or two:

  • Launchpad PPAs from your current system and restore them to the new system
  • All installed software from your current system and restore them to the new system
  • Apt-cache downloaded packages from your current system and restore them to the new system
  • App configurations from your current system and restore them to the new system
  • Your home directory including the configuration files and restore them to the new system
  • Themes and icons from the /usr/share directory and restore them to the new system
  • Selected items from your system, backed up with one click and restored to your new system

In this article, we will explain how you can install the Aptik command line tool and Aptik GTK (the UI tool) on Ubuntu through the command line. We will then tell you how to back up your stuff from the old system and restore it to your new Ubuntu. In the end, we will also explain how you can uninstall Aptik if you want to remove it from your new system after restoring your applications and other useful stuff.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.

Installing Aptik and Aptik GTK

We will be installing Aptik CLI and Aptik GTK through the Ubuntu command line, the Terminal. You can open the Terminal application either through the system Dash or the Ctrl+Alt+T shortcut.

First, add the PPA repository through which we will be installing Aptik, using the following command:

$ sudo apt-add-repository -y ppa:teejee2008/ppa

Add Aptik Ubuntu Repository

Please note that only an authorized user can add/remove and update software on Ubuntu.

Now update your system’s repository index with that of the Internet by entering the following command as sudo:

$ sudo apt-get update

Update package lists

Finally, enter the following command in order to install Aptik:

$ sudo apt-get install aptik

Install Aptik

The system will prompt you with a Y/n option to confirm installation. Please enter Y and then hit Enter to continue, after which Aptik will be installed on your system.

Once done, you can check which version of Aptik is installed on your system by running the following command:

$ aptik --version

Check Aptik version

Similarly, you can install the graphics utility of Aptik, Aptik GTK, through the following command as sudo:

$ sudo apt-get install aptik-gtk

Install aptik-gtk

Launch and Use Aptik GTK

If you want to launch Aptik GTK through the command line, simply enter the following command:

$ aptik-gtk

Run Aptik GTK

You can also launch it through the UI by either searching for it in the system Dash or accessing it from the Ubuntu applications list.

Locate Aptik application

Every time you launch this application, you will be required to provide superuser authentication, as only an authorized user can run it.

Authenticate as admin user

Provide the password for the superuser and then click the Authenticate button. This will open the Aptik application for you in the following view:

Configure Aptik backup mode and location

Backup

If you want to back up stuff from your current system, select the Backup option under Backup Mode. Then provide a valid path to which you want to back up your apps, PPAs, and other stuff.

Set backup Location

Next is to select the Backup tab from the left pane:

What shall be backed up

In this view, you can see all the items you can back up. Select your choices one by one, or click the Backup All Items button to back up everything listed.

Restore

On your new system, open Aptik GTK and select the Restore option under Backup Mode. Then provide a valid path from which you want to restore your stuff:

Restore Mode

Next is to select the Restore tab from the left pane:

Restore settings

From this view, select all the stuff you want to restore to your new computer or else click the Restore All Items button to restore everything that you backed up from your previous system.

Using Aptik CLI

If you want to back up or restore stuff through the command line, the Aptik help can be really useful. Use one of the following commands to list the detailed help on Aptik:

$ aptik
$ aptik --help

Aptik commandline options
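
As a sketch of driving a backup and restore entirely from the CLI (the --backup-all, --restore-all, and --basepath flags are assumptions here; confirm them against the aptik --help output for your version):

$ sudo aptik --backup-all --basepath /mnt/backup     # flags assumed; verify with aptik --help
$ sudo aptik --restore-all --basepath /mnt/backup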

Uninstall Aptik and Aptik GTK

When you no longer need Aptik, you can use the following apt-get commands to remove Aptik and Aptik GTK:

$ sudo apt-get remove aptik
$ sudo apt-get remove aptik-gtk

followed by:

$ sudo apt-get autoremove

After reading this article, you are now capable of securely transporting useful applications, PPAs, and other application-related data from your current Ubuntu system to your new one. Thanks to the very simple installation procedure and a few clicks to select what you want to back up and restore, you can save a lot of time and effort when switching to a new system.

Source

Linux Commands for Measuring Disk Activity

Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we examine five very useful commands for looking into disk activity. Two of the commands (iostat and ioping) may have to be added to your system, and two of them (iotop and ioping) require sudo privileges, but all five provide useful ways to view disk activity.

Probably one of the easiest and most obvious of these commands is dstat.

dstat

In spite of the fact that the dstat command begins with the letter “d”, it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the -d option. As shown below, you’ll get a continuous list of disk read/write measurements until you stop the display with a ^c. Note that after the first report, each subsequent row in the display will report disk activity in the following time interval, and the default is only one second.

$ dstat -d
-dsk/total-
 read  writ
 949B   73k
  65k     0    <== first second
   0    24k    <== second second
   0    16k
   0	0 ^C

Including a number after the -d option will set the interval to that number of seconds.

$ dstat -d 10
-dsk/total-
 read  writ
 949B   73k
  65k   81M    <== first ten seconds
   0    21k    <== second ten seconds
   0  9011B ^C
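
dstat also accepts an optional count after the interval, so you can take a fixed number of samples and exit:

$ dstat -d 5 10    # ten reports at five-second intervals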

Notice that the reported data may be shown in a number of different units — e.g., M (megabytes), k (kilobytes), and B (bytes).

Without options, the dstat command is going to show you a lot of other information as well — indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches.

$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
  0   0 100   0   0| 949B   73k|   0     0 |   0     3B|  38    65
  0   0 100   0   0|   0     0 | 218B  932B|   0     0 |  53    68
  0   1  99   0   0|   0    16k|  64B  468B|   0     0 |  64    81 ^C

The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools, such as vmstat, netstat, iostat, and ifstat, with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the dstat command.

iostat

The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It’s sometimes used to evaluate the balance of activity between disks.

$ iostat
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_       (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             0.00         0.00         0.00       1048          0
loop1             0.00         0.00         0.00        365          0
loop2             0.00         0.00         0.00       1056          0
loop3             0.00         0.01         0.00      16169          0
loop4             0.00         0.00         0.00        413          0
loop5             0.00         0.00         0.00       1184          0
loop6             0.00         0.00         0.00       1062          0
loop7             0.00         0.00         0.00       5261          0
sda               1.06         0.89        72.66    2837453  232735080
sdb               0.00         0.02         0.00      48669         40
loop8             0.00         0.00         0.00       1053          0
loop9             0.01         0.01         0.00      18949          0
loop10            0.00         0.00         0.00         56          0
loop11            0.00         0.00         0.00       7090          0
loop12            0.00         0.00         0.00       1160          0
loop13            0.00         0.00         0.00        108          0
loop14            0.00         0.00         0.00       3572          0
loop15            0.01         0.01         0.00      20026          0
loop16            0.00         0.00         0.00         24          0

Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the -p option, which allows you to just look at your disks — as shown in the commands below.

$ iostat -p sda
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               1.06         0.89        72.54    2843737  232815784
sda1              1.04         0.88        72.54    2821733  232815784

Note that tps refers to transfers per second.

You can also get iostat to provide repeated reports. In the example below, we're getting measurements every five seconds by adding an interval argument after the -d option.

$ iostat -p sda -d 5
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               1.06         0.89        72.51    2843749  232834048
sda1              1.04         0.88        72.51    2821745  232834048

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.80         0.00        11.20          0         56
sda1              0.80         0.00        11.20          0         56

If you prefer to omit the first (stats since boot) report, add a -y to your command.

$ iostat -p sda -d 5 -y
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda               0.80         0.00        11.20          0         56
sda1              0.80         0.00        11.20          0         56
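
You can also bound the run by appending a count after the interval, for example:

$ iostat -y -p sda -d 5 3    # three five-second reports, skipping the since-boot summary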

Next, we look at our second disk drive.

$ iostat -p sdb
Linux 4.18.0-041800-generic (butterfly)         12/26/2018      _x86_64_        (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.07    0.01    0.03    0.05    0.00   99.85

Device             tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               0.00         0.02         0.00      48669         40
sdb2              0.00         0.00         0.00       4861         40
sdb1              0.00         0.01         0.00      35344          0

iotop

The iotop command is a top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea of which processes are most demanding in terms of disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output.

$ sudo iotop -d 5
Total DISK READ:         0.00 B/s | Total DISK WRITE:      1585.31 B/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:      12.39 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
32492 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.12 % [kworker/u8:1-ev~_power_efficient]
  208 be/3 root        0.00 B/s 1585.31 B/s  0.00 %  0.11 % [jbd2/sda1-8]
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init splash
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_gp]
    4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_par_gp]
    8 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [mm_percpu_wq]

ioping

The ioping command is an altogether different type of tool, but it can report disk latency — how long it takes a disk to respond to requests — and can be helpful in diagnosing disk problems.

$ sudo ioping /dev/sda1
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup)
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us
4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms
^C
--- /dev/sda1 (block device 111.8 GiB) ioping statistics ---
3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s
generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s
min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us
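
If you would rather not interrupt with ^C, ioping can issue a fixed number of requests via its count option:

$ sudo ioping -c 5 /dev/sda1    # send exactly five requests, then print the summary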

atop

The atop command, like top, provides a lot of information on system performance, including some stats on disk activity.

ATOP - butterfly      2018/12/26  17:24:19      37d3h13m elapsed
PRC | sys    0.03s | user   0.01s | #proc    179 | #zombie    0 | #exit      6 |
CPU | sys       1% | user      0% | irq       0% | idle    199% | wait      0% |
cpu | sys       1% | user      0% | irq       0% | idle     99% | cpu000 w  0% |
CPL | avg1    0.00 | avg5    0.00 | avg15   0.00 | csw      677 | intr     470 |
MEM | tot     5.8G | free  223.4M | cache   4.6G | buff  253.2M | slab  394.4M |
SWP | tot     2.0G | free    2.0G |              | vmcom   1.9G | vmlim   4.9G |
DSK |          sda | busy      0% | read       0 | write      7 | avio 1.14 ms |
NET | transport    | tcpi       4 | tcpo       8 | udpi       1 | udpo       0 |
NET | network      | ipi       10 | ipo        7 | ipfrw      0 | deliv     10 |
NET | enp0s25   0% | pcki      10 | pcko       8 | si    1 Kbps | so    3 Kbps |

  PID SYSCPU  USRCPU  VGROW   RGROW  ST EXC   THR  S CPUNR   CPU  CMD        1/1 |
 3357  0.01s   0.00s   672K    824K  --   -     1  R     0    0%  atop
 3359  0.01s   0.00s     0K      0K  NE   0     0  E     -    0%  <ps>
 3361  0.00s   0.01s     0K      0K  NE   0     0  E     -    0%  <ps>
 3363  0.01s   0.00s     0K      0K  NE   0     0  E     -    0%  <ps>
31357  0.00s   0.00s     0K      0K  --   -     1  S     1    0%  bash
 3364  0.00s   0.00s  8032K    756K  N-   -     1  S     1    0%  sleep
 2931  0.00s   0.00s     0K      0K  --   -     1  I     1    0%  kworker/u8:2-e
 3356  0.00s   0.00s     0K      0K  -E   0     0  E     -    0%  <sleep>
 3360  0.00s   0.00s     0K      0K  NE   0     0  E     -    0%  <sleep>
 3362  0.00s   0.00s     0K      0K  NE   0     0  E     -    0%  <sleep>

If you want to look at just the disk stats, you can easily manage that with a command like this:

$ atop | grep DSK
DSK |          sda | busy      0% | read  122901 | write 3318e3 | avio 0.67 ms |
DSK |          sdb | busy      0% | read    1168 | write    103 | avio 0.73 ms |
DSK |          sda | busy      2% | read       0 | write     92 | avio 2.39 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.47 ms |
DSK |          sda | busy      2% | read       0 | write     99 | avio 2.26 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
DSK |          sda | busy      2% | read       0 | write     94 | avio 2.43 ms |
DSK |          sda | busy      2% | read       0 | write     92 | avio 2.43 ms |
^C

Being in the know with disk I/O

Linux provides enough commands to give you good insights into how hard your disks are working and help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it’s time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks will be obvious when you need to check them.

Source
