Understanding Debian GNU/Linux Releases – Linux Hint

The universe of the Debian GNU/Linux distribution comes with its own odds and ends. In this article we explain what a Debian release is, how it is named, and what the basic criteria are for a software package to become part of a regular release.

What is a Debian release?

Debian GNU/Linux is a non-commercial Linux distribution that was started in 1993 by Ian Murdock. Currently, it consists of about 51,000 software packages that are available for a variety of architectures such as Intel (both 32 and 64 bit), ARM, PowerPC, and others [2]. Debian GNU/Linux is maintained by a large number of volunteer contributors from all over the world. This includes software developers and package maintainers – a single person or a group of people who take care of a package as a whole [3].

A Debian release is a collection of stable software packages that follow the Debian Free Software Guidelines (DFSG) [4]. These packages are well-tested and fit together in such a way that all the dependencies between the packages are met and you can install and use the software without problems. The result is a reliable operating system for your everyday work. Although originally targeted at server systems, Debian no longer has a specific target (“The Universal OS”) and nowadays is widely used on desktop systems as well as mobile devices.

In contrast to other Linux distributions like Ubuntu or Linux Mint, the Debian GNU/Linux distribution does not have a release cycle with fixed dates. It rather follows the slogan “Release only when everything is ready” [1]. Nevertheless, a major release comes out about every two years [8]. For example, version 9 came out in 2017, and version 10 is expected to be available in mid-2019. Security updates for Debian stable releases are provided as soon as possible from a dedicated APT repository. Additionally, minor stable releases are published in between; they contain important non-security bug fixes as well as minor security updates. Neither the general selection of software packages nor their major version numbers change within a release.

In order to see which version of Debian GNU/Linux you are running on your system, have a look at the file /etc/debian_version as follows:

$ cat /etc/debian_version
9.6
$

This shows that the command was run on Debian GNU/Linux 9.6. Having installed the package “lsb-release” [14], you can get more detailed information by running the command “lsb_release -a”:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 9.6 (stretch)
Release: 9.6
Codename: stretch
$
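
If the lsb-release package is not installed, the file /etc/os-release offers similar details on current Debian systems. The fields below are what you would typically see on Debian 9 and may differ slightly on your machine:

$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$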

What about these funny release names?


You may have noticed that every Debian GNU/Linux release has a funny release name. This is called an alias name, and it is taken from a character of the Toy Story film series [5] released by Pixar [6]. When the first Debian 1.x release was due, the Debian Project Leader at the time, Bruce Perens, worked for Pixar [9]. Up to now the following names have been used for releases:

  • Debian 1.0 was never published officially, because a CD vendor accidentally shipped a development version labeled as “1.0” [10], so Debian and the CD vendor jointly announced that “this release was screwed”, and Debian released version 1.1 about half a year later instead.
  • Debian 1.1 Buzz (17 June 1996) – named after Buzz Lightyear, the astronaut
  • Debian 1.2 Rex (12 December 1996) – named after Rex the plastic dinosaur
  • Debian 1.3 Bo (5 June 1997) – named after Bo Peep the shepherd
  • Debian 2.0 Hamm (24 July 1998) – named after Hamm the piggy bank
  • Debian 2.1 Slink (9 March 1999) – named after the dog Slinky Dog
  • Debian 2.2 Potato (15 August 2000) – named after the puppet Mr Potato Head
  • Debian 3.0 Woody (19 July 2002) – named after the cowboy Woody Pride who is the main character of the Toy Story film series
  • Debian 3.1 Sarge (6 June 2005) – named after the sergeant of the green plastic soldiers
  • Debian 4.0 Etch (8 April 2007) – named after the writing board Etch-A-Sketch
  • Debian 5.0 Lenny (14 February 2009) – named after the pull-out binoculars
  • Debian 6.0 Squeeze (6 February 2011) – named after the green three-eyed aliens
  • Debian 7 Wheezy (4 May 2013) – named after Wheezy the penguin with the red bow tie
  • Debian 8 Jessie (25 April 2015) – named after the cowgirl Jessica Jane “Jessie” Pride
  • Debian 9 Stretch (17 June 2017) – named after the purple octopus
  • Debian 10 Buster (no release date known so far) – named after the puppy dog from Toy Story 2

As of the beginning of 2019, the release names for two future releases are also already known [8]:

  • Debian 11 Bullseye – named after Bullseye, the horse of Woody Pride
  • Debian 12 Bookworm – named after Bookworm, the intelligent worm toy with a built-in flashlight from Toy Story 3.

Relation between alias name and development state

New or updated software packages are uploaded to the unstable branch first. After some days a package migrates to the testing branch if it fulfills a number of criteria. The testing branch later becomes the basis for the next stable release. The release of a distribution contains only stable packages, which are effectively a snapshot of the current testing branch.

At the same moment a new release comes out, the previous stable release becomes oldstable, and the oldstable release becomes oldoldstable. The packages of any end-of-life release are removed from the normal APT repositories and mirrors, are transferred to the Debian Archive [11], and are no longer maintained. Debian is currently developing a site to search through archived packages, the Historical Packages Search [12]. This site is still under development, though, and known to be not yet fully functional.

As with the other releases, the unstable branch has an alias name: Sid, which is short for “still in development”. In Toy Story, Sid is the name of the evil neighbor's child who always damages the toys. The name Sid accurately describes the condition of a package in the unstable branch.

Additionally, there is also the “experimental” branch which is not a complete distribution but an add-on repository for Debian Unstable. This branch contains packages which do not yet fulfill the quality expectations of Debian unstable. Furthermore, packages are placed there in order to prepare library transitions so that packages from Debian unstable can be checked for build issues with a new version of a library without breaking Debian unstable.

The experimental branch of Debian also has a Toy Story name – “RC-Buggy”. On the one hand this is Andy's remote-controlled car, and on the other hand it abbreviates the description “contains release-critical bugs” [13].

Parts of the Debian GNU/Linux Distribution

Debian software packages are categorized by their license as follows:

  • main: entirely free
  • contrib: entirely free but the packages depend on non-free packages
  • non-free: software that does not conform to the Debian Free Software Guidelines (DFSG)

An official release of Debian GNU/Linux consists of packages from the main branch only. The packages classified under contrib and non-free are not part of the release; they are seen as additions that are merely made available to you. Which packages you use on your system is defined in the file /etc/apt/sources.list as follows:

$ cat /etc/apt/sources.list
deb http://ftp.us.debian.org/debian/ stretch main contrib non-free
deb http://security.debian.org/ stretch/updates main contrib non-free

# stretch-updates, previously known as 'volatile'
deb http://ftp.us.debian.org/debian/ stretch-updates main contrib non-free

# stretch-backports
deb http://ftp.debian.org/debian stretch-backports main contrib non-free

Debian Backports

From the listing above you may have noticed the entry titled stretch-backports. This entry refers to software packages that are ported back from Debian testing to the current Debian stable release. The reason for this package repository is that the release cycle of a stable Debian GNU/Linux release can be quite long, and sometimes a newer version of a piece of software is required for a specific machine. Debian Backports [7] allows you to use packages from future releases in your current setup. Be aware that these packages might not be on par with the quality of Debian stable packages. Also, take into account that you might need to switch to a newer upstream release every once in a while even during a stable release cycle, as these packages follow Debian testing, which is a kind of rolling release (similar to Debian unstable).
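
Backported packages are never installed automatically; you have to request them explicitly with the -t option. A minimal sketch (the package name here is just a placeholder):

$ sudo apt-get update
$ sudo apt-get -t stretch-backports install some-package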

Further Reading

The story behind Debian GNU/Linux is amazing. We recommend having a closer look at the Debian History [15,16,17].

Source

How to Install NextCloud 15 on Ubuntu 18.04

Install NextCloud on Ubuntu

NextCloud is a free and open-source self-hosted file sharing and communication platform built using PHP. It is a great alternative to some of the popular services available on the market, such as Dropbox, Google Drive, OwnCloud, etc. With NextCloud, you can easily store your data on your Ubuntu 18.04 VPS, create and manage your contacts, calendars, to-do lists, and much more. In this tutorial, we will install NextCloud version 15 on an Ubuntu 18.04 VPS – version 15 is a major release that comes with a lot of new features and improvements.

Prerequisites:

– An Ubuntu 18.04 VPS
– A system user with root privileges
– MySQL or MariaDB database server version 5.5 or newer with InnoDB storage engine.
– Apache 2.4 with mod_php enabled
– PHP version 7.0 or newer

Log in and update the server:

Log in to your Ubuntu 18.04 VPS via SSH as user root:

ssh root@IP_Address -p Port_number

Don’t forget to replace ‘IP_Address’ and ‘Port_number’ with the actual IP address of your server and the SSH service port.

Run the following commands to make sure that all installed packages on your Ubuntu 18.04 VPS are updated to the latest available version:

apt update && apt upgrade

Install Apache and PHP:

We need to install the Apache web server in order to serve the NextCloud files. It can be done easily by using the following command:

apt -y install apache2

Once the web server is installed, enable it to automatically start after a server restart:

systemctl enable apache2

Verify that the web server is up and running on your server:

service apache2 status

This is what the output should look like:

apache2.service - The Apache HTTP Server
   Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─apache2-systemd.conf
   Active: active (running) since Thu 2018-12-27 05:13:26 CST; 12min ago

Since NextCloud is a PHP-based application, our next step is to install PHP and some PHP extensions required by NextCloud:

apt -y install php php-cli php-common php-curl php-xml php-gd php-mbstring php-zip php-mysql

Restart the Apache web server to load the PHP modules:

systemctl restart apache2

Now check the PHP version installed on your server:

php -v
PHP 7.2.10-0ubuntu0.18.04.1 (cli) (built: Sep 13 2018 13:45:02) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
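
Optionally, you may also want to raise a few PHP limits that NextCloud benefits from. The values below are only suggestions, and the php.ini path assumes the default PHP 7.2 Apache package on Ubuntu 18.04:

nano /etc/php/7.2/apache2/php.ini

memory_limit = 512M
upload_max_filesize = 200M
post_max_size = 200M

Remember to restart Apache after changing php.ini.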

Install MariaDB and create a database:

NextCloud needs an SQL database to store information. For this purpose, we will install the MariaDB database server by executing the following command:

apt -y install mariadb-server

Just like with Apache, enable MariaDB to automatically start after server reboot:

systemctl enable mariadb

Next, run the ‘mysql_secure_installation’ post-installation script to set a password for the MariaDB root user and to further improve the security of your MariaDB server. Once all steps are completed, you can go ahead and log in to the MariaDB server as the root user. We will then create a new user and database – both of which are necessary for installing NextCloud.

mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nextcloud;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud_user'@'localhost' IDENTIFIED BY 'PASSWORD';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit;

Don’t forget to replace ‘PASSWORD’ with a strong password.

Download and install NextCloud:

Go to NextCloud’s official website and download the latest stable release of the application. At the time of this article being published, the latest version of NextCloud is version 15.0.0.

wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip

Once the zip archive is downloaded, unpack it to the document root directory on your server:

unzip nextcloud-15.0.0.zip -d /var/www/html/

All files will be stored under a directory named ‘nextcloud’.

Remove the zip archive and change the ownership of the NextCloud files:

rm -f nextcloud-15.0.0.zip
chown -R www-data:www-data /var/www/html/nextcloud
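
Optionally, you can also allow NextCloud's bundled .htaccess rules to take effect by permitting overrides for its directory. This is a minimal sketch; the file name nextcloud.conf is our own choice, and the paths assume the default Apache layout on Ubuntu 18.04:

nano /etc/apache2/conf-available/nextcloud.conf

<Directory /var/www/html/nextcloud/>
    AllowOverride All
    Require all granted
</Directory>

a2enconf nextcloud
a2enmod rewrite
systemctl reload apache2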

That was the last step of configuring your server and installing NextCloud through the command line. Now, you can open your preferred web browser and access http://Your_IP/nextcloud to continue with the setup. Make sure to replace “Your_IP” with your server’s IP address or domain name. If everything is properly configured, you will get the following screen:

Create an administrative account, set the data folder and enter the MariaDB details for the user and database we created earlier in this tutorial.

That’s all – if you followed the steps in the tutorial, you will have successfully installed NextCloud version 15 on your Ubuntu 18.04 VPS. For more details about its configuration and usage, please check their official documentation.


Of course, you don’t need to Install NextCloud 15 on Ubuntu 18.04 yourself if you use one of our NextCloud Hosting services, in which case you can simply ask our expert Linux admins to install and set this up for you. They are available 24×7 and will take care of your request immediately.


Source

Linux Today – Back to Basics: Sort and Uniq

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz

sort can operate on STDIN redirection, on input from a pipe, or, in the case of a file, you can also just specify the file on the command line. So, the three following commands all accomplish the same thing:


cat test | sort
sort < test
sort test

And the output that you get from all of these commands is:


Bar
Baz
Foo

Sorting Numerical Output

Now, let’s complicate the file by adding three more lines:


Foo
Bar
Baz
1. ZZZ
2. YYY
11. XXX

If you run one of the above sort commands again, this time, you’ll see different output:


11. XXX
1. ZZZ
2. YYY
Bar
Baz
Foo

This is likely not the output you wanted, but it points out an important fact about sort. By default, it sorts alphabetically, not numerically. This means that a line that starts with “11.” is sorted above a line that starts with “1.”, and all of the lines that start with numbers are sorted above lines that start with letters.

To sort numerically, pass sort the -n option:


sort -n test

Bar
Baz
Foo
1. ZZZ
2. YYY
11. XXX

Find the Largest Directories on a Filesystem

Numerical sorting comes in handy for a lot of command-line output—in particular, when your command contains a tally of some kind, and you want to see the largest or smallest in the tally. For instance, if you want to find out what files are using the most space in a particular directory and you want to dig down recursively, you would run a command like this:


du -ckx

This command dives recursively into the current directory and doesn't traverse any other mountpoints inside that directory. It tallies the file sizes and then outputs each directory in the order it found them, preceded by the size of the files underneath it in kilobytes. Of course, if you're running such a command, it's probably because you want to know which directory is using the most space, and this is where sort comes in:


du -ckx | sort -n

Now you’ll get a list of all of the directories underneath the current directory, but this time sorted by file size. If you want to get even fancier, pipe its output to the tail command to see the top ten. On the other hand, if you wanted the largest directories to be at the top of the output, not the bottom, you would add the-r option, which tells sort to reverse the order. So to get the top ten (well, top eight—the first line is the total, and the next line is the size of the current directory):


du -ckx | sort -rn | head

This works, but often people using the du command want to see sizes in more readable output than kilobytes. The du command offers the -h argument that provides “human-readable” output. So, you’ll see output like 9.6G instead of 10024764 with the -k option. When you pipe that human-readable output to sort though, you won’t get the results you expect by default, as it will sort 9.6G above 9.6K, which would be above 9.6M.

The sort command has a -h option of its own, and it acts like -n, but it’s able to parse standard human-readable numbers and sort them accordingly. So, to see the top ten largest directories in your current directory with human-readable output, you would type this:


du -chx | sort -rh | head

Removing Duplicates

The sort command isn’t limited to sorting one file. You might pipe multiple files into it or list multiple files as arguments on the command line, and it will combine them all and sort them. Unfortunately though, if those files contain some of the same information, you will end up with duplicates in the sorted output.

To remove duplicates, you need the uniq command, which by default removes any duplicate lines that are adjacent to each other from its input and outputs the results. So, let’s say you had two files that were different lists of names:


cat namelist1.txt
Jones, Bob
Smith, Mary
Babbage, Walter

cat namelist2.txt
Jones, Bob
Jones, Shawn
Smith, Cathy

You could remove the duplicates by piping to uniq:


sort namelist1.txt namelist2.txt | uniq
Babbage, Walter
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary
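
As a side note, for this simple de-duplication case sort can do the job by itself with its -u option, which is equivalent to piping the output to uniq:


sort -u namelist1.txt namelist2.txt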

The uniq command has more tricks up its sleeve than this. It also can output only the duplicated lines, so you can find duplicates in a set of files quickly by adding the -d option:


sort namelist1.txt namelist2.txt | uniq -d
Jones, Bob

You even can have uniq provide a tally of how many times it has found each entry with the -c option:


sort namelist1.txt namelist2.txt | uniq -c
1 Babbage, Walter
2 Jones, Bob
1 Jones, Shawn
1 Smith, Cathy
1 Smith, Mary

As you can see, “Jones, Bob” occurred the most times, but if you had a lot of lines, this sort of tally might be less useful for you, as you’d like the most duplicates to bubble up to the top. Fortunately, you have the sort command:


sort namelist1.txt namelist2.txt | uniq -c | sort -nr
2 Jones, Bob
1 Smith, Mary
1 Smith, Cathy
1 Jones, Shawn
1 Babbage, Walter

Conclusion

I hope these cases of using sort and uniq with realistic examples show you how powerful these simple command-line tools are. Half the secret with these foundational command-line tools is to discover (and remember) they exist so that they’ll be at your command the next time you run into a problem they can solve.

Source

Linus Torvalds Welcomes 2019 with Linux 5.x » Linux Magazine

Better support for GPUs and CPUs.

Linus Torvalds has announced the release of Linux 5.0-rc1. The kernel was supposed to be 4.21, but he decided to move to the 5.x series. Torvalds has made it clear that the numbering of the kernel doesn’t make much sense. So don’t get too excited about this release.

Torvalds explained in the LKML (Linux Kernel Mailing List), “The numbering change is not indicative of anything special. If you want to have an official reason, it’s that I ran out of fingers and numerology this time (we’re _about_ 6.5M objects in the git repo), and there isn’t any major particular feature that made for the release numbering either,” he said.

The release brings CPU and GPU improvements. In addition to support for AMD FreeSync displays, it also comes with support for the Raspberry Pi touchscreen.

Talking about the ‘content’ of the kernel Torvalds wrote, “The stats look fairly normal. About 50% is drivers, 20% is architecture updates, 10% is tooling, and the remaining 20% is all over (documentation, networking, filesystems, header file updates, core kernel code..).”

Source

GitHub Offers Free Private Repositories » Linux Magazine

Popular source code collaboration site makes a major change to feature set.

GitHub has announced that it is now taking on players like GitLab by offering free private repositories. Anyone could always set up a free repository on GitHub; the condition was that the code had to be public, which meant that projects and organizations could not set up private repositories. If they wanted a private repository, they had to pay.

Now anyone can create a private repository for free. The only caveat is that there can be at most three collaborators per project, which means big organizations can't exploit the free service to manage their mega projects.

A private repository lets developer communities work on the code base internally, away from the public. GitHub competitors like GitLab already offer free private repositories.

Source

Industry-Scale Collaboration at The Linux Foundation

 

Learn about the principles required to achieve a successful industry pivot to open source.

Linux and open source have changed the computer industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers that work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities that I work in, I’ve come to recognize the following as the common thread.

An Industry Pivot

Software has become the most important component in many industries, and it is needed in very large quantities. When an entire industry needs to make a technology “pivot,” they often do as much of that as possible in software. For example, the telecommunications industry must make such a pivot in order to support 5G, the next generation of mobile phone network. Not only will the bandwidth and throughput be increased with 5G, but an entirely new set of services will be enabled, including autonomous cars, billions of Internet-connected sensors and other devices (aka IoT), etc. To do that, telecom operators need to entirely redo their networks distributing millions of compute and storage instances very, very close to those devices/users.

Given the drastic changing usage of the network, operators need to be able to deploy, move and/or tear-down services near instantaneously running them on those far-flung compute resources and route the network traffic to and through those service applications in a fully automated fashion. That’s a tremendous amount of software. In the “old” model of complete competition, each vendor would build their solution to this customer need from the ground up and sell it to their telecom operator customers. It would take forever, cost a huge amount of money, and the customers would be nearly assured that one vendor’s system wouldn’t interoperate with another vendor’s solution. The market demands solutions that don’t take that long or cost that much and, if they don’t work together, their value is much less for the customer.

So, instead, all the members of the telecom industry, both vendors and customers are collaborating to build a large portion of the foundational platform software together, just once. Then, each vendor and operator will take that foundation of code and add whatever functionality they feel is differentiating for their customers, test it, harden it, and turn it into a full solution. This way, everyone gets to a solution much more quickly and with much less expense than would otherwise be possible. The mutual benefit of this is obvious. But how can they work together? How can they ensure that each participant in this community can get out of it what they need to be successful? These companies have never worked together before. Worse yet, they are fierce lifelong competitors with the only prior goal of putting the other out of business.

A Level Playing Field

This is what my team does at The Linux Foundation. We create and maintain that level playing field. We are both referee and janitor. We teach what we have learned from the long-term success of the Linux project, among others. Stay tuned for more blog posts detailing those principles and my experiences living those principles both as a participant in open source projects and as the referee.

So, bringing dozens of very large, fierce competitors, both vendors and customers, together and seeding the development effort with several million lines of code that usually only come from one or two companies is the task at hand. That’s never been done before by anyone. The set of projects under the Linux Foundation Networking umbrella is one large experiment in corporate collaborative development. Take ONAP as an example; its successful outcome is not assured in any way. Don’t get me wrong. The project has had an excellent start with three releases under its belt, and in general, things are going very well. However, there is much work to do and many ways for this community, and the organizations behind it, to become more efficient, and get to our end goal faster. Again, such a huge industry pivot has not been done as an open source collaboration before. To get there, we are applying the principles of fairness, technical excellence, and transparency that are the cornerstone of truly collaborative open source development ecosystems. As such, I am optimistic that we will succeed.

This industry-wide technology pivot is not isolated to the telecom sector. We are seeing it in many others. My goal in writing these articles on open source collaborative development principles, best practices, and experiences is to better explain to those new to this model how it works, why these principles are in place, and what to expect when things are working well, and when they are not. There are a variety of non-obvious behaviors that organizational leaders need to adopt and instill in their workforce to be successful in one of these open source efforts. I hope these articles will give you the tools and insight to help you facilitate this culture shift within your organization.
Source

DNS (Domain Name Service): A Detailed, High-level Overview | Linux.com

DNS (Domain Name Service): A Detailed, High-level Overview

How's that for a confusing title? In a recent email discussion, a colleague compared the Decentralized Identifier framework to DNS, suggesting they were similar. I cautiously tended to agree but felt I had an overly simplistic understanding of DNS at a protocol level. That email discussion led me to learn more about the deeper details of how DNS actually works – and hence, this article.

On the surface, I think most people understand DNS to be a service that you can pass a domain name to and have it resolved to an IP address (in the familiar nnn.ooo.ppp.qqq format).

domain name => nnn.ooo.ppp.qqq

Examples:

  1. If you click on Google DNS Query for microsoft.com, you'll get a list of IP addresses associated with Microsoft's corporate domain name microsoft.com.
  2. If you click on Google DNS Query for www.microsoft.com, you'll get a list of IP addresses associated with Microsoft's corporate web site www.microsoft.com.

NOTE: The Google DNS Query page returns the DNS results in JSON format. This isn’t particular or specific to DNS. It’s just how the Google DNS Query page chooses to format and display the query results.
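
If you prefer the command line, you can fetch the same JSON answer from Google's DNS-over-HTTPS endpoint with curl. This is just a sketch; the name and type query parameters are the ones that service documents:

curl 'https://dns.google.com/resolve?name=microsoft.com&type=MX'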

DNS is actually much more than a domain name to IP address mapping.  Read on…

DNS Resource Records

There is more to the DNS Service database than these simple (default) IP addresses.  The DNS database stores and is able to return many different types of service-specific records for a particular domain.  These are called DNS Resource Records; common examples include A, AAAA, CNAME, MX, NS, SOA, and TXT records (a fuller list is available at http://dns-record-viewer.online-domain-tools.com).

Most APIs only support the retrieval of one Resource Record type at a time (which may return multiple IP addresses of that type). Some APIs default to returning A records, while some APIs will only return A records. Caveat emptor.

To see a complete set of DNS Resource Records for microsoft.com, click on DNSQuery.org query results for microsoft.com and scroll down to the bottom of the results page …to see the complete response (aka authoritative result). It will look something like this:


Figure 1. DNS Resource Records for microsoft.com: Authoritative Result

NOTE: The Resource Record type is listed in the fourth column: TXT, SOA, NS, MX, A, AAAA, etc.

DNS Protocol

The most interesting new information/learning is about the DNS protocol.  It's request/response …nothing new here.  It's entirely binary …to be expected given its age and the state of technology at that time. Given how frequently DNS is used by every computer on the planet, the efficiency of a binary protocol also makes sense. The IETF published the original specifications in RFC 882 and RFC 883 in November 1983.

The new part (for me) is that an API typically doesn't “download” the entire authoritative set of DNS Resource Records all at once for a particular domain. The most common API approach is to request the list of IP addresses (or relevant data) for a particular Resource Record type for a particular domain.

The format of a sample DNS request is illustrated in the following figure:

Figure 2. Sample DNS Request [CODEPROJECT]

It’s binary. The QTYPE (purple cells on the right side) defines the type of query. In this case 0x0F is a request for an MX record; hence, this is a request for the data that describes microsoft.com’s external email server interface.

NOTE: The “relevant data” isn't always an IP address or a list of IP addresses. For example, a response may include another domain name, a subdomain name, or, in some cases, simply some unstructured text (as far as the DNS specification is concerned).

Here is a typical response for the above sample request:

Figure 3. Sample DNS Response [CODEPROJECT]

The response in turn is also binary. In this case, DNS has responded with 3 answers; that is, 3 subdomain names: mailc, maila, and mailb – each with a numerical preference (weight).
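
You can reproduce a query like this yourself with the dig utility, which speaks the same binary protocol and decodes the answer for you (shown here for the MX records of microsoft.com; your output will differ as the records change over time):

dig microsoft.com MX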

The ANY Resource Record Type

There is also a “meta” Resource Record Type called ANY that, as you might guess, requests a collection of all of the different Resource Record type records.  This is illustrated in Figure 1 above.

Source

Want to Learn Python – Starter Pack (AIXpert Blog)

Want to Learn Python – Starter Pack

I am not going to cover actual Python coding here (well, maybe a little at the end) but rather the good and bad places to start and the things to avoid.

First used at the Hollywood, Florida and Rome, Italy IBM Technical University conferences – we call them TechU

Alternatives

  • You could just search Google, YouTube, and many other places and find 10 billion hits
  • You will quickly get totally swamped with options
  • This is Nigel’s starter pack for  a quick start.
  • This is what I found very useful – You, of course, may be different !!!

What is Python good for?

  • Data Scientist job & serious mega-bucks – You can double your already large salary!
  • New technology areas like PowerAI, Artificial Intelligence, Machine Learning, Deep Learning, etc.
  • Data manipulation: fixing a file format and restructuring the data
  • Web 2.0 web pages + REST API

How to develop code & run Python

  1. Edit file and run file
  2. IDE (integrated development environment)
  • Initially, an IDE is a pain in the backside
    • As you have to learn both the IDE and Language together
    • This sets you back 1 month!
    • But good for a full time developer
  • I recommend edit-and-run, but you can also run Python in console mode to try things out.
  • Having programmed in Python for about a year, I think I am ready to try an IDE for slicker editing and debugging.
    • Probably the PyCharm Community Edition IDE for a start.

Environments

  • Windows = yuck!
  • Tablet – you can run the PyCharm IDE but get yourself a keyboard for typing.
  • OSX = if you really have to!  Sorry, never really got on with the Mac
  • Linux = this is the natural home of Python.
    • I am using a 160 CPU, 256 GB RAM, POWER8 S922LC – rather overkill but it is fast 🙂
  • I also use a Raspberry Pi – that is pretty quick too, if the data files are not more than about half a GB. The Raspberry Pi memory is limited.
  • AIX
    • it is in the AIX Open Source toolbox for downloading
    • take care with exotic modules, as you might have to use git & compile them yourself

How does Python actually run?

  • Compiled – No, not like, say, C
  • Interpreted – Yes, but highly optimised, cached and parallelised.  I have had some code that finishes so fast I assumed it crashed, but it actually worked.

Which Python version 2.7 or 3.x ?

  • 3.<latest> – at the time of writing 3.5 to 3.7 depending on how current your OS is!
  • No one is writing 2.7 any more
  • But there is lots of it in use today, though declining over time
  • Not a massive difference but best to learn Python 3

Quick Assumption: You have in the past done at least some of these?

  • C, Korn or bash shell script writing – excellent
  • C programming – brilliant
  • JavaScript programs – very good
  • Python Programming – why are you reading this???

Then you have already done the heavy lifting

Everyone can write a simple program!

A=42
print "The number is " $A

if [[ $A == 42 ]]
then
        print "eureka"
fi

Plus For loop & Functions

What is this? Well, it works fine in my Korn shell on AIX.

Mega Tip 1:  If you know any of the languages above then Python is going to be very simple

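For comparison, here is the same little example from above written in Python 3 – my own sketch, not taken from the original slides:

A = 42
print("The number is", A)

if A == 42:
    print("eureka")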

  1. Data types:
  • string,
  • integers & float,
  • tuples,
  • lists,
  • dictionary
  2. Converting between them
  3. Conditionals:  if, then, else
  4. Loops:  for, while
  5. Functions
  6. User input
  7. String manipulation
  8. File I/O: read and write
  9. Classes and objects
  10. Inheritance            <– IMHO very advanced and for class module developers
  11. Polymorphism       <– IMHO very advanced and for class module developers
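
To make those topics concrete, here is a tiny self-contained sketch (my own illustration, not from the original post) touching data types, conversion, a conditional, a loop and a function:

#!/usr/bin/python3
# a few basic Python data types
name = "njmon"                            # string
lines = 1100                              # integer
ratio = float(lines) / 40                 # conversion to float
tools = ["nmon", "njmon", "topas"]        # list
info = {"tool": name, "lines": lines}     # dictionary

# a simple function using a conditional
def size_of(n):
    if n > 1000:
        return "big"
    return "small"

# a loop over the list
for tool in tools:
    print("%s is a monitoring tool" % (tool))

print("The %s module is %s (%d lines)" % (info["tool"], size_of(info["lines"]), info["lines"]))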

Mega Tip 2: Socratica videos on YouTube

We looked at many training courses, online content and YouTube series, and these are by far the best and absolutely free.

  • Python Programming Tutorials (Computer Science)
  • Concise with dry humour and some computer jokes – see recursion
  • Mostly with worked example
  • Excellent style
  • Caltech grads
  • 33 videos (Don’t watch the two or three for Python2)
  • Most ~8 minutes
  • Total 3.5 hours
  • 15 million views
  • YouTube Socratica Playlist Videos
  • A geek told me Socratica is the female form of Socrates – I think the creators are female. They also cover maths.
  • I have watched all of these twice – about 6 months apart
  • They are short, but to consolidate what you learn, try to have a quick go yourself at each topic

Mega Tip 3:  python.org = This is the Python Mother Ship!!


  • Also, if you are stuck on the syntax of a statement or the details of some module or function, then use Google: python3 <your question spelt out in full>
  • Often you get a http://Python3.org hit, but a http://stackoverflow.com answer with worked examples is very good – just scan down the answers a bit (the first might not be the best answer or exactly what you want)

Mega Tip 4: Get yourself a project to force you to code and work through problems and new features

  • Something simple
  • Something you are interested in
  • Especially web focused
  • Python strong at
    • Website interaction
    • REST API to an online service
    • Data manipulation/transformation
    • File conversion / filtering

Mega Idiot: My first project was the REST API to a HMC to extract Temp, Watts + performance stats for SSP, server & VM

  • It was a BIG mistake
  • The bad news was the API was so badly documented it was actually impossible to use!
  • With totally unnecessarily complicated XML – using features that are very rarely used by anyone.
  • I had to interview the developers in the end to work out the hidden details of the REST API
  • In simple terms it was the “REST API from Hell!”
  • But I learnt a lot
  • In the end I wrote a Python class module to hide the horrible REST API from Python programmers – it's 1,100 lines of code.
  • It returns simple to use Python data structures
  • So simply ~40 lines of Python can extract, manipulate & save to:
    • CSV file,
    • .html with GoogleChart graphs
    • Insert into an influxDB database

Mega Tip 5: JSON format files are exactly the same as the Python native data type called Dictionaries

  • So when learning Python concentrate on Dictionaries
  • These are (very simple)   { “some label”: data, more here }
  • and the data can be
    • “Strings” in double or single quote
    • Integers like 12345 or -42
    • Floating point numbers 123.456 (note the decimal point)
  • Often we have a list of dictionaries – lists look like [ item, item, item, . . . ]

JSON file example of stats called “mydata.json”:

[               # list of samples
{               # 1st sample = Python dictionary
"datetime": "2018-04-16T00:06:32",
"cpus_active": 32,
"mhz": 3521,
"cpus_description": "PowerPC_POWER9.,
"cpu_util": {
          "user": 50.4,
          .sys": 9.0,
          "idle": 40.4,
          "wait": 0.2
          }
},              # end of 1st sample
{ . . . }       # 2nd sample = Python dictionary
]

Python Program to load the data file above –  NEW  Fixed a few Typos here, due to Cut’n’paste issues i.e. double quotes became full stops.

# Read the file as plain text

f = open("mydata.json","r")
text = f.read()
f.close()

# convert to Dictionary
import json         #module to handle JSON format
jdata = json.loads(text)
  • That json.loads() function converts a string (text) to the dictionary called jdata at tens of MBs of JSON a second.
  • Now let's extract certain fields using a natural Python syntax
# get the Mhz from the first record (numbers zero)

print("MHz=%d"%(jdata[0]["mhz"]))

# Loop through all the records pulling out the MHz numbers and the CPU utilisation user-mode percent (it's in a sub-dictionary called cpu_util)

for sample in jdata:
    print("MHz=%d"%(sample["mhz"]))
    print("User percent=%d"%(sample["cpu_util"]["user"]))

Latest project using Python is njmon for AIX and Linux – the new turbo nmon. 

  • The J is for JSON and we use Python to make data handling very easy

  • For AIX it uses the libperfstat C library – if you want details see: man libperfstat on AIX or vi /usr/include/libperfstat.h
    • Or find the worked example C code in KnowledgeCenter
  • Status: quirky but usable for an expert C programmer
  • Vast quantity of perf stats, running into 1,000 stats for AIX and VIOS (if you have many disks or networks, or ask for process stats, then that grows rapidly)
  • And for a bonus libperfstat gives us the current CPU MHz
  • Similar for Linux
  • njmon is written in C, using C functions into the UNIX kernel, and generates JSON data. Then we use Python to accept the data and inject it live into a time-series database for graphing in real time

Stand by for something strange

  • Well-known programming problem = swapping the values of two variables a and b. The classic solution uses a temporary variable.
temp = a
a = b
b = temp
  • But can you do that without the temp variable?
  • Not in C – I have known this for 40 years!!
  • Python answer
a,b = b,a
  • It is using a native data structure called a tuple.  As it's a common programming task, they built it into the language.
  • Warning weirdness next:
  • How about this?
  • a = a + b
    b = a - b
    a = a - b
  • Wow! I thought it was impossible!

Next a tiny Web grabbing Python example

  • Lots of websites and web services keep stats that you can download with your browser.
  • I have used sourceforge.net (used below) and youtube.com for examples.
  • They are most often in JSON, and Python has a requests module that makes “talking” to websites very simple
  • As an example bung this in your browser ( NOT Internet Explorer )
  • https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-31&os_by_country=false
  • And you should get a load of JSON data back, which Firefox and Chrome will organise and make pretty.
  • Using Python, the requests module and one of my own modules for graphing, we can draw the downloads from the nmon project on SourceForge over time
  • We also need to change the date format, which shows off some of Python's simple data manipulation
  • ,[2018-09-17 00:00:00],2
  • to
  • ,[‘Date(2018,9,17,00,00,00)’, 2]
  • Below is the source code – with many extra print lines and comments, so if you run it you will see the data structures.
  •  NEW  Changed the code here to NOT rely on my nchart Python module
  • Green bits are debug but useful if you run it to see the data
  • Red bits are the webpage preamble and postamble to set up the Google Charts library graph.
#!/usr/bin/python3
#--------------------------------- Get the data using REST API from sourceforge.net
import requests
URL='https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-04&os_by_country=false'
ret = requests.get(URL)
print(ret.status_code)
#print("return code was %d"%(ret.status_code))
#print("characters returned %d"%(len(ret.text)))
#---------------------------------- Create dictionary
import json
jdata = json.loads(ret.text)
#print(jdata)
months=0
count=0
for row in jdata['downloads']:
#    print(row)
    months=months+1
    count=count+row[1]
print("months=%d"%(months))
print("count =%d"%(count))
#---------------------------------- Create web page+graph using Googlechart library
file = open("downloads.html","w")
file.write('<html>\n'
'  <head>\n'
'    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>\n'
'    <script type="text/javascript">\n'
'      google.charts.load("current", {"packages":["corechart"]});\n'
'      google.charts.setOnLoadCallback(drawChart);\n'
'      function drawChart() {\n'
'        var data = google.visualization.arrayToDataTable([\n'
'[{type: "datetime", label: "Date"},"Files"]\n' )

for row in jdata['downloads']:
    str=row[0]
    str = str.replace("-",",")
    str = str.replace(" ",",")
    str = str.replace(":",",")
    file.write(",['Date(%s)',%d]\n"%(str,row[1]))

file.write('        ]);\n'
'        var options = {title: "nmon Downloads", vAxis: {minValue: 0}};\n'
'        var chart = new google.visualization.AreaChart(document.getElementById("chart_div"));\n'
'        chart.draw(data, options);\n'
'      }\n'
'    </script>\n'
'  </head>\n'
'  <body>\n'
'    <div id="chart_div" style="width: 100%; height: 500px;"></div>\n'
'  </body>\n'
'</html>\n')
file.close()
  • The output – skipping the dump of the JSON and the 105 rows of monthly stats – looks like this
['2018-05-01 00:00:00', 14153]
['2018-06-01 00:00:00', 12794]
['2018-07-01 00:00:00', 12422]
['2018-08-01 00:00:00', 13127]
['2018-09-01 00:00:00', 11872]
['2018-10-01 00:00:00', 13628]
['2018-11-01 00:00:00', 12805]
['2018-12-01 00:00:00', 15611]
months=114
count =686634

  • So that was captured in January 2019; so far there have been 686,634 downloads of nmon and its tools, and the generated graph of monthly downloads looks like this:
  •  NEW  The generated downloads.html file has the following contents – note I removed a few 100 lines of data in the middle. Colours are from the vim editor – see later comments.
  • [Figure: the generated downloads.html file contents as shown in the vim editor]
  •  NEW  [Figure: a simpler graph of the monthly nmon downloads]

C Programmers be aware:

I keep making the same mistakes in writing Python.

  1. On Linux, with the right export TERM=linux setting and using vi (actually vim), you get syntax highlighting, which reduces errors a lot – go for a white background or comments in dark blue are unreadable. See the picture below – I have not done that colouring – it is all vim.
  2. vim also helps with auto indentation.
  3. If, for and while statements have a “:” at the end of the line.
  4. In Python it is print and in C it is printf – I had to teach my fingers to miss out the final “f”
  5. Those maddening 4-space indentations have to be exactly right!
  6. Anything I missed?

[Figure: Python source code in vim with syntax highlighting]

– – – The End – – –

Source

Deploy Citrix Virtual Apps and Desktops Service on AWS with New Quick Start

Posted On: Jan 7, 2019

This Quick Start automatically deploys Citrix Virtual Apps and Desktops on the Amazon Web Services (AWS) Cloud in about 90 minutes. The deployment includes a hosted shared desktop and two sample published applications.

Using the Citrix Virtual Apps and Desktops service, you can deliver secure virtual apps and desktops to any device, and leave most of the product installation, setup, configuration, upgrades, and monitoring to Citrix. You maintain complete control over applications, policies, and users while delivering a high-quality user experience.

The Quick Start is intended for users who want to set up a trial deployment or want to accelerate a production implementation by automating the foundation setup.

To get started:

For additional AWS Quick Start reference deployments, see our complete catalog.

Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices.

This Quick Start was built in collaboration with Citrix Systems, Inc., an AWS Partner Network (APN) Partner.

Source

Linux Today – Using the SSH Config File

If you are regularly connecting to multiple remote systems over SSH, you'll find that remembering all of the remote IP addresses, different usernames, non-standard ports and various command line options is difficult, if not impossible.

One option would be to create a bash alias for each remote server connection. However, there is another, much better and simpler solution to this problem. OpenSSH allows you to set up a per-user configuration file where you can store different SSH options for each remote machine you connect to.

This guide covers the basics of the SSH client configuration file and explains some of the most common configuration options.

We are assuming that you are using a Linux or a macOS system with OpenSSH client installed.

The OpenSSH client-side configuration file is named config, and it is stored in the .ssh directory under the user's home directory. The ~/.ssh directory is automatically created when the user runs the ssh command for the first time.

If you have never used the ssh command, you'll first need to create the directory using:

mkdir -p ~/.ssh && chmod 700 ~/.ssh

By default the SSH configuration file may not exist so you may need to create it using the touch command:

touch ~/.ssh/config && chmod 600 ~/.ssh/config

This file must be readable and writable only by the user, and not accessible by others:

chmod 600 ~/.ssh/config

The SSH Config File takes the following structure:

Host hostname1
    SSH_OPTION value
    SSH_OPTION value

Host hostname2
    SSH_OPTION value

Host *
    SSH_OPTION value

The contents of the SSH client config file are organized into stanzas (sections). Each stanza starts with the Host directive and contains the specific SSH options that are used when establishing a connection with the remote SSH server.

Indentation is not required, but is recommended since it will make the file easier to read.

The Host directive can contain one pattern or a whitespace-separated list of patterns. Each pattern can contain zero or more non-whitespace characters or one of the following pattern specifiers:

  • * – matches zero or more characters. For example, Host * will match all hosts, while 192.168.0.* will match all hosts in the 192.168.0.0/24 subnet.
  • ? – matches exactly one character. The pattern Host 10.10.0.? will match all hosts in the 10.10.0.[0-9] range.
  • ! – at the start of a pattern negates its match. For example, Host 10.10.0.* !10.10.0.5 will match any host in the 10.10.0.0/24 subnet except 10.10.0.5.

The SSH client reads the configuration file stanza by stanza, and if more than one pattern matches, the options from the first matching stanza take precedence. Therefore, more host-specific declarations should be given at the beginning of the file, and more general overrides at the end of the file.

You can find a full list of available ssh options by typing man ssh_config in your terminal or by visiting the ssh_config man page.

The SSH config file is also read by other programs such as scp, sftp, and rsync.

Now that we've covered the basics of the SSH configuration file, let's look at the following example.

Usually, when you connect to a remote server via SSH you would specify the remote user name, hostname, and port. For example, to connect as a user named john to a host called dev.example.com on port 2322 from the command line, you would type:

ssh john@dev.example.com -p 2322

If you would like to connect to the server using the same options as provided in the command above simply by typing ssh dev, you'll need to add the following lines to your ~/.ssh/config file:

~/.ssh/config
Host dev
    HostName dev.example.com
    User john
    Port 2322

Now if you type:

ssh dev

the ssh client will read the configuration file and use the connection details that are specified for the dev host.
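
Since the config file is also read by scp and sftp, the same shortcut works for file transfers. For example (the file name and destination path here are just an illustration):

scp backup.tar.gz dev:/tmp/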

This example gives more detailed information about the host patterns and option precedence.

Let’s take the following example file:

Host targaryen
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key

Host tyrell
    HostName 192.168.10.20

Host martell
    HostName 192.168.10.50

Host *ell
    user oberyn

Host * !martell
    LogLevel INFO

Host *
    User root
    Compression yes
  • If you type ssh targaryen, the ssh client will read the file and apply the options from the first match, which is Host targaryen. Then it will check the next stanzas one by one for a matching pattern. The next matching one is Host * !martell (meaning all hosts except martell), and it will apply the connection options from this stanza. Finally, the last definition Host * also matches, but the ssh client will take only the Compression option because the User option is already defined in the Host targaryen stanza. The full list of options used in this case is as follows:
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key
    LogLevel INFO
    Compression yes
  • When running ssh tyrell the matching host patterns are: Host tyrell, Host *ell, Host * !martell and Host *. The options used in this case are:
    HostName 192.168.10.20
    User oberyn
    LogLevel INFO
    Compression yes
  • If you run ssh martell the matching host patterns are: Host martell, Host *ell and Host *. The options used in this case are:
    HostName 192.168.10.50
    User oberyn
    Compression yes
  • For all other connections, the options specified in the Host * !martell and Host * sections will be used.

The ssh client receives its configuration in the following precedence order:

  1. Options specified from the command line
  2. Options defined in the ~/.ssh/config
  3. Options defined in the /etc/ssh/ssh_config

If you want to override a single option you can specify it on the command line. For example if you have the following definition:

Host dev
    HostName dev.example.com
    User john
    Port 2322

and you want to use all other options but connect as user root instead of john, simply specify the user on the command line:

ssh -o "User=root" dev

The -F (configfile) switch allows you to specify an alternative per-user configuration file.

If you want your ssh client to ignore all of the options specified in your ssh configuration file, you can use:

ssh -F /dev/null user@example.com

You have learned how to configure your user ssh config file. You may also want to set up SSH key-based authentication and connect to your Linux servers without entering a password.
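
As a quick sketch of that next step (reusing the dev host alias defined earlier in this guide), you would generate a key pair and copy the public key to the remote host:

ssh-keygen -t ed25519
ssh-copy-id dev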

Source
