How to Install NextCloud 15 on Ubuntu 18.04

Install NextCloud on Ubuntu

NextCloud is a free and open-source self-hosted file sharing and communication platform built using PHP. It is a great alternative to popular proprietary services such as Dropbox and Google Drive, as well as to ownCloud, the project it was forked from. With NextCloud, you can easily store your data on your Ubuntu 18.04 VPS, create and manage your contacts, calendars, to-do lists, and much more. In this tutorial, we will install NextCloud version 15 on an Ubuntu 18.04 VPS – version 15 is a major release that comes with a lot of new features and improvements.

Prerequisites:

– An Ubuntu 18.04 VPS
– A system user with root privileges
– MySQL or MariaDB database server version 5.5 or newer with InnoDB storage engine.
– Apache 2.4 with mod_php enabled
– PHP version 7.0 or newer

Log in and update the server:

Log in to your Ubuntu 18.04 VPS via SSH as user root:

ssh root@IP_Address -p Port_number

Don’t forget to replace ‘IP_Address’ and ‘Port_number’ with the actual IP address of your server and the SSH service port.

Run the following commands to make sure that all installed packages on your Ubuntu 18.04 VPS are updated to the latest available version:

apt update && apt upgrade

Install Apache and PHP:

We need to install the Apache web server in order to serve the NextCloud files. It can be done easily by using the following command:

apt -y install apache2

Once the web server is installed, enable it to automatically start after a server restart:

systemctl enable apache2

Verify that the web server is up and running on your server:

systemctl status apache2

This is what the output should look like:

apache2.service - The Apache HTTP Server
   Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/apache2.service.d
           └─apache2-systemd.conf
   Active: active (running) since Thu 2018-12-27 05:13:26 CST; 12min ago

Since NextCloud is a PHP-based application, our next step is to install PHP and some PHP extensions required by NextCloud:

apt -y install php php-cli php-common php-curl php-xml php-gd php-mbstring php-zip php-mysql

Restart the Apache web server to load the PHP modules:

systemctl restart apache2

Now check the PHP version installed on your server:

php -v
PHP 7.2.10-0ubuntu0.18.04.1 (cli) (built: Sep 13 2018 13:45:02) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies

Install MariaDB and create a database:

NextCloud needs an SQL database to store information. For this purpose, we will install the MariaDB database server by executing the following command:

apt -y install mariadb-server

Just like with Apache, enable MariaDB to automatically start after server reboot:

systemctl enable mariadb

Next, run the ‘mysql_secure_installation’ post-installation script to set a password for the MariaDB root user and to further improve the security of your MariaDB server. Once all steps are completed, you can go ahead and log in to the MariaDB server as the root user. We will then create a new user and database – both of which are necessary for installing NextCloud.

mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nextcloud;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud_user'@'localhost' IDENTIFIED BY 'PASSWORD';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit;

Don’t forget to replace ‘PASSWORD’ with a strong password.

Download and install NextCloud:

Go to NextCloud’s official website and download the latest stable release of the application. At the time of this article being published, the latest version of NextCloud is version 15.0.0.

wget https://download.nextcloud.com/server/releases/nextcloud-15.0.0.zip

Once the zip archive is downloaded, unpack it to the document root directory on your server:

unzip nextcloud-15.0.0.zip -d /var/www/html/

All files will be stored under a directory named ‘nextcloud’.

Remove the zip archive and change the ownership of the NextCloud files:

rm -f nextcloud-15.0.0.zip
chown -R www-data:www-data /var/www/html/nextcloud

That was the last step of configuring your server and installing NextCloud through the command line. Now, you can open your preferred web browser and access http://Your_IP/nextcloud to continue with the setup. Make sure to replace “Your_IP” with your server’s IP address or domain name. If everything is properly configured, you will be greeted by the NextCloud setup screen:

Create an administrative account, set the data folder and enter the MariaDB details for the user and database we created earlier in this tutorial.

That’s all – if you followed the steps in the tutorial, you will have successfully installed NextCloud version 15 on your Ubuntu 18.04 VPS. For more details about its configuration and usage, please check their official documentation.


Of course, you don’t need to install NextCloud 15 on Ubuntu 18.04 yourself if you use one of our NextCloud Hosting services, in which case you can simply ask our expert Linux admins to install and set this up for you. They are available 24×7 and will take care of your request immediately.

PS. If you liked this post on How To Install NextCloud 15 on Ubuntu 18.04, please share it with your friends on the social networks by using the buttons on the left, or simply leave a reply below. Thanks.

Source

Linux Today – Back to Basics: Sort and Uniq

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz

sort can operate on STDIN redirection, on the input from a pipe, or, in the case of a file, you also can just specify the file on the command line. So, the three following commands all accomplish the same thing:


cat test | sort
sort < test
sort test

And the output that you get from all of these commands is:


Bar
Baz
Foo

Sorting Numerical Output

Now, let’s complicate the file by adding three more lines:


Foo
Bar
Baz
1. ZZZ
2. YYY
11. XXX

If you run one of the above sort commands again, this time, you’ll see different output:


11. XXX
1. ZZZ
2. YYY
Bar
Baz
Foo

This is likely not the output you wanted, but it points out an important fact about sort. By default, it sorts alphabetically, not numerically. This means that a line that starts with “11.” is sorted above a line that starts with “1.”, and all of the lines that start with numbers are sorted above lines that start with letters.

To sort numerically, pass sort the -n option:


sort -n test

Bar
Baz
Foo
1. ZZZ
2. YYY
11. XXX

Find the Largest Directories on a Filesystem

Numerical sorting comes in handy for a lot of command-line output—in particular, when your command contains a tally of some kind, and you want to see the largest or smallest in the tally. For instance, if you want to find out what files are using the most space in a particular directory and you want to dig down recursively, you would run a command like this:


du -ckx

This command dives recursively into the current directory and doesn’t traverse any other mountpoints inside that directory. It tallies the file sizes and then outputs each directory in the order it found them, preceded by the size of the files underneath it in kilobytes. Of course, if you’re running such a command, it’s probably because you want to know which directory is using the most space, and this is where sort comes in:


du -ckx | sort -n

Now you’ll get a list of all of the directories underneath the current directory, but this time sorted by file size. If you want to get even fancier, pipe its output to the tail command to see the top ten. On the other hand, if you wanted the largest directories to be at the top of the output, not the bottom, you would add the -r option, which tells sort to reverse the order. So to get the top ten (well, top eight—the first line is the total, and the next line is the size of the current directory):


du -ckx | sort -rn | head

This works, but often people using the du command want to see sizes in more readable output than kilobytes. The du command offers the -h argument that provides “human-readable” output. So, you’ll see output like 9.6G instead of 10024764 with the -k option. When you pipe that human-readable output to sort though, you won’t get the results you expect by default, as it will sort 9.6G above 9.6K, which would be above 9.6M.

The sort command has a -h option of its own, and it acts like -n, but it’s able to parse standard human-readable numbers and sort them accordingly. So, to see the top ten largest directories in your current directory with human-readable output, you would type this:
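The difference is easy to see with a tiny synthetic example (three made-up sizes, not from the du output above):

```shell
# Plain sort orders these alphabetically, so 9.6G lands above 9.6K
printf '9.6G\n9.6K\n9.6M\n' | sort

# sort -h parses the human-readable suffixes, so 9.6G ends up largest
printf '9.6G\n9.6K\n9.6M\n' | sort -h
```

With -h, the output ends with 9.6G, which is what you actually want when hunting for the biggest directories.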


du -chx | sort -rh | head

Removing Duplicates

The sort command isn’t limited to sorting one file. You might pipe multiple files into it or list multiple files as arguments on the command line, and it will combine them all and sort them. Unfortunately though, if those files contain some of the same information, you will end up with duplicates in the sorted output.

To remove duplicates, you need the uniq command, which by default removes any duplicate lines that are adjacent to each other from its input and outputs the results. So, let’s say you had two files that were different lists of names:


cat namelist1.txt
Jones, Bob
Smith, Mary
Babbage, Walter

cat namelist2.txt
Jones, Bob
Jones, Shawn
Smith, Cathy

You could remove the duplicates by piping to uniq:


sort namelist1.txt namelist2.txt | uniq
Babbage, Walter
Jones, Bob
Jones, Shawn
Smith, Cathy
Smith, Mary
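It’s worth stressing why the sort step comes first: uniq only collapses duplicate lines that are adjacent. A quick sketch with a made-up three-line input:

```shell
# Without sorting, the two "Jones, Bob" lines are not adjacent,
# so uniq leaves both in place
printf 'Jones, Bob\nSmith, Mary\nJones, Bob\n' | uniq

# Sorting first makes the duplicates adjacent, so uniq can drop one
printf 'Jones, Bob\nSmith, Mary\nJones, Bob\n' | sort | uniq
```

The first pipeline still prints three lines; the second prints only two.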

The uniq command has more tricks up its sleeve than this. It also can output only the duplicated lines, so you can find duplicates in a set of files quickly by adding the -d option:


sort namelist1.txt namelist2.txt | uniq -d
Jones, Bob

You even can have uniq provide a tally of how many times it has found each entry with the -c option:


sort namelist1.txt namelist2.txt | uniq -c
1 Babbage, Walter
2 Jones, Bob
1 Jones, Shawn
1 Smith, Cathy
1 Smith, Mary

As you can see, “Jones, Bob” occurred the most times, but if you had a lot of lines, this sort of tally might be less useful for you, as you’d like the most duplicates to bubble up to the top. Fortunately, you have the sort command:


sort namelist1.txt namelist2.txt | uniq -c | sort -nr
2 Jones, Bob
1 Smith, Mary
1 Smith, Cathy
1 Jones, Shawn
1 Babbage, Walter

Conclusion

I hope these cases of using sort and uniq with realistic examples show you how powerful these simple command-line tools are. Half the secret with these foundational command-line tools is to discover (and remember) they exist so that they’ll be at your command the next time you run into a problem they can solve.

Source

Linus Torvalds Welcomes 2019 with Linux 5.x » Linux Magazine

Better support for GPUs and CPUs.

Linus Torvalds has announced the release of Linux 5.0-rc1. The kernel was supposed to be 4.21, but he decided to move to the 5.x series. Torvalds has made it clear that the numbering of the kernel doesn’t make much sense. So don’t get too excited about this release.

Torvalds explained in the LKML (Linux Kernel Mailing List), “The numbering change is not indicative of anything special. If you want to have an official reason, it’s that I ran out of fingers and numerology this time (we’re _about_ 6.5M objects in the git repo), and there isn’t any major particular feature that made for the release numbering either,” he said.

The release brings CPU and GPU improvements. In addition to support for AMD’s FreeSync display, it also comes with support for Raspberry Pi Touchscreen.

Talking about the ‘content’ of the kernel Torvalds wrote, “The stats look fairly normal. About 50% is drivers, 20% is architecture updates, 10% is tooling, and the remaining 20% is all over (documentation, networking, filesystems, header file updates, core kernel code..).”

Source

GitHub Offers Free Private Repositories » Linux Magazine

Popular source code collaboration site makes a major change to feature set.

GitHub has announced that it is now taking on players like GitLab by offering free private repositories. Anyone could always set up a free repository on GitHub; the condition was that the code had to be public, which meant that projects and organizations could not set up private repositories. If they wanted a private repository, they had to pay.

Now anyone can create a private repository for free. The only caveat is that there can be at most three collaborators to the project, which means big organizations can’t exploit the free service to manage their mega projects.

A private repository lets developer communities work on the code base internally, away from the public. GitHub competitors like GitLab already offer free private repositories.

Source

Industry-Scale Collaboration at The Linux Foundation

 

Learn about the principles required to achieve a successful industry pivot to open source.

Linux and open source have changed the computer industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers that work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities that I work in, I’ve come to recognize the following as the common thread.

An Industry Pivot

Software has become the most important component in many industries, and it is needed in very large quantities. When an entire industry needs to make a technology “pivot,” they often do as much of that as possible in software. For example, the telecommunications industry must make such a pivot in order to support 5G, the next generation of mobile phone network. Not only will the bandwidth and throughput be increased with 5G, but an entirely new set of services will be enabled, including autonomous cars, billions of Internet-connected sensors and other devices (aka IoT), etc. To do that, telecom operators need to entirely redo their networks distributing millions of compute and storage instances very, very close to those devices/users.

Given the drastically changing usage of the network, operators need to be able to deploy, move and/or tear down services near-instantaneously, running them on those far-flung compute resources and routing the network traffic to and through those service applications in a fully automated fashion. That’s a tremendous amount of software. In the “old” model of complete competition, each vendor would build their solution to this customer need from the ground up and sell it to their telecom operator customers. It would take forever, cost a huge amount of money, and the customers would be nearly assured that one vendor’s system wouldn’t interoperate with another vendor’s solution. The market demands solutions that don’t take that long or cost that much and, if they don’t work together, their value is much less for the customer.

So, instead, all the members of the telecom industry, both vendors and customers are collaborating to build a large portion of the foundational platform software together, just once. Then, each vendor and operator will take that foundation of code and add whatever functionality they feel is differentiating for their customers, test it, harden it, and turn it into a full solution. This way, everyone gets to a solution much more quickly and with much less expense than would otherwise be possible. The mutual benefit of this is obvious. But how can they work together? How can they ensure that each participant in this community can get out of it what they need to be successful? These companies have never worked together before. Worse yet, they are fierce lifelong competitors with the only prior goal of putting the other out of business.

A Level Playing Field

This is what my team does at The Linux Foundation. We create and maintain that level playing field. We are both referee and janitor. We teach what we have learned from the long-term success of the Linux project, among others. Stay tuned for more blog posts detailing those principles and my experiences living those principles both as a participant in open source projects and as the referee.

So, bringing dozens of very large, fierce competitors, both vendors and customers, together and seeding the development effort with several million lines of code that usually only come from one or two companies is the task at hand. That’s never been done before by anyone. The set of projects under the Linux Foundation Networking umbrella is one large experiment in corporate collaborative development. Take ONAP as an example; its successful outcome is not assured in any way. Don’t get me wrong. The project has had an excellent start with three releases under its belt, and in general, things are going very well. However, there is much work to do and many ways for this community, and the organizations behind it, to become more efficient, and get to our end goal faster. Again, such a huge industry pivot has not been done as an open source collaboration before. To get there, we are applying the principles of fairness, technical excellence, and transparency that are the cornerstone of truly collaborative open source development ecosystems. As such, I am optimistic that we will succeed.

This industry-wide technology pivot is not isolated to the telecom sector. We are seeing it in many others. My goal in writing these articles on open source collaborative development principles, best practices, and experiences is to better explain to those new to this model how it works, why these principles are in place, and what to expect when things are working well, and when they are not. There are a variety of non-obvious behaviors that organizational leaders need to adopt and instill in their workforce to be successful in one of these open source efforts. I hope these articles will give you the tools and insight to help you facilitate this culture shift within your organization.
Source

DNS (Domain Name Service): A Detailed, High-level Overview | Linux.com

DNS (Domain Name Service): A Detailed, High-level Overview

How’s that for a confusing title?  In a recent email discussion, a colleague compared the Decentralized Identifier framework to DNS …suggesting they were similar.  I cautiously tended to agree but felt I had an overly simplistic understanding of DNS at a protocol level.  That email discussion led me to learn more about the deeper details of how DNS actually works – and hence, this article.

On the surface, I think most people understand DNS to be a service that you can pass a domain name to and have it resolved to an IP address (in the familiar nnn.ooo.ppp.qqq format).

domain name => nnn.ooo.ppp.qqq

Examples:

  1. If you click on Google DNS Query for microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate domain name microsoft.com.
  2. If you click on Google DNS Query for www.microsoft.com, you’ll get a list of IP addresses associated with Microsoft’s corporate web site www.microsoft.com.

NOTE: The Google DNS Query page returns the DNS results in JSON format. This isn’t particular or specific to DNS. It’s just how the Google DNS Query page chooses to format and display the query results.

DNS is actually much more than a domain name to IP address mapping.  Read on…

DNS Resource Records

There is more to the DNS Service database than these simple (default) IP addresses.  The DNS database stores and is able to return many different types of service-specific IP addresses for a particular domain.  These are called DNS Resource Records. You can find a partial list at http://dns-record-viewer.online-domain-tools.com.

Most APIs only support the retrieval of one Resource Record type at a time (which may return multiple IP addresses of that type). Some APIs default to returning A records, while others will only return A records. Caveat emptor.

To see a complete set of DNS Resource Records for microsoft.com, click on DNSQuery.org query results for microsoft.com and scroll down to the bottom of the results page …to see the complete response (aka authoritative result). It will look something like this:


Figure 1. DNS Resource Records for microsoft.com: Authoritative Result

NOTE: The Resource Record type is listed in the fourth column: TXT, SOA, NS, MX, A, AAAA, etc.

DNS Protocol

The most interesting new learning for me was about the DNS protocol itself.  It’s request/response …nothing new here.  It’s entirely binary …to be expected given its age and the state of technology at the time. Given how frequently DNS is used by every computer on the planet, the efficiency of a binary protocol also makes sense. The IETF published the original specifications in RFC 882 and RFC 883 in November 1983.

The new part (for me) is that an API typically doesn’t “download” the entire authoritative set of DNS Resource Records all at once for a particular domain. Instead, the most common approach is to request the list of IP addresses (or relevant data) for a particular Resource Record type for a particular domain.

The format of a sample DNS request is illustrated in the following figure:

Figure 2. Sample DNS Request [CODEPROJECT]

It’s binary. The QTYPE (purple cells on the right side) defines the type of query. In this case 0x0F is a request for an MX record; hence, this is a request for the data that describes microsoft.com’s external email server interface.
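To make that binary layout concrete, here is a short Python sketch (my own illustration, not from the article) that hand-builds such a request: a 12-byte header, then the QNAME as length-prefixed labels, then QTYPE 0x0F (MX) and QCLASS 0x0001 (IN). The query ID 0x1234 is an arbitrary choice.

```python
import struct

def build_dns_query(domain, qtype=0x0F, query_id=0x1234):
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1,
    # ANCOUNT=0, NSCOUNT=0, ARCOUNT=0 -- all big-endian 16-bit fields
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in domain.split(".")) + b"\x00"
    # QTYPE (0x0F = MX) and QCLASS (0x01 = IN)
    question = qname + struct.pack(">HH", qtype, 0x0001)
    return header + question

packet = build_dns_query("microsoft.com")
# 12-byte header + 15-byte QNAME + 4 bytes of QTYPE/QCLASS = 31 bytes
```

Sending that packet over UDP to a resolver on port 53 would return a response in the matching binary format.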

NOTE: The “relevant data” isn’t always an IP address or a list of IP addresses. For example, response may include another domain name, subdomain name, or, in some cases, simply some unstructured text (as far as the DNS specification is concerned).

Here is a typical response for the above sample request:

Figure 3. Sample DNS Response [CODEPROJECT]

The response in turn is also binary. In this case, DNS has responded with 3 answers; that is, 3 subdomain names: mailc, maila, and mailb – each with a numerical preference (weight).

The ANY Resource Record Type

There is also a “meta” Resource Record Type called ANY that, as you might guess, requests a collection of all of the different Resource Record type records.  This is illustrated in Figure 1 above.

Source

Want to Learn Python – Starter Pack (AIXpert Blog)

Want to Learn Python – Starter Pack

I am not going to cover actual Python coding here (well, maybe a little at the end) but the good and bad places to start and things to avoid.

First used at the Hollywood, Florida and Rome, Italy IBM Technical University conferences – we call them TechU

Alternatives

  • You could just search Google, YouTube, and many other places and find 10 billion hits
  • You will quickly get totally swamped with options
  • This is Nigel’s starter pack for a quick start.
  • This is what I found very useful – You, of course, may be different !!!

What is Python good for?

  • Data Scientist job & serious mega-bucks – You can double your already large salary!
  • New technology areas like PowerAI, Artificial Intelligence, Machine Learning, Deep Learning, etc.
  • Data manipulation – fixing a file format and restructuring the data
  • Web 2.0 web pages + REST API

How to develop code & run Python

  1. Edit file and run file
  2. IDE (integrated development environment)
  • Initially an IDE is a pain in the backside
    • As you have to learn both the IDE and Language together
    • This sets you back 1 month!
    • But good for a full time developer
  • I recommend: edit and run, but you can also run Python in console mode to try things out.
  • Having programmed in Python for about a year, I think I am ready to try an IDE for slicker editing and debugging.
    • Probably the PyCharm Community Edition IDE for a start.

Environments

  • Windows = yuck!
  • Tablet – you can run the PyCharm IDE but get yourself a keyboard for typing.
  • OSX = if you really have to!  Sorry, never really got on with the Mac
  • Linux = this is the natural home of Python.
    • I am using a 160 CPU, 256 GB RAM, POWER8 S922LC – rather overkill but it is fast 🙂
    • I also use a Raspberry Pi – that is pretty quick too if the data files are no more than about half a GB. The Raspberry Pi memory is limited.
  • AIX
    • it is in the AIX Open Source toolbox for downloading
    • take care with exotic modules as you might have to use git & compile them yourself

How does Python actually run?

  • Compiled – No, not like, say, C
  • Interpreted – Yes, but highly optimised, cached and parallelised.  I have had some code that finished so fast I assumed it had crashed, but it had actually worked.

Which Python version 2.7 or 3.x ?

  • 3.<latest> – at the time of writing 3.5 to 3.7, depending on how current your OS is!
  • No one is writing 2.7 any more
  • But there is lots of it in use today, though declining over time
  • Not a massive difference but best to learn Python 3

Quick Assumption: You have in the past done at least some of these?

  • C, Korn or bash shell script writing – excellent
  • C programming – brilliant
  • JavaScript programs – very good
  • Python Programming – why are you reading this???

Then you have already done the heavy lifting

Everyone can write a simple program!

A=42
print "The number is " $A

if [[ $A == 42 ]]
then
        print "eureka"
fi

Plus For loop & Functions

What is this? Well, it works OK in my Korn shell on AIX.

Mega Tip 1:  If you know any of the languages above then Python is going to be very simple
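To back that tip up, here is my Python take on the little Korn shell program above – near enough a line-for-line translation:

```python
a = 42
print("The number is", a)   # print is a function in Python 3

if a == 42:                 # no [[ ]] or fi needed, just a colon and indentation
    print("eureka")
```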


  1. Data types:
  • string,
  • integers & float,
  • tuples,
  • lists,
  • dictionary
  2. Converting between them
  3. Conditionals:  if, then, else
  4. Loops:  for, while
  5. Functions
  6. User input
  7. String manipulation
  8. File I/O: read and write
  9. Classes and objects
  10. Inheritance            <– IMHO very advanced and for class module developers
  11. Polymorphism       <– IMHO very advanced and for class module developers
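As a quick taste of the first few topics on that list (illustrative snippets of my own, not part of the original slides):

```python
s = "42"             # string
n = int(s)           # convert string -> integer
f = float(n)         # integer -> float
t = (1, 2, 3)        # tuple (immutable)
l = [1, 2, 3]        # list (mutable)
d = {"answer": n}    # dictionary

l.append(4)          # lists can grow; tuples cannot
assert n == 42 and f == 42.0 and t[0] == 1 and l[-1] == 4 and d["answer"] == 42
```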

Mega Tip 2: Socratica videos on YouTube

I looked at many training courses, online content and YouTube series, and these are by far the best – and absolutely free.

  • Python Programming Tutorials (Computer Science)
  • Concise with dry humour and some computer jokes – see recursion
  • Mostly with worked example
  • Excellent style
  • Caltech grads
  • 33 videos (Don’t watch the two or three for Python2)
  • Most ~8 minutes
  • Total 3.5 hours
  • 15 million views
  • YouTube Socratica Playlist Videos
  • A geek told me Socratica is the female form of Socrates – I think the creators are female. They also cover maths.
  • I have watched all of these twice – about 6 months apart
  • They are short, but to consolidate what you learn, try to have a quick go yourself at each topic

Mega Tip 3:  python.org = This is the Python Mother Ship!!


  • Also, if you are stuck for the syntax of a statement or the details of some module or function, then use Google: python3 <your question spelt out in full>
  • Often you get http://Python3.org hits, but a http://stackoverflow.com answer with worked examples is very good – scan down the answers a bit (the first might not be the best answer or exactly what you want)

Mega Tip 4: Get yourself a project to force you to code and work through problems and new features

  • Something simple
  • Something you are interested in
  • Especially web-focused
  • Python is strong at
    • Website interaction
    • REST API to an online service
    • Data manipulation/transformation
    • File conversion / filtering

Mega Idiot: My first project was the REST API to an HMC to extract Temp, Watts + performance stats for SSP, server & VM

  • It was a BIG mistake
  • The bad news was the API was so badly documented it was actually impossible to use!
  • With totally unnecessarily complicated XML – using features that are very rarely used by anyone.
  • I had to interview the developers in the end to work out the hidden details of the REST API
  • In simple terms it was the “REST API from Hell!”
  • But I learnt a lot
  • In the end I wrote a Python class module to hide the horrible REST API from Python programmers – it’s 1,100 lines of code.
  • It returns simple-to-use Python data structures
  • So it takes simply ~40 lines of Python to extract, manipulate & save in:
    • CSV file,
    • .html with GoogleChart graphs
    • Insert into an influxDB database

Mega Tip 5: JSON format files are exactly the same as the Python native data type called Dictionaries

  • So when learning Python concentrate on Dictionaries
  • These are (very simple)   { “some label”: data, more here }
  • and the data can be
    • “Strings” in double or single quote
    • Integers like 12345 or -42
    • Floating point numbers 123.456 (note the decimal point)
  • Often we have a list of dictionaries – lists look like [ item, item, item, . . . ]

JSON file example of stats called “mydata.json”:

[               # list of samples
{               # 1st sample = Python dictionary
"datetime": "2018-04-16T00:06:32",
"cpus_active": 32,
"mhz": 3521,
"cpus_description": "PowerPC_POWER9",
"cpu_util": {
          "user": 50.4,
          "sys": 9.0,
          "idle": 40.4,
          "wait": 0.2
          }
},              # end of 1st sample
{ . . . }       # 2nd sample = Python dictionary
]

Python Program to load the data file above –  NEW  Fixed a few Typos here, due to Cut’n’paste issues i.e. double quotes became full stops.

# Read the file as plain text

f = open("mydata.json","r")
text = f.read()
f.close()

# convert to Dictionary
import json         #module to handle JSON format
jdata = json.loads(text)
  • That json.loads() function converts a string (text) to the dictionary called jdata at 10s of MBs of JSON a second.
  • Now let’s extract certain fields using a natural Python syntax
# get the MHz from the first record (record zero)

print("MHz=%d"%(jdata[0]["mhz"]))

# Loop through all the records pulling out the MHz numbers and the CPU utilisation user-mode percent (it’s in a sub-dictionary called cpu_util)

for sample in jdata:
    print("MHz=%d"%(sample["mhz"]))
    print("User percent=%d"%(sample["cpu_util"]["user"]))

Latest project using Python is njmon for AIX and Linux – the new turbo nmon. 

  • The J is for JSON and we use Python to make data handling very easy

  • For AIX it uses the libperfstat C library – if you want details see: man libperfstat on AIX or vi /usr/include/libperfstat.h
    • Or find the worked example C code in KnowledgeCenter
  • Status: quirky but usable for an expert C programmer
  • Vast quantity of perf stats, running to 1,000 stats for AIX and VIOS (if you have many disks, nets or ask for process stats then that grows rapidly)
  • And for a bonus libperfstat gives us the current CPU MHz
  • Similar for Linux
  • njmon is written in C, using C functions into the UNIX kernel, and generates JSON data. Then we use Python to accept the data and inject it live into a time-series database for graphing in real-time

Stand by for something strange

  • Well-known programming problem = swapping the values of two variables a and b. The classic solution is using a temporary variable.
temp = a
a = b
b = temp
  • But can you do that without the temp variable?
  • Not in C – I have known this for 40 years!!
  • Python answer
a,b = b,a
  • It is using a native data structure called a tuple. As it's a common programming task, they built it into the language.
  • Warning: weirdness next.
  • How about this?
a = a + b
b = a - b
a = a - b
  • Wow! I thought it was impossible!
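You can check both tricks for yourself in a few lines (a quick sketch with made-up starting values):

```python
# Check both swap techniques with made-up starting values.
a, b = 5, 9

# Tuple swap: the right-hand side builds the tuple (b, a),
# which is then unpacked back into a and b.
a, b = b, a
print(a, b)        # 9 5

# Arithmetic swap: no temporary variable, no tuple.
a = a + b          # a now holds the sum
b = a - b          # sum - original b = original a
a = a - b          # sum - new b (original a) = original b
print(a, b)        # 5 9
```

One caveat on the arithmetic version: in C it can overflow for large fixed-size integers, whereas Python integers are arbitrary precision, so it always works here.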

Next, a tiny web-grabbing Python example

  • Lots of websites and web services keep stats that you can download with your browser.
  • I have used sourceforge.net (used below) and youtube.com for examples.
  • They are most often in JSON, and Python has a requests module that makes “talking” to websites very simple
  • As an example, bung this in your browser (not Internet Explorer):
  • https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-31&os_by_country=false
  • And you should get a load of JSON data back that Firefox and Chrome will organise and make pretty.
  • Using Python and its requests module we can graph the downloads from the nmon project on SourceForge over time
  • We also need to change the date format, which shows off some of Python's simple data manipulation, from
  • ['2018-09-17 00:00:00', 2]
  • to
  • ,['Date(2018,09,17,00,00,00)', 2]
  • Below is the source code – with many extra print lines and comments, so if you run it you will see the data structures.
  • I changed the code here so it does not rely on my nchart Python module
  • The green bits are debug but useful, if you run it, to see the data
  • The red bits are the web-page preamble and postamble to set up the Google Charts library graph.
#!/usr/bin/python3
#--------------------------------- Get the data using REST API from sourceforge.net
import requests
URL='https://sourceforge.net/projects/nmon/files/stats/json?start_date=2000-10-29&end_date=2020-12-04&os_by_country=false'
ret = requests.get(URL)
print(ret.status_code)
#print("return code was %d"%(ret.status_code))
#print("characters returned %d"%(len(ret.text)))
#---------------------------------- Create dictionary
import json
jdata = json.loads(ret.text)
#print(jdata)
months=0
count=0
for row in jdata['downloads']:
#    print(row)
    months=months+1
    count=count+row[1]
print("months=%d"%(months))
print("count =%d"%(count))
#---------------------------------- Create web page+graph using Googlechart library
file = open("downloads.html","w")
file.write('<html>\n'
'  <head>\n'
'    <script type="text/javascript" src="https://www.gstatic.com/charts/loader.js"></script>\n'
'    <script type="text/javascript">\n'
'      google.charts.load("current", {"packages":["corechart"]});\n'
'      google.charts.setOnLoadCallback(drawChart);\n'
'      function drawChart() {\n'
'        var data = google.visualization.arrayToDataTable([\n'
'[{type: "datetime", label: "Date"},"Files"]\n' )

for row in jdata['downloads']:
    datestr = row[0]                   # e.g. "2018-09-01 00:00:00"
    datestr = datestr.replace("-",",")
    datestr = datestr.replace(" ",",")
    datestr = datestr.replace(":",",")
    file.write(",['Date(%s)',%d]\n"%(datestr,row[1]))

file.write('        ]);\n'
'        var options = {title: "nmon Downloads", vAxis: {minValue: 0}};\n'
'        var chart = new google.visualization.AreaChart(document.getElementById("chart_div"));\n'
'        chart.draw(data, options);\n'
'      }\n'
'    </script>\n'
'  </head>\n'
'  <body>\n'
'    <div id="chart_div" style="width: 100%; height: 500px;"></div>\n'
'  </body>\n'
'</html>\n')
file.close()
  • The output – skipping the dump of the JSON and the 105 rows of monthly stats – looks like this:
['2018-05-01 00:00:00', 14153]
['2018-06-01 00:00:00', 12794]
['2018-07-01 00:00:00', 12422]
['2018-08-01 00:00:00', 13127]
['2018-09-01 00:00:00', 11872]
['2018-10-01 00:00:00', 13628]
['2018-11-01 00:00:00', 12805]
['2018-12-01 00:00:00', 15611]
months=114
count =686634
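As an aside, Python doubles as a handy calculator for sanity-checking numbers like these – for example, the average monthly downloads implied by the two totals printed above:

```python
# Quick check: average monthly downloads from the totals printed above.
months = 114
count = 686634
print("average per month = %d" % (count / months))   # average per month = 6023
```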

  • So that was captured in January 2019, with 686,634 downloads of nmon and its tools so far, and the generated monthly downloads graph looks like this:
  • The generated downloads.html file has the following contents – note I removed a few hundred lines of data in the middle. Colours are from the vim editor – see later comments.
  • image
  • A simpler graph:
  • image

C Programmers be aware:

I keep making the same mistakes in writing Python.

  1. On Linux, with the right export TERM=linux setting and using vi (actually vim), you have syntax highlighting, which reduces errors a lot – go for a white background, or comments in dark blue are unreadable. See the picture below – I have not done that colouring – it is all vim.
  2. vim also helps with auto indentation.
  3. if, for and while statements have a “:” at the end of the line.
  4. In Python it is print and in C it is printf – I had to teach my fingers to miss out the final “f”
  5. Those maddening 4-space indentations have to be exactly right!
  6. Anything I missed?

image

– – – The End – – –

Source

Deploy Citrix Virtual Apps and Desktops Service on AWS with New Quick Start

Posted On: Jan 7, 2019

This Quick Start automatically deploys Citrix Virtual Apps and Desktops on the Amazon Web Services (AWS) Cloud in about 90 minutes. The deployment includes a hosted shared desktop and two sample published applications.

Using the Citrix Virtual Apps and Desktops service, you can deliver secure virtual apps and desktops to any device, and leave most of the product installation, setup, configuration, upgrades, and monitoring to Citrix. You maintain complete control over applications, policies, and users while delivering a high-quality user experience.

The Quick Start is intended for users who want to set up a trial deployment or want to accelerate a production implementation by automating the foundation setup.

To get started:

For additional AWS Quick Start reference deployments, see our complete catalog.

Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices.

This Quick Start was built in collaboration with Citrix Systems, Inc., an AWS Partner Network (APN) Partner.

Source

Linux Today – Using the SSH Config File

If you regularly connect to multiple remote systems over SSH, you'll find that remembering all of the remote IP addresses, different usernames, non-standard ports and various command-line options is difficult, if not impossible.

One option would be to create a bash alias for each remote server connection. However, there is another, much better and simpler solution to this problem. OpenSSH allows you to set up a per-user configuration file where you can store different SSH options for each remote machine you connect to.

This guide covers the basics of the SSH client configuration file and explains some of the most common configuration options.

We are assuming that you are using a Linux or a macOS system with OpenSSH client installed.

The OpenSSH client-side configuration file is named config and it is stored in the .ssh directory under the user's home directory. The ~/.ssh directory is automatically created when the user runs the ssh command for the first time.

If you have never used the ssh command, you'll first need to create the directory using:

mkdir -p ~/.ssh && chmod 700 ~/.ssh

By default the SSH configuration file may not exist, so you may need to create it using the touch command. The file must be readable and writable only by the user, and not accessible by others:

touch ~/.ssh/config && chmod 600 ~/.ssh/config

The SSH Config File takes the following structure:

Host hostname1
    SSH_OPTION value
    SSH_OPTION value

Host hostname2
    SSH_OPTION value

Host *
    SSH_OPTION value

The contents of the SSH client config file are organized into stanzas (sections). Each stanza starts with the Host directive and contains specific SSH options that are used when establishing a connection with the remote SSH server.

Indentation is not required, but is recommended since it will make the file easier to read.

The Host directive can contain one pattern or a whitespace-separated list of patterns. Each pattern can contain zero or more non-whitespace characters or one of the following pattern specifiers:

  • * – matches zero or more characters. For example, Host * will match all hosts, while 192.168.0.* will match all hosts in the 192.168.0.0/24 subnet.
  • ? – matches exactly one character. The pattern Host 10.10.0.? will match all hosts in the 10.10.0.[0-9] range.
  • ! – at the start of a pattern negates its match. For example, Host 10.10.0.* !10.10.0.5 will match any host in the 10.10.0.0/24 subnet except 10.10.0.5.

The SSH client reads the configuration file stanza by stanza, and if more than one pattern matches, the options from the first matching stanza take precedence. Therefore, more host-specific declarations should be given at the beginning of the file and more general overrides at the end.

You can find a full list of available ssh options by typing man ssh_config in your terminal or by visiting the ssh_config man page.

The SSH config file is also read by other programs such as scp, sftp and rsync.

Now that we've covered the basics of the SSH configuration file, let's look at the following example.

Usually, when you connect to a remote server via SSH you specify the remote user name, hostname and port. For example, to connect as a user named john to a host called dev.example.com on port 2322 from the command line, you would type:

ssh john@dev.example.com -p 2322

If you would like to connect to the server using the same options as provided in the command above simply by typing ssh dev, you'll need to put the following lines in your ~/.ssh/config file:

~/.ssh/config
Host dev
    HostName dev.example.com
    User john
    Port 2322

Now if you type:

ssh dev

the ssh client will read the configuration file and use the connection details that are specified for the dev host.

This example gives more detailed information about the host patterns and option precedence.

Let’s take the following example file:

Host targaryen
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key

Host tyrell
    HostName 192.168.10.20

Host martell
    HostName 192.168.10.50

Host *ell
    user oberyn

Host * !martell
    LogLevel INFO

Host *
    User root
    Compression yes
  • If you type ssh targaryen, the ssh client will read the file and apply the options from the first match, which is Host targaryen. Then it checks the next stanzas one by one for a matching pattern. The next matching one is Host * !martell (which means all hosts except martell), and it will apply the connection options from this stanza. Finally, the last definition Host * also matches, but the ssh client will take only the Compression option because the User option is already defined in the Host targaryen stanza. The full list of options used in this case is as follows:
    HostName 192.168.1.10
    User daenerys
    Port 7654
    IdentityFile ~/.ssh/targaryen.key
    LogLevel INFO
    Compression yes
  • When running ssh tyrell the matching host patterns are: Host tyrell, Host *ell, Host * !martell and Host *. The options used in this case are:
    HostName 192.168.10.20
    User oberyn
    LogLevel INFO
    Compression yes
  • If you run ssh martell the matching host patterns are: Host martell, Host *ell and Host *. The options used in this case are:
    HostName 192.168.10.50
    User oberyn
    Compression yes
  • For all other connections, the options specified in the Host * !martell and Host * sections will be used.
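The first-match-wins rule can be modelled in a few lines of Python. This is a simplified sketch using the standard fnmatch module, not OpenSSH's actual parser, with the example stanzas above hard-coded as data:

```python
from fnmatch import fnmatch

# The example config above, as (pattern-list, options) pairs in file order.
stanzas = [
    ("targaryen",  {"HostName": "192.168.1.10", "User": "daenerys",
                    "Port": "7654", "IdentityFile": "~/.ssh/targaryen.key"}),
    ("tyrell",     {"HostName": "192.168.10.20"}),
    ("martell",    {"HostName": "192.168.10.50"}),
    ("*ell",       {"User": "oberyn"}),
    ("* !martell", {"LogLevel": "INFO"}),
    ("*",          {"User": "root", "Compression": "yes"}),
]

def matches(patterns, host):
    """A pattern list matches if any positive pattern matches the host
    and no negated (!) pattern does."""
    matched = False
    for pat in patterns.split():
        if pat.startswith("!"):
            if fnmatch(host, pat[1:]):
                return False
        elif fnmatch(host, pat):
            matched = True
    return matched

def effective_options(host):
    opts = {}
    for patterns, options in stanzas:
        if matches(patterns, host):
            for key, value in options.items():
                opts.setdefault(key, value)   # first match wins
    return opts

print(effective_options("tyrell"))
# {'HostName': '192.168.10.20', 'User': 'oberyn', 'LogLevel': 'INFO', 'Compression': 'yes'}
```

Running it for tyrell, martell and targaryen reproduces the three option lists worked out above.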

The ssh client receives its configuration in the following precedence order:

  1. Options specified from the command line
  2. Options defined in the ~/.ssh/config
  3. Options defined in the /etc/ssh/ssh_config

If you want to override a single option you can specify it on the command line. For example if you have the following definition:

Host dev
    HostName dev.example.com
    User john
    Port 2322

and you want to use all other options but to connect as user root instead of john simply specify the user on the command line:

ssh -o "User=root" dev

The -F (configfile) switch allows you to specify an alternative per-user configuration file.

If you want your ssh client to ignore all of the options specified in your ssh configuration file, you can use:

ssh -F /dev/null user@example.com

You have learned how to configure your user ssh config file. You may also want to set up SSH key-based authentication and connect to your Linux servers without entering a password.

Source

Best Linux Distros 2019 | Linux Distros Introduction

Here you find the best linux distros

2019 is finally here, folks! And what better way to start it than to shed light on some of the best Linux distributions at your disposal. Even though there are hundreds of distributions, we have created a list of distros based on popularity, features and ease of use.

In this article, we shall focus on the best Linux distributions for 2019. But remember, each distro has its own unique features, and you should select one based on your requirements.

Best Linux Distribution 2019 for Desktop/Laptops

# Ubuntu

best linux distributions 2019 Ubuntu 18.10

Codenamed Cosmic Cuttlefish, Ubuntu 18.10 takes over from Ubuntu 18.04 Bionic Beaver LTS, whose long-term support has now been extended to 10 years. On the other hand, Ubuntu 18.10 will only have 9 months of support, lasting up to July 2019. You can also look forward to Ubuntu 19.04 (named ‘Disco Dingo’), whose release is scheduled for April 18, 2019.

Nonetheless, Ubuntu 18.10 comes packed with an array of new features that improve the user experience. Among the new features are:

  • GNOME 3.30
  • Improved Battery life for laptops
  • Fingerprint Scanner support
  • Linux Kernel 4.18
  • Faster installation and booting times

Before proceeding to install Ubuntu 18.10, ensure that your system meets the following requirements:

  • 2 GB RAM
  • 2 GHz dual-core processor
  • 25 GB of free hard disk space
  • 1024×768 screen resolution
  • A DVD drive or USB port for the installer media

Read Also: How to Install Ubuntu 18.04 Dual Boot with Windows 10

# Elementary

elementary 5.0 best linux distributions 2019

If you have been a long-term Linux user, ElementaryOS should top the favorite list. The latest version gives the user a vibrant feeling, and its intuitive smartphone-style interface also helps when working. The OS (Elementary 5.0) is codenamed “Juno” and offers the most refined desktop version yet. Here are some of its specific features, for Juno and any other ElementaryOS release.

  • Built-in Night Light
  • Picture-in-Picture Mode
  • Image Resizing Made Easy
  • Easier App Funding
  • Simple App Launcher
  • Adaptive Panel
  • Easily Available Keyboard Shortcut
  • Bold Use of Color
  • Easy Web Integration
  • Transparent Readable Updates

Minimum Requirements for Installation

  • Intel Core i3 or compatible dual core 64-bit processor
  • 4GB RAM
  • At least 15GB of SSD free space
  • Internet access
  • 1024 x 768 screen resolution
  • CD/ DVD/ USB drive for installation
  • Based on: Ubuntu, which is in turn based on Debian
  • Desktop Environment: Pantheon, built on top of GNOME
  • Package Management: dpkg, with the Eddy GUI tool
  • General Purpose: Desktop
  • Download Link: https://elementary.io/

Read Also: How to Install Elementary OS 5.0 Juno with Windows 10

# Solus

Solus is one of the newcomers to the scene and is already making serious breakthroughs. The distribution gives you a clean and polished experience. The robust repositories include almost any software you can imagine, and they get updates with every release.

The desktop environment, known as ‘Budgie’, is attractive, simple and clean, and offers a similar experience to Chrome OS without the need to purchase a Chromebook. You will get all the software that you need, and the desktop, built on GNOME technologies, is light and fast.

Budgie is clean and has a visually appealing user interface giving a wonderful and spectacular user experience. A single button opens the main menu as you have seen in a typical Windows 10 environment.

Minimum Requirements for Installation

  • A dual-core processor of at least 2GHz
  • 4GB RAM
  • Direct X11 / GeForce 460 or higher
  • 10GB available disk space
  • Based on: A distribution built from scratch
  • Desktop Environment: Budgie, which uses GNOME technologies
  • Package Management: PiSi package manager, maintained as eopkg
  • General Purpose: Desktop
  • Download Link: https://getsol.us/download/

Read Also: How to Install Latest Solus from USB

# Fedora

fedora 29 best linux distribution

Fedora 29 is the distro that debuts new technologies, integrating them into the Operating System and resulting in some of the most innovative features of any distribution. The only downside is the short support cycle, which lasts until a month after the release of the next-but-one version (releases come roughly every six months).

  • Gnome 3.30
  • Fedora Silverblue
  • TLS 1.3
  • Python 3.7
  • Perl 5.28
  • ZRAM support for ARM images
  • New notification area

Minimum Requirements for Installation

  • 6GB free hard disk space
  • 2GB RAM
  • Intel 64-bit processor
  • AMD processor with AMD-V and AMD64 extensions
  • Based on: Red Hat
  • Desktop Environment: GNOME (default)
  • Package Management: RPM, with ‘dnf’ built on top of it
  • General Purpose: Desktop
  • Download Link: https://getfedora.org/

# Mint

linux mint best linux distribution

If you are moving from a Windows or Mac OS platform, you may want to use the simple Linux Mint as you try to find your way into the world of Linux. Mint comes fully packed with the software you need to get back on track. Mint gives you a choice of four desktop environments, with Cinnamon being the closest to the Windows environment; however, MATE is still a popular choice because it is light on resources and loads faster using minimal memory.

Mint is always synchronized with the latest Ubuntu LTS releases, so once you are running Mint, security updates keep flowing.

The default theme on Linux Mint is Mint-Y, a successor to Mint-X. Mint-Y is available in three flavors, including light and dark variants.

Linux Mint has two software managers, known as Synaptic and the Software Manager. Both are front ends to APT. Synaptic's installation window is plain text, unlike the Software Manager, which is a GUI.

Minimum Requirements for Installation

  • 64-bit x86 Processors
  • 2GB of RAM
  • 10GB of free hard disk space
  • A graphics card that supports at least 1024 x 720 resolution
  • CD/DVD/USB facilities
  • Based on: Debian and Ubuntu
  • Desktop Environment: Cinnamon (default), MATE and Xfce
  • Package Management: dpkg
  • General Purpose: Desktop
  • Download Link: https://linuxmint.com/download.php

# Arch Linux

Arch-Linux

The Linux gaming platform has been a hot issue for many years, and gamers still cannot agree whether Linux is a robust gaming platform or not. Strong community support is what makes Arch a strong contender. Arch has many customization options that give a gamer the opportunity to free up system resources for gaming applications while still tuning general system performance.

The Operating System ships with a package management tool aptly named Pacman, which uses tar to package all installations. Pacman handles binary system packages and works with the Arch Build System, which manages official Arch repositories and your own builds.

You only need to run pacman -Syu to update all packages; to install a group of packages that ship together, run a command such as pacman -S gnome.

Every rolling-release update brings a large number of binary updates to the repositories. The timely releases make sure you never need to re-install your OS at any given time; instead, regular system updates get you the latest Arch software.

Minimum Requirements for Installation

  • An i686 or x86-64 based processors
  • 2GB RAM (you can increase for better graphical performance)
  • 10GB hard disk free space
  • Based on: Independent distribution that relies on its own build system and repositories
  • Desktop Environment: Cinnamon, GNOME, Budgie and more
  • Package Management: pacman
  • General Purpose: Desktop and Multipurpose
  • Download Link: https://www.archlinux.org/download/

Read Also: Beginners Guide For Arch Linux Installation

# Antergos

Antergos best linux distributions 2019

The Antergos OS is one of the most underrated distributions in the Linux family. Antergos adheres to the arch principles of simplicity, modernity, versatility, centrality, and practicality. There is an option of using a GUI installer to make the task simple.

All the major desktop environments, such as GNOME, Cinnamon, KDE, Openbox, Xfce, and MATE, are supported. The icons on the interfaces provide a superior look that matches the beautiful theme.

Antergos is 100% functional straight out of the box, but with a limited number of packages. Additional packages are installed via the Pacman package manager, which pulls new updates straight from the repos.

Minimum Requirements for Installation

  • An i686 or x86-64 based processors
  • 2GB RAM (you can increase for better graphical performance)
  • 10GB hard disk free space

Read Also: How to Install Antergos Latest Version

# Manjaro

manjaro best linux distributions 2019

Manjaro is an easy and user-friendly Operating System based on Arch Linux. Key features of this distro include the intuitive installation process, automatic hardware detection, stable updates with every release, uses special Bash scripts for managing graphics and more options available in supporting desktop configurations.

Manjaro comes with different desktop flavors such as GNOME 3.26, Xfce 4.12, KDE 5.11, MATE 1.18, Cinnamon 3.6, and Budgie 10.4.

Manjaro comes packed with software such as Firefox, LibreOffice, and Cantata for all your music and library activities. Right-click on the desktop to access several widgets that you can use to add icons to the desktop panel.
Use the Manjaro Settings Manager to select the kernel version that you want to use, as well as to install language packs and third-party drivers for specific hardware. The Manjaro settings are accessible via the M icon in the system tray, under the Settings menu.

Cantata is the default music app, among other options such as Clementine.

The Octopi package manager organizes packages and is easy to use.

Minimum Requirements for Installation

  • Intel-based i686 or i386 processor
  • 1 GB RAM
  • 8 GB free space on the hard disk
  • Based on: Arch Linux
  • Desktop Environment: Cinnamon, GNOME, KDE Plasma 5, Xfce, Budgie, Deepin, Architect, and MATE
  • Package Management: pacman
  • General Purpose: Desktop and Multipurpose
  • Download Link: https://manjaro.org/download/

# Pop Linux from System 76

pop linux best linux distributions

Pop Linux is a new Linux distro designed to have minimal clutter on the desktop. The creators of Pop OS, System76, specialize in building custom Linux PCs, and they have tweaked Pop with the necessary improvements to the graphical interface, such as switching between integrated Intel graphics and a dedicated NVIDIA GPU with a single mouse click. You can also install the NVIDIA drivers during the initial installation instead of using the open-source Nouveau drivers that are present in most distributions.

Pop!_OS does not support true hybrid graphics in the way Windows does, but switching between the Intel and NVIDIA graphics solutions is easy compared with other Linux distributions. Pop!_OS works on any PC and with the functionality expected from a Linux distro. Forbes earlier suggested Pop OS gives a good desktop experience on the Lenovo ThinkPad X1 laptop.

Pop OS is still emerging as a convenient tool for managing dual graphics options.

Prominent Features

  • Ubuntu based
  • Built from scratch
  • Customized GNOME 3 as the preferred Desktop Environment
  • Better Support
  • Runs well on System 76 Laptops

Pop! Shop

pop-shop

This is an AppCenter, otherwise known as the Pop!_Shop, a project developed by the ElementaryOS team. The main purpose of this center is to make app organization easy and to enable an easy search experience.

Minimum Requirements for Installation

  • 2GB RAM though the recommended is 4GB
  • Minimum 16GB storage the recommended is 20GB
  • 64-bit processors
  • Based on: Ubuntu
  • Desktop Environment: GNOME (default), Budgie and more
  • Package Management: dpkg
  • General Purpose: Desktop and Multipurpose
  • Download Link: https://system76.com/pop

Read Also: How to Install Pop!_OS from System76

Best Linux Distro 2019 for Security

Online privacy is a big issue in this era of mass surveillance by both the state and online marketers. If you are keen on keeping these surveillance operations at bay, you need an operating system that has been created from the ground up with one key thing in mind – security and privacy.

Therefore, with this in mind, here are the distros that will work for hackers, pen-testers, and the terminally paranoid.

# Kali Linux

kali-linux best linux distribution 2019

Kali is becoming more popular in the cyber-security community as hackers' number one choice. Kali has more than 300 tools applicable in different areas: key-loggers, Wi-Fi scanners, tools for scanning and exploiting targets, password crackers, probes and many other uses.

From the word go, this is not a beginner-friendly Operating System. Courses on how to use it effectively are mostly taught online, and it is the preferred choice of ethical hackers, black hats, and penetration testers. Kali is notable for its realism and attention to detail. Kali Linux is a Debian-based OS, which means all the software that you need can be installed with the Debian commands.

The only available user on Kali is the root user, and all work within the OS is done under this identity at all times. You can still add another account without root privileges, but that goes against the logic of using Kali for security work.

Kali has many penetration-testing tools, both GUI and CLI. When testing these applications, be aware that some commands may not work with your system or may cause problems on the network. When it comes to security applications, ignorance is not an excuse.

If the software you want is not in the Debian packages within Kali, you can install it, but with the stern warning that such additions may compromise system stability.

Minimum Requirements for Installation

  • 128 MB RAM
  • 2GB free disk space
  • CPU that supports AMD64, i386, armel, arm64, and armhf

With the Desktop environment

  • 2GB RAM
  • 20 GB disk space
  • CPU that supports AMD64, i386, armel, arm64, and armhf
  • Based on: Debian
  • Desktop Environment: GNOME (default)
  • Package Management: dpkg
  • General Purpose: Penetration testing / Cyber security
  • Download Link: https://www.kali.org/downloads/

# Tails

tails-linux best linux distributions 2019

For anyone looking for the best online privacy, Tails should be able to provide that security. Tails is another Debian-based OS with privacy in mind; Tails does not store any data by default, which is why most developers refer to it as the amnesic distribution.

Tails routes all network connectivity through Tor, and the OS can run from a flash disk and disguise itself to look like Windows in public. Everything on Tails is encrypted; this includes messaging, emails, and files.

The OS uses GNOME 3 Classic as its window manager. Tails ships with multiple default applications, including an “unsafe browser” that you can use to access the internet without anonymity.

Once you boot into the system, a dialog box pops up asking whether you want more options. No means that you will log in with no administrative privileges; Yes gives you a choice of setting up a password that allows the changing of network settings and gives you root access. MAC spoofing hides your machine on the network you just joined.

You can use Onion Circuits to confirm connection details the moment you join the internet. The Tor Browser is what any person obsessed with privacy uses; the latest update, based on Firefox 45.0.3, has extensions that block annoying ads.

KeePassX saves all encrypted passwords, which are only unlocked using the master key. Tails can be run from a USB drive, or installed on the computer and made bootable.

Minimum Requirements for Installation

  • CD/DVD/USB ports
  • 64 bit x86-64 processor
  • 2GB RAM

Best Lightweight distro 2019

Why should you risk running the old, insecure, unsupported Windows XP when there are tons of secure Linux distributions that are lightweight and will work with machines of that era? Outdated machine hardware should not box you into loading an unpredictable and insecure Operating System. These Linux distributions are more than just light – they are fast and secure.

# Lubuntu

Lubuntu makes it onto the list of lightweight Linux distributions that can work well on netbooks and older PCs. Lubuntu is an official Ubuntu flavor, giving its users the opportunity to use the same software from the official Ubuntu software store.

Starting from Lubuntu 18.10, 32-bit images will no longer get support from the developers, and therefore anyone with old 32-bit hardware will have to move to a 64-bit processor.

Lubuntu is a fast Operating System for old desktops that uses the LXQt desktop environment alongside a selection of light applications. The switch from the previous LXDE desktop to the current LXQt started with Lubuntu 18.10. Comparing the two, LXQt is the more modern, being the result of the merger of LXDE and Razor-qt.

All the necessary software that you need ship with the OS. Lubuntu is even better for anyone who is familiar with Ubuntu and wants to upgrade the old laptop or PC.

Minimum Requirements for Installation

  • Pentium II or more
  • 256MB RAM
  • 5GB free disk space
  • Based on: Ubuntu
  • Desktop Environment: LXQt, LXDE
  • Package Management: dpkg
  • General Purpose: Desktop and Multipurpose
  • Download Link: https://lubuntu.net/downloads/

# Linux Lite

linux-lite best linux distribution

The growth of Linux Lite has been quite rapid in the recent past because beginners find it easy to use, attractive, and of course lightweight. It is another Ubuntu-based Linux Operating System, built on the Long-Term Support (LTS) releases, and has powerful and popular applications.

Using Linux Lite means you get a functional Linux desktop experience, because it uses a menu interface more or less similar to Windows XP. The Xfce desktop environment makes things comfortable for a Linux newbie.
Linux Lite handles with ease what other distributions struggle with, even with its lightweight structure, and offers all the tools that promise the best performance. The latest release at the moment is Linux Lite 4.2, which has an automatic screen-adjustment feature, Redshift, that adjusts the screen colour temperature by night or day.

Minimum Requirements for Installation

  • 700MHz Processor
  • 512MB RAM
  • Screen Resolution of about 1024 x 768

# TinyCore

TinyCore best Linux distributions

TinyCore is an incredibly compact distribution that is available in three different sizes. The barebones TinyCore is by far the tiniest of Linux distros and allows users to build their own variations.

The lightest TinyCore is about 11MB and has no graphical interface, with the option of adding one after installation. An alternative is TinyCore version 9.0, which is 16MB in size and has the option of the FLTK or FLWM desktop environments. The third option is to install CorePlus, which is more than 106MB and has a choice of lightweight window managers such as IceWM and Fluxbox. CorePlus also has support for Wi-Fi and non-US keyboards.

TinyCore saves storage space by shipping with little more than a terminal, a basic text editor, and a network connection manager; everything else is installed on demand, so you need a wired network connection during the initial installation.

Use the TinyCore Control Panel for quick access to different system configurations, and use the package manager to install more software, such as multimedia codecs.
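As a brief sketch, installing an extension from the command line uses TinyCore's `tce-load` utility; the extension name `nano.tcz` here is just an illustrative example:

```shell
# Download (-w) an extension from the TinyCore repository and install (-i) it
tce-load -wi nano.tcz
```

Whether the extension loads again on subsequent boots depends on your onboot.lst configuration.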

Minimum Requirements for Installation

  • 128MB RAM
  • 32-bit and 64-bit processors; other ports such as piCore (for the Raspberry Pi) are also available
  • 5GB disk space
Based on: BusyBox, FLWM, Tiny X, FLTK
Desktop Environment: GNOME, i3, IceWM, HackedBox
Package Management: tce-load
General Purpose: Desktop
Download Link: http://tinycorelinux.net/downloads.html

# Puppy Linux

Puppy Linux is a veteran in the world of lightweight Linux, and it boasts a vast range of applications across its different versions. The XenialPup edition of the operating system works with the Ubuntu repositories.

Being one of the oldest lightweight distributions on the market, its developers have been working to keep it slim and light for more than a decade now. The current versions are Slacko Puppy 6.3.2, based on Slackware, and XenialPup 7.5, based on Ubuntu 16.04 LTS.

Puppy Linux is full of apps, including unusual ones such as HomeBank, which helps with financial management, and GWhere, which manages disk catalogs. Graphical tools are also available for managing Samba shares and the firewall.

XenialPup includes the QuickPet utility, which manages the installation of the most popular apps.

Minimum Requirements for Installation

  • 128MB RAM
  • 32-bit and 64-bit processors
  • 5GB disk space

Best Enterprise Server Distros 2019

In the server operating system arena, Linux enjoys the bigger share because of its stability, freedom, security, and hardware support. Linux servers suit expert users and system administrators as well as specialized users such as programmers, gamers, and ethical hackers.

These operating systems come with special tools and enjoy long-term support. They give the user the best uptime, security, efficiency, and performance. Let us look at two of the most used Linux server options.

# RedHat

Red Hat Enterprise Linux (RHEL) Server enjoys the same position in the server world that Ubuntu holds in desktop Linux. Red Hat, the maker of RHEL, has been in the industry for a very long time and has refined this server operating system to the point where most software packages and hardware carry certification support for it.

In addition, RHEL offers some of the longest ongoing support cycles among Linux server operating systems.

The latest version comes on three disks and needs only about 38 minutes to install using the graphical interface. New users who want to try out the features can optionally install the Desktop version; installing the server version installs only the software that supports the server edition.

The Red Hat Server edition has two desktop environments: KDE 3.0 and GNOME 1.4. The Nautilus file manager and Ximian Evolution help you manage the system. Evolution offers a comfortable Outlook-like environment, with similar applications for email, calendar, contacts, and Palm OS integration.

The professional (paid) version offers the Sun Microsystems Star Office 5.2.

Minimum Requirements for Installation

  • Any Pentium class processor (X86, POWER Architecture, z/Architecture, S/390)
  • 5GB hard disk space
  • 32MB RAM – no graphics, 64MB RAM – with graphics
  • Network interface

# Suse Server

SUSE Linux Enterprise Server (SLES) is an operating system that opens new avenues for transformation in the software-defined era. SLES makes IT infrastructure efficient and engages system developers to help solve an organization's critical workloads.

It bridges software-defined infrastructure by providing a common base that enables easy migration of applications, improves system management, and makes it easy to adopt containers.

SUSE automatically installs the minimal required server packages; use the YaST Control Centre to configure the network and most of the system settings. The Zypper package manager handles downloading and installing essential server software such as Postfix.
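As an illustrative sketch (assuming root privileges on a registered SLES system), installing and enabling Postfix with Zypper could look like this:

```shell
# Refresh the configured repositories so package metadata is current
zypper refresh
# Install the Postfix mail server without interactive prompts
zypper --non-interactive install postfix
# Start Postfix now and enable it at boot
systemctl enable --now postfix
```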

Minimum Requirements for Installation

  • 4GB RAM
  • 16GB disk space
  • Network interface
Based on: independent (RPM-based)
Desktop Environment: GNOME (default), GNOME Classic, IceWM, SLE Classic
Package Management: Zypper
General Purpose: Desktop and Server
Download Link: https://www.suse.com/products/

Best Linux Distribution 2019 for Programmers

Most developers use Linux-based operating systems to get their work done or to create something new. Programmers, more than anybody else, care about the power, stability, compatibility, and flexibility of an OS.

Ubuntu and Debian seem to be the leading contenders, but there are several others; today we will focus on Debian.

# Debian

The Debian GNU/Linux distribution is the foundation many other Linux distros build on. The latest version, known as the Stretch edition, also claims its position as a preferred choice of programmers.

Its enormous number of packages, designed with stability in mind, come with self-help documentation that helps you solve issues as you work on your project. The Debian project also has a testing branch where all the new software packages land first; it is a good fit for advanced programmers and system administrators.

Linux beginners may not find Debian friendly, as its focus is on advanced programmers and users. The reasons to consider Debian for your programming tasks are the easy availability of resources and the mature .deb packaging system managed by dpkg and APT.
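As a hedged example (run as root on a Debian system), setting up a typical development toolchain with APT, and then inspecting the result with dpkg, could look like this:

```shell
# Refresh the package index from the Debian repositories
apt-get update
# Install the core compiler toolchain and git without prompts
apt-get install -y build-essential git
# List the installed package and its version with dpkg
dpkg -l build-essential
```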

Version 9 of Debian ships GNOME 3.22.2 and KDE Plasma 5.8. Other interfaces include Budgie 10.2, Cinnamon, MATE, and LXQt; all are available from the package manager.

Minimum Requirements for Installation

  • 512MB RAM
  • 2GB free hard disk space
Based on: Debian
Desktop Environment: GNOME 3, KDE Plasma 5.8, Cinnamon, MATE, Budgie 10.2
Package Management: dpkg
General Purpose: Desktop and Multipurpose
Download Link: https://www.debian.org/distrib/

Our thoughts

The effort to find the best Linux distribution gave us a few excellent options in different categories. We tried and experimented with most of them and listed what we consider the most appropriate in each category. We are well aware that things change and new distributions pop up all the time, so we encourage you to participate by adding anything you think we left out in the comment section below.
Please do not forget to tell us which ones you like or find to be the better distro.
