Excited About Application Modernization? Contain Yourself…

Introduction

For those of us who work with technology every day, it’s important to remember one key thing: every topic is new to someone, somewhere, every day.

With that in mind, we are starting a series of posts here that begins with the basics to help you build your knowledge of modern application delivery. Think of it as Containers 101.

To understand what containers are and how they benefit application developers, DevOps, and operations teams, let’s look at an essential change in the architecture of applications: the use of microservices.

What are Microservices?

Microservices are an evolution of a software architecture concept that developed in the 1990s and became widespread in the 2000s – service-oriented architecture (SOA). SOA defines an application as a collection of services. A service is an independent and self-contained function that is well-defined and stateless. Services act together as an application by taking input from each other (or, at one end of the application pipeline, from a user or other input source), performing some processing on the data, and passing it on to another service (or, at the other end of the pipeline, to some data store or to a user).

Services are reusable – that is, the same service can be connected to many different services, often from different applications with the same needs. Here’s a very simple example: whether it is a person, a command shell, or another program that needs to convert a domain name to an IP address, there can be a single Domain Name Service in the environment that resolves those requests.

Many of today’s developers were exposed to SOA in the form of web services, functions that could be exposed by web protocols such as HTTP, with their inputs and outputs composed into structured requests via REST APIs. These services communicate with each other over networks. Services can also use other communication mechanisms, for example, shared memory.
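
To make this concrete, here is a minimal sketch of such a stateless service, written in Python purely for illustration (the port, the URL parameter, and the handler name are all assumptions of this example, not taken from any particular product). It exposes the name-resolution example from above over HTTP and returns JSON:

# Minimal, illustrative stateless service: resolve a hostname to an IP address.
# The port (8080) and the "name" query parameter are hypothetical choices.
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ResolveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        name = query.get("name", ["localhost"])[0]
        try:
            payload = {"name": name, "ip": socket.gethostbyname(name)}
            status = 200
        except socket.gaierror:
            payload = {"name": name, "error": "could not resolve"}
            status = 404
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ResolveHandler).serve_forever()

A request such as http://localhost:8080/?name=example.com returns a small JSON document, and any person, shell script, or other service can call it, which is exactly the reuse property described above.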

Microservices are the next step: monolithic applications that traditionally run on a single server (or redundantly in a cluster) are decomposed – or new ones are built – as a collection of small, well-defined units of processing. Microservices may run on the same system or across the nodes of a cluster.

The benefits of using microservice-based architecture include:

  • functions can be shared with other applications
  • functions can be updated without requiring rebuilding and updating entire applications (continuous delivery)
  • functions can be scaled up and down independently, making it easy to deploy resources where they are needed

Using microservices has become much simpler with the development of a relatively new architectural construct: containers.

What are Containers?

The adoption of virtual machines became widespread in the 1990s and 2000s for IT on industry-standard system architectures because they made it possible to do two very important things: isolate an application from the behavior of other applications on the same system or cluster, and package up all of the resources an application or set of applications requires into an easily deployable, easily portable format. But virtual machines can be too resource-intensive a solution for the needs of many applications, and especially of microservices. Each virtual machine must carry not only the application or service and all of its dependencies, but also an entire operating system environment and the software emulation of a standalone computer.

Containers are a “best of both worlds” architectural idea: they attain many of the isolation and packaging benefits of virtualization by using lighter-weight mechanisms within a shared operating system. Because containers don’t need to boot a new operating system environment, they can start and stop rapidly, often in less than a second – especially useful when scaling them up and down to accommodate changing demand. Because this makes them smaller than virtual machines, more of them can run on the same hardware simultaneously. For the same reason, they are especially well suited to microservices, of which a well-decomposed application may have a large number. But containers still carry with them the libraries and commands that each application or service needs – making it possible for apps and services built on different OS releases to coexist on the same hardware.
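
As a small, hedged illustration of that start-up speed (this sketch assumes a local Docker daemon and the docker Python SDK, neither of which is required by the article; Python is used purely for illustration), the following launches a throwaway container, runs a single command in it, and cleans it up again:

# Start a container from a small image, run one command, capture its output,
# and remove the container again. Assumes Docker and the "docker" package.
import docker

client = docker.from_env()                      # connect to the local Docker daemon
output = client.containers.run(                 # blocks until the command finishes
    "alpine",                                   # a small base image
    "echo hello from a container",
    remove=True,                                # delete the container afterwards
)
print(output.decode().strip())                  # -> hello from a container

Once the image has been pulled and cached locally, the whole round trip typically completes in about a second, which is what makes scaling container-based services up and down so cheap.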

What aren’t Containers?

Containers are not virtual machines. They do not offer the heavy-weight security and performance isolation that virtual machines can offer. (Though there are new container implementations in development that come close; we will discuss those in a future educational blog post.)

Containers are not installation packages – they take the place of software installation. Containers can be deployed on demand to specific servers and their deployment can replace the complex tasks of software installation.

Containers are not whole applications. Well, to be honest, some of them can be: there are certainly gains in flexibility of deployment and management to be had by simply putting a monolithic application in a container. But the real gain comes from rearchitecting legacy applications into microservices, and designing new ones that way. Note that the journey to microservices and application modernization need not be all-or-nothing: many organizations start with their existing applications, gradually chipping away at them to break off reusable and scalable capabilities into microservices.

Where Do I Go From Here?

If you’re new to containers and microservices, I hope this has given you a good introduction. The next post that builds on this knowledge will be available in about two weeks. If you want to read ahead, SUSE Linux Enterprise Server includes a containers module, about which you can find information on our website and in the blog, and SUSE CaaS Platform packages containers and the management capabilities for them into a purpose-built product. If the reading gets too deep, though, check back at the SUSE Blog for more of Containers 101 soon.

Source

Best Free Linux Computer Algebra Systems

A computer algebra system (CAS) is mathematical software that can manipulate mathematical formulae in a way similar to the traditional manual computations of mathematicians and scientists. This type of system supports a wide range of mathematics including linear algebra, calculus, and algebraic and ordinary differential equations.

A CAS offers a rigorous environment for defining and working with structures such as groups, rings, fields, modules, algebras, schemes, curves, graphs, designs, codes and many others.

They have been extensively used in higher education.

The main features of a CAS include:

  • Numerical Computations: The software can determine numerical approximations of solutions, derivatives, integrals, differential equations, and more, letting you solve, manipulate, and plot functions without needing to generate numeric data by hand. Problems that cannot be solved explicitly can often be solved numerically, and often a numerical answer is all that is needed.
  • Data Analysis: Having data is not sufficient; we need to extract useful information from it. Many algorithms designed for data analysis involve far too much work to be done by manual computation. A CAS puts these algorithms in one place and offers an environment where they are easy to apply.
  • Data Visualization: A CAS can graph 2D and 3D functions in a variety of ways, and can also graph vector fields and solutions to differential equations.
  • Symbolic Computations: Most CAS’s can perform symbolic manipulation of expressions: reducing, expanding, simplifying, taking derivatives and antiderivatives, and so on. Unlike numerical computations, which can exhibit floating-point errors, symbolic computations are exact. They can therefore provide the exact answer to an equation (as opposed to a decimal approximation), and they can express results in terms of a wide variety of previously defined functions.

A CAS automates tedious and sometimes difficult algebraic manipulation tasks. The principal difference between a CAS and a traditional calculator is the ability to deal with equations symbolically rather than numerically.
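
As a brief, hedged taste of that symbolic-versus-numeric distinction, here is a short sketch using SymPy, one of the systems listed in the chart below (it assumes only that the sympy package is installed):

# Exact symbolic results versus a numeric approximation, using SymPy.
from sympy import symbols, integrate, solve, sqrt, N

x = symbols("x")
print(integrate(x**2, x))    # x**3/3                -> exact antiderivative
print(solve(x**2 - 2, x))    # [-sqrt(2), sqrt(2)]   -> exact roots, no rounding
print(N(sqrt(2), 20))        # 1.4142135623730950488 -> 20-digit numeric approximation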

The chart below offers our rating for each system. Some of the software is very specialized, designed to fill a particular niche, which makes comparisons difficult.

Computer Algebra Systems Chart

To provide an insight into the quality of the software that is available, we have compiled a list of 13 impressive algebra systems. There are general-purpose systems as well as specialist software solutions. All of them are open source software.

Let’s explore the 13 algebra systems at hand. For each application we have compiled its own portal page, with a full description, an in-depth analysis of its features, screenshots, and links to relevant resources.

Computer Algebra Systems
  • Maxima – System for the manipulation of symbolic and numerical expressions
  • PARI/GP – Widely used algebra system designed for fast computations in number theory
  • SymPy – Python library for symbolic mathematics
  • Scilab – Numerical computational package
  • SageMath – Open source alternative to Magma, Maple, Mathematica and Matlab
  • Octave – Powerful programming language with built-in plotting and visualization tools
  • Axiom – General purpose Computer Algebra system
  • SINGULAR – Computer Algebra System for polynomial computations
  • GAP – System for computational discrete algebra
  • CoCoA – System for doing computations in commutative algebra
  • Cadabra – Symbolic computer algebra system for field theory problems
  • Macaulay2 – Software system for research in algebraic geometry
  • FriCAS – Fork of Axiom

Source

Machine Learning for Operations | Linux.com

Managing infrastructure is a complex problem with a massive amount of signals and many actions that can be taken in response; that’s the classic definition of a situation where machine learning can help.

IT and operations is a natural home for machine learning and data science. According to Vivek Bhalla, until recently a Gartner research director covering AIOps and now director of product management at Moogsoft, if there isn’t a data science team in your organization, the IT team will often become the “center of excellence”.

By 2022, Gartner predicts, 40 percent of all large enterprises will use machine learning to support or even partly replace monitoring, service desk and automation processes. Today, that is only just starting to happen, and in much smaller numbers.

In a recent Gartner survey, the most popular uses of AI in IT and operations were analyzing big data (18 percent) and chatbots for IT service management: 15 percent are already using chatbots and a further 30 percent plan to do so by the end of 2019.

Read more at The New Stack

Source

5 key differences between MySQL and TiDB

As businesses adopt cloud-native architectures, conversations will naturally lead to what we can do to make the database horizontally scalable. The answer will likely be to take a closer look at TiDB.

TiDB is an open source NewSQL database released under the Apache 2.0 License. Because it speaks the MySQL protocol, your existing applications will be able to connect to it using any MySQL connector, and most SQL functionality remains identical (joins, subqueries, transactions, etc.).
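
To illustrate that compatibility, here is a hedged sketch (it assumes a local TiDB instance listening on its default SQL port, 4000, with the default root user and an existing test database, plus the PyMySQL package): an ordinary MySQL client library connects without any TiDB-specific code.

# Connect to TiDB with a plain MySQL connector (PyMySQL). The host, port,
# user, and database below are assumptions for a default local test setup.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", password="", database="test")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")   # reports a MySQL-compatible version string
        print(cur.fetchone()[0])
        cur.execute("SELECT 1 + 1")       # ordinary SQL works just as it does on MySQL
        print(cur.fetchone()[0])
finally:
    conn.close()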

Look under the covers, however, and there are differences. If your architecture is based on MySQL with Read Replicas, you’ll see things work a little bit differently with TiDB. In this post, I’ll go through the top five key differences I’ve found between TiDB and MySQL.

1. TiDB natively distributes query execution and storage

With MySQL, it is common to scale-out via replication. Typically you will have one MySQL master with many slaves, each with a complete copy of the data. Using either application logic or technology like ProxySQL, queries are routed to the appropriate server (offloading queries from the master to slaves whenever it is safe to do so).

Scale-out replication works very well for read-heavy workloads, as the query execution can be divided between replication slaves. However, it becomes a bottleneck for write-heavy workloads, since each replica must have a full copy of the data. Another way to look at this is that MySQL Replication scales out SQL processing, but it does not scale out the storage. (By the way, this is true for traditional replication as well as newer solutions such as Galera Cluster and Group Replication.)

TiDB works a little bit differently:

  • Query execution is handled via a layer of TiDB servers. Scaling out SQL processing is possible by adding new TiDB servers, which is very easy to do using Kubernetes ReplicaSets. This works because TiDB servers are stateless; the TiKV storage layer is responsible for all of the data persistence.
  • The data for tables is automatically sharded into small chunks and distributed among TiKV servers. Three copies of each data region (the TiKV name for a shard) are kept in the TiKV cluster, but no TiKV server requires a full copy of the data. To use MySQL terminology: Each TiKV server is both a master and a slave at the same time, since for some data regions it will contain the primary copy, and for others, it will be secondary.
  • TiDB supports queries across data regions or, in MySQL terminology, cross-shard queries. The metadata about where the different regions are located is maintained by the Placement Driver, the management server component of any TiDB Cluster. All operations are fully ACID compliant, and an operation that modifies data across two regions uses a two-phase commit.

For MySQL users learning TiDB, a simpler explanation is the TiDB servers are like an intelligent proxy that translates SQL into batched key-value requests to be sent to TiKV. TiKV servers store your tables with range-based partitioning. The ranges automatically balance to keep each partition at 96MB (by default, but configurable), and each range can be stored on a different TiKV server. The Placement Driver server keeps track of which ranges are located where and automatically rebalances a range if it becomes too large or too hot.
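
To picture how that routing works, here is a deliberately toy sketch (illustrative Python only, not TiDB code; the keys, boundaries, and server names are made up): a sorted key space is split into ranges, each range is owned by a server, and a lookup consults the boundary metadata to find the owner, much as a real cluster consults the Placement Driver’s metadata.

# Toy illustration of range-based routing; not TiDB code.
import bisect

region_start_keys = ["a", "h", "p"]                           # each region covers [start, next start)
region_owner = {"a": "tikv-1", "h": "tikv-2", "p": "tikv-3"}

def region_for(key):
    # Find the rightmost region whose start key is <= the lookup key.
    idx = bisect.bisect_right(region_start_keys, key) - 1
    return region_start_keys[max(idx, 0)]

for key in ["apple", "melon", "zebra"]:
    start = region_for(key)
    print(f"{key!r} -> region starting at {start!r} on {region_owner[start]}")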

This design has several advantages over scale-out replication:

  • It independently scales the SQL Processing and Data Storage tiers. For many workloads, you will hit one bottleneck before the other.
  • It incrementally scales by adding nodes (for both SQL and Data Storage).
  • It utilizes hardware better. To scale out MySQL to one master and four replicas, you would have five copies of the data. TiDB would use only three replicas, with hotspots automatically rebalanced via the Placement Driver.

2. TiDB’s storage engine is RocksDB

MySQL’s default storage engine has been InnoDB since 2010. Internally, InnoDB uses a B+tree data structure, which is similar to what traditional commercial databases use.

By contrast, TiDB uses RocksDB as the storage engine with TiKV. RocksDB has advantages for large datasets because it can compress data more effectively and insert performance does not degrade when indexes can no longer fit in memory.

Note that both MySQL and TiDB support an API that allows new storage engines to be made available. For example, Percona Server and MariaDB both support RocksDB as an option.

3. TiDB gathers metrics in Prometheus/Grafana

Tracking key metrics is an important part of maintaining database health. MySQL centralizes these fast-changing metrics in Performance Schema. Performance Schema is a set of in-memory tables that can be queried via regular SQL queries.

With TiDB, rather than retaining the metrics inside the server, a strategic choice was made to ship the information to a best-of-breed service. Prometheus+Grafana is a common technology stack among operations teams today, and the included graphs make it easy to create your own or configure thresholds for alarms.
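
For example, the kind of health question you would answer in MySQL by querying Performance Schema can be asked through Prometheus’ HTTP API instead. This is a hedged sketch: it assumes a Prometheus server on localhost:9090 that is already scraping your targets, plus the requests package, and it queries Prometheus’ built-in up metric, which reports whether each scrape target is reachable.

# Query Prometheus' HTTP API for the built-in "up" metric.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "up"},      # 1 means the scrape target is reachable, 0 means it is not
    timeout=5,
)
for result in resp.json()["data"]["result"]:
    labels, (timestamp, value) = result["metric"], result["value"]
    print(labels.get("job"), labels.get("instance"), value)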

4. TiDB handles DDL significantly better

If we ignore for a second that not all data definition language (DDL) changes in MySQL are online, a larger challenge when running a distributed MySQL system is externalizing schema changes on all nodes at the same time. Think about a scenario where you have 10 shards and add a column, but each shard takes a different length of time to complete the modification. This challenge still exists without sharding, since replicas will process DDL after a master.

TiDB implements online DDL using the protocol introduced by the Google F1 paper. In short, DDL changes are broken up into smaller transition stages so they can prevent data corruption scenarios, and the system tolerates an individual node being behind up to one DDL version at a time.

5. TiDB is designed for HTAP workloads

The MySQL team has traditionally focused its attention on optimizing performance for online transaction processing (OLTP) queries. That is, the MySQL team spends more time making simpler queries perform better instead of making all or complex queries perform better. There is nothing wrong with this approach since many applications only use simple queries.

TiDB is designed to perform well across hybrid transaction/analytical processing (HTAP) queries. This is a major selling point for those who want real-time analytics on their data because it eliminates the need for batch loads between their MySQL database and an analytics database.

Conclusion

These are my top five observations based on 15 years in the MySQL world and coming to TiDB. While many of them refer to internal differences, I recommend checking out the TiDB documentation on MySQL Compatibility. It describes some of the finer points about any differences that may affect your applications.

Source

deepin 15.8 Linux distribution available for download — replace Windows 10 now!

As more and more people lately wake up to the fact that Windows 10 is a giant turd, computer users are exploring alternatives, such as Linux-based operating systems. First impressions can be everything, so when searching for a distribution, it is important that beginners aren’t scared off by bewildering installers or ugly and confusing interfaces.

Linux “n00bs” often opt for Ubuntu, and while that is a good choice, there are far more pretty and intuitive options these days. One such operating system that I recommend often is deepin. Why? It is drop-dead gorgeous and easy to use. It is guaranteed to delight the user, and its intuitive interface will certainly impress. Today, the newest version of the excellent Linux distro, deepin 15.8, becomes available for download.

“Compared with deepin 15.7, the ISO size of deepin 15.8 has been reduced by 200MB. The new release is featured with newly designed control center, dock tray and boot theme, as well as improved deepin native applications, hoping to bring users a more beautiful and efficient experience,” say the deepin developers.

The devs further say, “Prior to deepin official release, usually an internal test is implemented by a small number of community users, then we record their feedbacks and fix the bugs. Before this release, we test deepin 15.8 both from system upgrade and ISO installation. Thanks to the members of internal testing team. Their contributions are highly appreciated!”

As is typical with deepin, there are many eye candy changes to be found in the new release, including enhancements to the dock. The grub menu is now prettier, and the file manager has improved icons for the dark theme. It is not all about the superficial, however, as there is now an option for full disk encryption when installing the operating system — a very welcome addition.

The deepin developers share additional bug fixes and improvements below.

dde-session-ui

  • Optimized background drawing;
  • Optimized dual screen display;
  • Optimized the login process;
  • Optimized the notification animation;
  • Fixed the error message when switching to multi-user while verifying the password;
  • Fixed user login failure;
  • Fixed the setting failure of user’s keyboard layout;
  • Added the verification dialog for network password.

dde-dock

  • Fixed the identification error of connected network;
  • Fixed the high CPU usage of network when hotspot was enabled;
  • Fixed the issue that the network connecting animation did not disappear correctly;
  • Supported dragging and dropping any desktop file to the dock;
  • Recognized whether the preview window can be closed or not;
  • Supported transparency settings (set in Control Center);
  • Supported the new dock protocol (SNI);
  • Added “Show Desktop” button in efficient mode;
  • Redesigned the tray area in fashion mode;
  • Removed hot corner presets which can be customized by users.

Deepin Image Viewer

  • Removed the picture management function;
  • Fixed the distortion of high-resolution pictures when zoomed out.

Deepin Graphics Driver Manager

  • Fixed the identification error of Bumblebee solution;
  • Fixed the interface scaling problem on HiDPI screen;
  • Used glvnd series of drivers for PRIME solution;
  • Optimized error handling.

Ready to download deepin 15.8 and possibly replace Windows 10 with it? You can grab the ISO here. After you try it, please head to the comments and tell me what you think of the operating system — I suspect you will be pleasantly surprised.

Source

An Introduction to Udev: The Linux Subsystem for Managing Device Events | Linux.com

Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it’s the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mice, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it’s well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in.

This article teaches you how to create a udev script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive.
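
The article builds this up with udev rule files and shell scripts (see “A basic script” below). As a complementary, hedged sketch, and not the article’s approach, the same device events can also be watched from user space with the pyudev Python bindings (an assumption here is that pyudev is installed):

# Watch udev "add" events for block-device partitions (e.g. a thumb drive
# being plugged in) using pyudev. Run it, plug in a drive, and watch it print.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="block", device_type="partition")

for device in iter(monitor.poll, None):      # blocks, yielding one event at a time
    if device.action == "add":
        print("partition appeared:", device.device_node)
        # a real script might mount the device and start a backup here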

A basic script

The best way to work with udev is in small chunks. Don’t write the entire script upfront…

Read more at OpenSource.com

Source

Red Hat Enterprise Linux 8 Beta [LWN.net]

[Posted November 15, 2018 by ris]

Red Hat has announced the release of RHEL 8 Beta. “Red Hat Enterprise Linux 8 Beta introduces the concept of Application Streams to deliver userspace packages more simply and with greater flexibility. Userspace components can now update more quickly than core operating system packages and without having to wait for the next major version of the operating system. Multiple versions of the same package, for example, an interpreted language or a database, can also be made available for installation via an application stream. This helps to deliver greater agility and user-customized versions of Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments.”

Source

Use of the grep Command in Linux

What is grep?

The grep utility that we will be getting a hold of today is a Unix tool that belongs to the same family as the egrep and fgrep utilities. These are all Unix tools designed for performing repetitive search tasks on your files and text. You can search file names and file contents for useful information by specifying particular search criteria through the grep command.

They say GREP stands for Global Regular Expression Print, but where does the command ‘grep’ originate? It derives from a command in the very simple and venerable Unix text editor named ed. This is how the ed command goes:

g/re/p

The purpose of that command is pretty much what we mean by searching with grep: it fetches all the lines in a file matching a certain text pattern.

Let us explore the grep command some more. In this article, we will explain the installation of the grep utility and present some examples through which you can learn exactly how and in which scenario you can use it.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.

Install grep

Although the grep utility comes by default with most Linux systems, if you do not have it installed on your system, here is the procedure:

Open your Ubuntu Terminal either through the Dash or the Ctrl+Alt+T shortcut. Then enter the following command as root in order to install grep through apt-get:

$ sudo apt-get install grep

Install grep command

Enter y when you are prompted with a y/n option during the installation procedure. After that, the grep utility will be installed on your system.

You can verify the installation by checking the grep version through the following command:

$ grep --version

Check grep command version

Use of the grep Command with Examples

The grep command can be best explained by presenting some scenarios where it can be made use of. Here are a few examples:

Search for Files

If you want to search for a filename that contains a specific keyword, you can filter your file list through the grep command as follows:

Syntax:

$ ls -l | grep -i "searchword"

Examples:

$ ls -l | grep -i sample

This command will list all the files in the current directory whose names contain the word “sample”.

Search for files with grep

Search for a String in a File

You can fetch a sentence from a file that contains a specific string of text through the grep command.

Syntax:

$ grep "string" filename

Example:

$ grep "sample file" sampleFile.txt

Search for text in a file with grep

My sample file sampleFile.txt contains a sentence with the string “sample file”, as you can see in the above output. The keyword and string appear in color in the search results.

Search for a String in More Than One File

In case you want to search for sentences containing your text string in all files of the same type, the grep command is at your service.

Syntax 1:

$ grep "string" filenameKeyword*

Syntax 2:

$ grep "string" *.extension

Example 1:

$ grep "sample file" sample*

Search for a String in More Than One File

This command will fetch all the sentences containing the string “sample file” from all the files whose names contain the keyword “sample”.

Example 2:

$ grep "sample file" *.txt

Search for a String in More Than One File - Example 2

This command will fetch all the sentences containing the string “sample file” from all the files with .txt extension.

Search for a String in a File Without Taking into Account the Case of the String

In the above examples, my text string was luckily in the same case as the text in my sample files. If I had entered the following command, my search result would be empty because the text in my file does not contain the capitalized word “Sample”:

$ grep "Sample file" *.txt

Search with case sensitive string

Let us tell grep to ignore the case of the search string and print the search results based on the string through the -i option.

Syntax:

$ grep -i "string" filename

Example:

$ grep -i "Sample file" *.txt

Case insensitive search with grep command

This command will fetch all the sentences containing the string “sample file” from all the files with .txt extension. This will not take into account whether the search string was in upper or lower case.

Search on the basis of a regular expression

Through the grep command, you can specify a regular expression with a start and an end keyword. The output will be the sentence containing the entire expression between your specified starting and ending keywords. This feature is very powerful, as you do not need to write out the entire expression in the search command.

Syntax:

$ grep "startingKeyword.*endingKeyword" filename

Example:

$ grep "starting.*ending" sampleFile.txt

Use regular expressions in grep

This command will print the sentence containing the expression (everything starting at my starting keyword and ending at my ending keyword) from the file that I specified in the grep command.
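
If you want to see what such a pattern actually matches, the same regular expression can be tried outside grep too. Here is a small, hedged sketch using Python’s re module with a made-up sample line:

# The same "starting.*ending" pattern, tested on a hypothetical line of text.
import re

line = "some text starting here with words in between and ending there"
match = re.search(r"starting.*ending", line)
if match:
    print(match.group(0))    # -> starting here with words in between and ending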

Display a Specified Number of Lines After/Before the Search String

You can use the grep command to print N number of lines before/after a search string from a file. The search result also includes the line of text containing the search string.

The syntax for N number of lines after the key string:

$ grep -A <N> "string" filename

Example:

$ grep -A 3 -i "samplestring" sampleFile.txt

This is what my sample text file looks like:

sample text file

And this is what the output of the command looks like:

It displays 3 lines, including the one containing the searched string, from the file I specified in the grep command.

The syntax for N number of lines before the key string:

$ grep -B <N> "string" filename

You can also search for N number of lines ‘around’ a text string. That means N number of lines before and N after the text string.

The syntax for N number of lines around the key string:

$ grep -C <N> "string" filename

Through the simple examples described in this article, you can get a grip on the grep command. You can then use it to filter search results, whether those are file names or the contents of files. This saves a lot of the time that would otherwise be wasted skimming through entire listings and files.

Source

Adding Linux To A PDP-11

The UNIBUS architecture for DEC’s PDPs and Vaxxen was a stroke of genius. If you wanted more memory in your minicomputer, just add another card. Need a drive? Plug it into the backplane. Of course, with all those weird cards, these old UNIBUS PDPs are hard to keep running. The UniBone is the solution to this problem. It puts Linux on a UNIBUS bridge, allowing this card to serve as a memory emulator, a test console, a disk emulator, or any other hardware you can think of.

The key to this build is the BeagleBone, everyone’s second-favorite single-board computer, which has one feature the other one doesn’t: PRUs, or programmable real-time units, which allow you to toggle a lot of pins very, very fast. We’ve seen the BeagleBone used as Linux in a terminal, as the rest of the computer for an old PDP-10 front panel, and as the front end for a PDP-11/03.

In this build, the Beaglebone’s PRU takes care of interfacing to the UNIBUS backplane, sending everything to a device emulator running as an application. The UniBone can be configured as memory or something boring, but one of these can emulate four RL02 drives, giving a PDP-11 an amazing forty megabytes of storage. The real killer app of this implementation is giving these emulated drives a full complement of glowing buttons for load, ready, fault, and write protect, just like the front of a real RL02 drive. This panel is controlled over the I2C bus on the Beaglebone, and it’s a work of art. Of course, emulating the drive means you can’t use it as the world’s largest thumb drive, but that’s a small price to pay for saving these old computers.

Source

Wow! Ubuntu 18.04 LTS is getting a 10-Year Support (Instead of 5)

Last updated November 16, 2018

The long-term support (LTS) releases of Ubuntu used to get support for five years. This is changing now: Ubuntu 18.04 will be supported for ten years. Other LTS releases might also get extended support.

Ubuntu’s founder Mark Shuttleworth announced this news in a keynote at OpenStack Summit in Berlin.

I’m delighted to announce that Ubuntu 18.04 will be supported for a full 10 years.

Ubuntu 18.04 will get 10 years support

A Move to Lead the Internet of Things (IoT)

We are living in a ‘connected world’. Smart devices are connected to the internet everywhere, and they are not limited to just smartphones: toys, cameras, TVs, refrigerators, microwaves, weighing scales, electric bulbs and what not.

Collectively, they are called the Internet of Things (IoT), and Ubuntu is focusing heavily on it.

The 10-year support announcement for Ubuntu 18.04 is driven by the needs of the IoT market.

…in some of industries like financial services and telecommunications but also from IoT where manufacturing lines for example are being deployed that will be in production for at least a decade.

Ubuntu 16.04, scheduled to reach its end of life in April 2021, will also be given a longer support life span.

What is not clear to me at this moment is whether the extended support is free of cost and, if it is, whether it will be available to all users, including desktop users.

Ubuntu has an Extended Security Maintenance (ESM) option for its corporate customers. With ESM, the customers get security fixes for the kernel and essential packages for a few more years even after the end of life of a certain LTS release.

Of course, ESM is a paid feature and it is one of the many ways Canonical, the company behind Ubuntu, generates revenue.

At the moment, it is not clear if the ten-year support is for everyone or if it will be a paid service under Extended Security Maintenance. I have contacted Ubuntu for clarification and I’ll update this article if I get an answer.

Ubuntu is not for sale…yet

After IBM bought Red Hat for $34 billion, people have started wondering if Ubuntu will be sold to a big player like Microsoft.

Shuttleworth has clarified that he has no plans to sell Ubuntu anytime soon. However, he also said, somewhat ambiguously, that he might consider it if it were a gigantic offer and if he were left in charge of Canonical and Ubuntu to realize his vision.

Source
