IBM Began Buying Red Hat 20 Years Ago

How Big Blue became an open-source company.

News that IBM
is buying Red Hat
is, of course, a significant moment for the
world of free software. It’s further proof, as if any were needed,
that open source has won, and that even the mighty Big Blue must
make its obeisance. Admittedly, the company is not quite the
behemoth it was back in the 20th century, when “nobody
ever got fired for buying IBM”. But it remains a benchmark for
serious, mainstream—and yes, slightly boring—computing. Its
acquisition of Red Hat for the not inconsiderable sum of $34 billion,
therefore, proves that selling free stuff is now regarded as a
completely normal business model, acknowledged by even the most
conservative corporations.

Many interesting analyses have been and will be written about why
IBM bought Red Hat, and what it means for open source, Red Hat,
Ubuntu, cloud computing, IBM, Microsoft and Amazon, amongst other
things. But one aspect of the deal people may have missed is
that in an important sense, IBM actually began buying Red Hat 20
years ago. After all, $34 billion acquisitions do not spring
fully formed out of nowhere. Reaching the point where IBM’s
management agreed it was the right thing to do required a journey.
And, it was a particularly drawn-out and difficult journey, given IBM’s
starting point not just as the embodiment of traditional proprietary
computing, but its very inventor.

Even the longest journey begins with a single step, and for IBM, it
was taken on June 22, 1998. On that day, IBM announced it
would ship the Apache web server with the IBM WebSphere Application
Server, a key component of its WebSphere product family. Moreover,
in an unprecedented move for the company, it would offer “commercial,
enterprise-level support” for that free software.

When I was writing my book Rebel Code: Inside Linux and the
Open Source Revolution in 2000, I
had the good fortune to interview the key IBM employees who made
that happen. The events of two years before still were fresh in
their minds, and they explained to me why they decided to push IBM
toward the bold strategy of adopting free software, which ultimately
led to the company buying Red Hat 20 years later.

One of those people was James Barry, who was brought in to look at IBM’s lineup in
the web server sector. He found a mess there; IBM had around 50
products at the time. During his evaluation of IBM’s strategy, he
realized the central nature of the web server to all the other
products. At that time, IBM’s offering was Internet Connection
Server, later re-branded to Domino Go. The problem was that IBM’s
web server held just 0.2% of the market; 90% of web servers came
from Netscape (the first internet company, best known for its
browser), Microsoft and Apache. Negligible market share meant it
was difficult and expensive to find staff who were trained to use
IBM’s solution. That, in its turn, meant it was hard to sell IBM’s
WebSphere product line.

Barry, therefore, realized that IBM needed to adopt one of the
mainstream web servers. IBM talked about buying Netscape. Had
that happened, the history of open source would have been very
different. As part of IBM, Netscape probably would not have released
its browser code as the free software that became Mozilla. No
Mozilla would have meant no Firefox, with all the knock-on effects
that implies. But for various reasons, the idea of buying Netscape
didn’t work out. Since Microsoft was too expensive to acquire,
that left only one possibility: Apache.

For Barry, coming to that realization was easy. The hard part was
convincing the rest of IBM that it was the right thing to do. He tried
twice, unsuccessfully, to get his proposal adopted. Barry succeeded
on the third occasion, in part because he teamed up with someone
else at IBM who had independently come to the conclusion that Apache
was the way forward for the company.

Shan Yen-Ping was working on IBM’s e-business strategy in 1998 and,
like Barry, realized that the web server was key in this space.
Ditching IBM’s own software in favor of open source was likely to
be a traumatic experience for the company’s engineers, who had
invested so much in their own code. Shan’s idea to request his
senior developers to analyze Apache in detail proved key to winning
their support. Shan says that when they started to dig deep into
the code, they were surprised by the elegance of the architecture.
As engineers, they had to admit that the open-source project was
producing high-quality software. To cement that view, Shan asked
Brian Behlendorf, one of the creators and leaders of the Apache
project, to come in and talk with IBM’s top web server architects.
They too were impressed by him and his team’s work. With the quality
of Apache established, it was easier to win over IBM’s developers
for the move.

Shortly after the announcement that IBM would be adopting Apache
as its web server, the company took another small but key step
toward embracing open source more widely. It involved the Jikes Java compiler that
had been written by two of IBM’s researchers: Philippe Charles and
Dave Shields. After a binary version of the program for GNU/Linux
was released in July 1998, Shields started receiving requests for
the source code. For IBM to provide access to the underlying code
was unprecedented, but Shields said he would try to persuade his
bosses that it would be a good move for the company.

A Jikes user suggested he should talk to Brian Behlendorf, who put
him in touch with James Barry. IBM’s recent adoption of Apache
paved the way for Shields’ own efforts to release the company’s
code as open source. Shields wrote his proposal in August 1998,
and it was accepted in September. The hardest part was not convincing
management, but drawing up an open-source license. Shields said
this involved “research attorneys, the attorneys at the software
division who dealt with Java, the trademark attorneys, patents
attorneys, contract attorneys”. Everyone involved was aware that
they were writing IBM’s first open-source license, so getting
it right was vital. In fact, the original Jikes license of December
1998 was later generalized into the IBM Public License in June 1999.
It was a key moment, because it made releasing more IBM code as
open source much easier, smoothing the way for the company’s
continuing march into the world of free software.

Barry described IBM as being like “a big elephant: very, very
difficult to move an inch, but if you point the elephant toward the
right direction and get it moving, it’s also very difficult to stop
it.” The final nudge that set IBM moving inexorably toward the
embrace of open source occurred on January 10, 2000, when the company
announced that it would make all of its server platforms “Linux-friendly”,
including the S/390 mainframe, the AS/400 minicomputer and the
RS/6000 workstation. IBM was supporting GNU/Linux across its entire
hardware range—a massive vote of confidence in freely available
software written by a distributed community of coders.

The man who was appointed at the time as what amounted to a Linux
Tsar for the company, Irving Wladawsky-Berger, said that there were
three main strands to
that historic decision. One was simply that GNU/Linux was a platform
with a significant market share in the UNIX sector. Another was
the early use of GNU/Linux by the supercomputing community—something
that eventually led to every single one of
the world’s top 500 supercomputers
running some form of Linux
today.

The third strand of thinking within IBM is perhaps the most
interesting. Wladawsky-Berger pointed out how the rise of TCP/IP
as the de facto standard for networking had made interconnection
easy, and powered the rise of the internet and its astonishing
expansion. People within IBM realized that GNU/Linux could do the
same for application development. As he told me back in 2000:

The whole notion separating application development
from the underlying deployment platform has been a Holy Grail of
the industry because it would all of the sudden unshackle the
application developers from worrying about all that plumbing. I
think with Linux we now have the best opportunity to do that. The
fact that it’s not owned by any one company, and that it’s open
source, is a huge part of what enables us to do that. If the answer
had been, well, IBM has invented a new operating system, let’s get
everybody in the world to adopt it, you can imagine how far that
would go with our competitors.

Far from inventing a “new operating system”, with its purchase of
Red Hat, IBM has now fully embraced the only one that matters any more—GNU/Linux. In doing so, it confirms Wladawsky-Berger’s prescient
analysis and completes that fascinating journey the company began
all those years ago.

Source

7 CI/CD tools for sysadmins

An easy guide to the top open source continuous integration, continuous delivery, and continuous deployment tools.

Continuous integration, continuous delivery, and continuous deployment (CI/CD) have all existed in the developer community for many years. Some organizations have involved their operations counterparts, but many haven’t. For most organizations, it’s imperative for their operations teams to become just as familiar with CI/CD tools and practices as their development compatriots are.

CI/CD practices apply equally to infrastructure and third-party applications and to internally developed applications. Also, although there are many different tools, they all use similar models. And possibly most importantly, leading your company into this new practice will put you in a strong position, and you’ll be a beacon for others to follow.

Some organizations have been using CI/CD practices on infrastructure, with tools like Ansible, Chef, or Puppet, for several years. Other tools, like Test Kitchen, allow tests to be performed on infrastructure that will eventually host applications. In fact, those tests can even deploy the application into a production-like environment and execute application-level tests with production loads in more advanced configurations. However, just getting to the point of being able to test the infrastructure individually is a huge feat. Terraform can also use Test Kitchen for even more ephemeral and idempotent infrastructure configurations than some of the original configuration-management tools. Add in Linux containers and Kubernetes, and you can now test full infrastructure and application deployments with prod-like specs and resources that come and go in hours rather than months or years. Everything is wiped out before being deployed and tested again.

However, you can also focus on getting your network configurations or database data definition language (DDL) files into version control and start running small CI/CD pipelines on them. Maybe it just checks syntax or semantics or some best practices. Actually, this is how most development pipelines started. Once you get the scaffolding down, it will be easier to build on. You’ll start to find all kinds of use cases for pipelines once you get started.
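
As a rough illustration (the directory name and the checks are hypothetical, not from the article), a first pipeline stage can be nothing more than a small shell script that syntax-checks whatever you just put into version control:

#!/bin/sh
# Fail the stage as soon as any check fails.
set -e
# Parse every shell script in the repo without executing it.
for f in scripts/*.sh; do
    bash -n "$f"
done
echo "All scripts parsed cleanly."

Once a stage like this runs on every merge request, adding more meaningful checks is just a matter of appending commands.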

For example, I regularly write a newsletter within my company, and I maintain it in version control using MJML. I needed to be able to host a web version, and some folks liked being able to get a PDF, so I built a pipeline. Now when I create a new newsletter, I submit it for a merge request in GitLab. This automatically creates an index.html with links to HTML and PDF versions of the newsletter. The HTML and PDF files are also created in the pipeline. None of this is published until someone comes and reviews these artifacts. Then, GitLab Pages publishes the website and I can pull down the HTML to send as a newsletter. In the future, I’ll automatically send the newsletter when the merge request is merged or after a special approval step. This seems simple, but it has saved me a lot of time. This is really at the core of what these tools can do for you. They will save you time.

The key is creating tools to work in the abstract so that they can apply to multiple problems with little change. I should also note that what I created required almost no code except some light HTML templating, some Node.js code to loop through the HTML files, and a bit more to populate the index page with all the HTML pages and PDFs.

Some of this might look a little complex, but most of it was taken from the tutorials of the different tools I’m using. And many developers are happy to work with you on these types of things, as they might also find them useful when they’re done. The links I’ve provided are to a newsletter we plan to start for DevOps KC, and all the code for creating the site comes from the work I did on our internal newsletter.

Many of the tools listed below can offer this type of interaction, but some offer a slightly different model. The emerging model in this space is that of a declarative description of a pipeline in something like YAML with each stage being ephemeral and idempotent. Many of these systems also ensure correct sequencing by creating a directed acyclic graph (DAG) over the different stages of the pipeline.

These stages are often run in Linux containers and can do anything you can do in a container. Some tools, like Spinnaker, focus only on the deployment component and offer some operational features that others don’t normally include. Jenkins has generally kept pipelines in an XML format and most interactions occur within the GUI, but more recent implementations have used a domain specific language (DSL) using Groovy. Further, Jenkins jobs normally execute on nodes with a special Java agent installed and consist of a mix of plugins and pre-installed components.

Jenkins introduced pipelines in its tool, but they were a bit challenging to use and contained several caveats. Recently, the creator of Jenkins decided to move the community toward a couple different initiatives that will hopefully breathe new life into the project—which is the one that really brought CI/CD to the masses. I think its most interesting initiative is creating a Cloud Native Jenkins that can turn a Kubernetes cluster into a Jenkins CI/CD platform.

As you learn more about these tools and start bringing these practices into your company or your operations division, you’ll quickly gain followers. You will increase your own productivity as well as that of others. We all have years of backlog to get to—how much would your co-workers love it if you could give them enough time to start tackling that backlog? Not only that, but your customers will start to see increased application reliability, and your management will see you as a force multiplier. That certainly can’t hurt during your next salary negotiation or when interviewing with all your new skills.

Let’s dig into the tools a bit more. We’ll briefly cover each one and share links to more information.

GitLab CI

GitLab is a fairly new entrant to the CI/CD space, but it’s already achieved the top spot in the Forrester Wave for Continuous Integration Tools. That’s a huge achievement in such a crowded and highly qualified field. What makes GitLab CI so great? It uses a YAML file to describe the entire pipeline. It also has a functionality called Auto DevOps that allows for simpler projects to have a pipeline built automatically with multiple tests built-in. This system uses Herokuish buildpacks to determine the language and how to build the application. Some languages can also manage databases, which is a real game-changer for building new applications and getting them deployed to production from the beginning of the development process. The system has native integrations into Kubernetes and will deploy your application automatically into a Kubernetes cluster using one of several different deployment methodologies, like percentage-based rollouts and blue-green deployments.

In addition to its CI functionality, GitLab offers many complementary features like operations and monitoring with Prometheus deployed automatically with your application; portfolio and project management using GitLab Issues, Epics, and Milestones; security checks built into the pipeline with the results provided as an aggregate across multiple projects; and the ability to edit code right in GitLab using the WebIDE, which can even provide a preview or execute part of a pipeline for faster feedback.

GoCD

GoCD comes from the great minds at Thoughtworks, which is testimony enough for its capabilities and efficiency. To me, GoCD’s main differentiator from the rest of the pack is its Value Stream Map (VSM) feature. In fact, pipelines can be chained together with one pipeline providing the “material” for the next pipeline. This allows for increased independence for different teams with different responsibilities in the deployment process. This may be a useful feature when introducing this type of system in older organizations that intend to keep these teams separate—but having everyone using the same tool will make it easier later to find bottlenecks in the VSM and reorganize the teams or work to increase efficiencies.

It’s incredibly valuable to have a VSM for each product in a company; that GoCD allows this to be described in JSON or YAML in version control and presented visually with all the data around wait times makes this tool even more valuable to an organization trying to understand itself better. Start by installing GoCD and mapping out your process with only manual approval gates. Then have each team use the manual approvals so you can start collecting data on where bottlenecks might exist.

Travis CI

Travis CI was my first experience with a Software as a Service (SaaS) CI system, and it’s pretty awesome. The pipelines are stored as YAML with your source code, and it integrates seamlessly with tools like GitHub. I don’t remember the last time a pipeline failed because of Travis CI or the integration—Travis CI has a very high uptime. Not only can it be used as SaaS, but it also has a version that can be hosted. I haven’t run that version—there were a lot of components, and it looked a bit daunting to install all of it. I’m guessing it would be much easier to deploy it all to Kubernetes with Helm charts provided by Travis CI. Those charts don’t deploy everything yet, but I’m sure it will grow even more in the future. There is also an enterprise version if you don’t want to deal with the hassle.

However, if you’re developing open source code, you can use the SaaS version of Travis CI for free. That is an awesome service provided by an awesome team! This alleviates a lot of overhead and allows you to use a fairly common platform for developing open source code without having to run anything.

Jenkins

Jenkins is the original, the venerable, de facto standard in CI/CD. If you haven’t already, you need to read “Jenkins: Shifting Gears” from Kohsuke, the creator of Jenkins and CTO of CloudBees. It sums up all of my feelings about Jenkins and the community from the last decade. What he describes is something that has been needed for several years, and I’m happy CloudBees is taking the lead on this transformation. Jenkins will be a bit overwhelming to most non-developers and has long been a burden on its administrators. However, these are items they’re aiming to fix.

Jenkins Configuration as Code (JCasC) should help fix the complex configuration issues that have plagued admins for years. This will allow for a zero-touch configuration of Jenkins masters through a YAML file, similar to other CI/CD systems. Jenkins Evergreen aims to make this process even easier by providing predefined Jenkins configurations based on different use cases. These distributions should be easier to maintain and upgrade than the normal Jenkins distribution.

Jenkins 2 introduced native pipeline functionality with two types of pipelines, which I discuss in a LISA17 presentation. Neither is as easy to navigate as YAML when you’re doing something simple, but they’re quite nice for doing more complex tasks.

Jenkins X is the full transformation of Jenkins and will likely be the implementation of Cloud Native Jenkins (or at least the thing most users see when using Cloud Native Jenkins). It will take JCasC and Evergreen and use them at their best natively on Kubernetes. These are exciting times for Jenkins, and I look forward to its innovation and continued leadership in this space.

Concourse CI

I was first introduced to Concourse through folks at Pivotal Labs when it was an early beta version—there weren’t many tools like it at the time. The system is made of microservices, and each job runs within a container. One of its most useful features that other tools don’t have is the ability to run a job from your local system with your local changes. This means you can develop locally (assuming you have a connection to the Concourse server) and run your builds just as they’ll run in the real build pipeline. Also, you can rerun failed builds from your local system and inject specific changes to test your fixes.
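
As a rough sketch (the target name, task file, and input name are placeholders, not from the article), running a task from your working copy with Concourse's fly CLI looks something like this:

fly -t example execute -c ci/task.yml -i repo=.

Here -c points at the task definition and -i maps the task's declared input to your local checkout, so the build runs against your uncommitted changes.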

Concourse also has a simple extension system that relies on the fundamental concept of resources. Basically, each new feature you want to provide to your pipeline can be implemented in a Docker image and included as a new resource type in your configuration. This keeps all functionality encapsulated in a single, immutable artifact that can be upgraded and modified independently, and breaking changes don’t necessarily have to break all your builds at the same time.

Spinnaker

Spinnaker comes from Netflix and is more focused on continuous deployment than continuous integration. It can integrate with other tools, including Travis and Jenkins, to kick off test and deployment pipelines. It also has integrations with monitoring tools like Prometheus and Datadog to make decisions about deployments based on metrics provided by these systems. For example, the canary deployment uses a judge concept and the metrics being collected to determine if the latest canary deployment has caused any degradation in pertinent metrics and should be rolled back or if deployment can continue.

A couple of additional, unique features related to deployments cover an area that is often overlooked when discussing continuous deployment, and might even seem antithetical, but is critical to success: Spinnaker helps make continuous deployment a little less continuous. It will prevent a stage from running during certain times to prevent a deployment from occurring during a critical time in the application lifecycle. It can also enforce manual approvals to ensure the release occurs when the business will benefit the most from the change. In fact, the whole point of continuous integration and continuous deployment is to be ready to deploy changes as quickly as the business needs to change.

Screwdriver

Screwdriver is an impressively simple piece of engineering. It uses a microservices approach and relies on tools like Nomad, Kubernetes, and Docker to act as its execution engine. There is a pretty good deployment tutorial for deploying to AWS and Kubernetes, but it could be improved once the in-progress Helm chart is completed.

Screwdriver also uses YAML for its pipeline descriptions and includes a lot of sensible defaults, so there’s less boilerplate configuration for each pipeline. The configuration describes an advanced workflow that can have complex dependencies among jobs. For example, a job can be guaranteed to run after or before another job. Jobs can run in parallel and be joined afterward. You can also use logical operators to run a job, for example, if any of its dependencies are successful or only if all are successful. Even better is that you can specify certain jobs to be triggered from a pull request. Also, dependent jobs won’t run when this occurs, which allows easy segregation of your pipeline for when an artifact should go to production and when it still needs to be reviewed.


This is only a brief description of these CI/CD tools—each has even more cool features and differentiators you can investigate. They are all open source and free to use, so go deploy them and see which one fits your needs best.



Source

Intro to Git and GitHub for Linux – ls /blog

The Git distributed revision control system is a sweet step up from Subversion, CVS, Mercurial, and all those others we’ve tried and made do with. It’s great for distributed development, when you have multiple contributors working on the same project, and it is excellent for safely trying out all kinds of crazy changes. We’re going to use a free Github account for practice so we can jump right in and start doing stuff.

Conceptually, Git is different from other revision control systems. Older RCSes tracked changes to files, which you can see when you poke around in their configuration files. Git’s approach is more like filesystem snapshots, where each commit or saved state is a complete snapshot rather than a file full of diffs. Git is space-efficient because it stores only changed files in each snapshot and links back to the unchanged ones. All changes are checksummed, so you are assured of data integrity and of always being able to reverse changes.

Git is very fast, because your work is all done on your local PC and then pushed to a remote repository. This makes everything you do totally safe, because nothing affects the remote repo until you push changes to it. And even then you have one more failsafe: branches. Git’s branching system is brilliant. Create a branch from your master branch, perform all manner of awful experiments, and then nuke it or push it upstream. When it’s upstream, other contributors can work on it, or you can create a pull request to have it reviewed, and then, after it passes muster, merge it into the master branch.

So what if, after all this caution, it still blows up the master branch? No worries, because you can revert your merge.
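
As a minimal sketch (the commit hash is just a placeholder), reverting a merge that has already landed on master looks like this:

$ git checkout master
$ git log --oneline            # find the hash of the offending merge commit
$ git revert -m 1 abc1234      # -m 1 keeps master's side as the mainline

This creates a new commit that undoes the merge without rewriting history, so it is safe to push.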

Practice on Github

The quickest way to get some good hands-on Git practice is by opening a free Github account. Figure 1 shows my Github testbed, named playground. New Github accounts come with a prefab repo populated by a README file, license, and buttons for quickly creating bug reports, pull requests, Wikis, and other useful features.

Free Github accounts only allow public repositories. This allows anyone to see and download your files. However, no one can make commits unless they have a Github account and you have approved them as a collaborator. If you want a private repo hidden from the world you need a paid membership. Seven bucks a month gives you five private repos, and unlimited public repos with unlimited contributors.

Github kindly provides copy-and-paste URLs for cloning repositories. So you can create a directory on your computer for your repository, and then clone into it:

$ mkdir git-repos
$ cd git-repos
$ git clone https://github.com/AlracWebmaven/playground.git
Cloning into 'playground'...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
Checking connectivity... done.
$ ls playground/
LICENSE  README.md

All the files are copied to your computer, and you can read, edit, and delete them just like any other file. Let’s improve README.md and learn the wonderfulness of Git branching.

Branching

Git branches are gloriously excellent for safely making and testing changes. You can create and destroy them all you want. Let’s make one for editing README.md:

$ cd playground
$ git checkout -b test
Switched to a new branch 'test'

Run git status to see where you are:

$ git status
On branch test
nothing to commit, working directory clean

What branches have you created?

$ git branch
* test
  master

The asterisk indicates which branch you are on. master is your main branch, the one you never want to make any changes to until they have been tested in a branch. Now make some changes to README.md, and then check your status again:

$ git status
On branch test
Changes not staged for commit:
  (use "git add ..." to update what will be committed)
  (use "git checkout -- ..." to discard changes in working directory)
        modified:   README.md
no changes added to commit (use "git add" and/or "git commit -a")

Isn’t that nice? Git tells you what is going on and gives hints. To discard your changes, run

$ git checkout README.md

Or you can delete the whole branch:

$ git checkout master
$ git branch -D test

Or you can have Git track the file:

$ git add README.md
$ git status
On branch test
Changes to be committed:
  (use "git reset HEAD ..." to unstage)
        modified:   README.md

At this stage Git is tracking README.md, and it is available to all of your branches. Git gives you a helpful hint: if you change your mind and don’t want Git to track this file, run git reset HEAD README.md. This, and all Git activity, is tracked in the .git directory in your repository. Everything is in plain text files: files, checksums, which user did what, remote and local repos – everything.

What if you have multiple files to add? You can list each one, for example git add file1 file2 file3, or add all files with git add *.

When you have deleted files, you can use git rm filename, which stages the removal in Git; since the files are already gone from your working directory, nothing more is deleted from your system. If you have a lot of deleted files, stage them all at once with git add -u.
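
For example, assuming a tracked file named oldfile.txt (a made-up name):

$ rm oldfile.txt         # delete the file the ordinary way
$ git rm oldfile.txt     # stage the removal of this one file
$ git add -u             # or stage every deletion and modification at once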

Committing Files

Now let’s commit our changed file. This adds it to our branch and it is no longer available to other branches:

$ git commit README.md
[test 5badf67] changes to readme
 1 file changed, 1 insertion(+)

You’ll be asked to supply a commit message. It is a good practice to make your commit messages detailed and specific, but for now we’re not going to be too fussy. Now your edited file has been committed to the branch test. It has not been merged with master or pushed upstream; it’s just sitting there. This is a good stopping point if you need to go do something else.

What if you have multiple files to commit? You can commit specific files, or all available files:

$ git commit file1 file2
$ git commit -a

How do you know which commits have not yet been pushed upstream, but are still sitting in branches? git status won’t tell you, so use this command:

$ git log --branches --not --remotes
commit 5badf677c55d0c53ca13d9753344a2a71de03199
Author: Carla Schroder 
Date:   Thu Nov 20 10:19:38 2014 -0800
    changes to readme

This lists commits that have not yet been pushed to any remote; when it returns nothing, all commits have been pushed upstream. Now let’s push this commit upstream:

$ git push origin test
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 324 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/AlracWebmaven/playground.git
 * [new branch]      test -> test

You may be asked for your Github login credentials. Git caches them for 15 minutes, and you can change this. This example sets the cache at two hours:

$ git config --global credential.helper 'cache --timeout=7200'

Now go to Github and look at your new branch. Github lists all of your branches, and you can preview your files in the different branches.
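
If you prefer to finish the job from the command line instead of opening a pull request on Github, a minimal sketch of merging test back into master and pushing it upstream looks like this:

$ git checkout master
$ git merge test
$ git push origin master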

Source

Different Ways To Update Linux Kernel For Ubuntu

Source

Download Calculate Linux Desktop KDE 18.12

Calculate Linux Desktop KDE is the main edition of the Calculate Linux operating system, built around the powerful KDE Plasma Workspaces and Applications desktop environment. Calculate Linux is a one-man, open source Linux distribution that has its roots in the complex and exclusivist Gentoo Linux operating system, which is known to perform well on many types of computers and other devices.

Distributed in multiple editions, as Live DVDs

The project provides users with multiple editions, each one designed to be used for a specific task or by a distinct group of people. For example, the two desktop editions can be used by home users as a workstation, or in small and medium businesses as an office workstation.

Calculate Linux Desktop is distributed as two Live DVD ISO images, one for each of the officially supported architectures (64-bit and 32-bit). It supports many languages, including Russian and English.

Boot options

The boot menu will allow users to boot the operating system that is currently installed, test the system memory (RAM) for errors, start the live environment with the X11 Window System, or copy the entire ISO image to RAM (which requires at least 4GB of system memory).

Features the KDE desktop environment

The desktop session is carefully designed to provide users with a modern and stylish computing environment, comprised of a single panel placed by default on the upper part of the screen (it can be used to launch applications, switch between virtual workspaces, and interact with the system tray area).

Default applications

Default applications include the Chromium web browser, LibreOffice office suite, Amarok music player and organizer, as well as the digiKam image viewer and editor. In addition to these, there are also many other useful utilities and tools.

If you don’t like the KDE desktop environment or you find it “too heavy” for your computer, Calculate Linux also provides a special edition dedicated to all fans of the Xfce window manager.

Source

The Many New Features & Improvements Of The Linux 5.0 Kernel

Linus Torvalds just released Linux 5.0-rc1, which over the past two weeks had been known as Linux 4.21. While the bump was rather arbitrary, as opposed to being driven by a major change that necessitated a new major version, this next version of the Linux kernel does come with some exciting changes and new features (of course, our Twitter followers already knew Linus was thinking of re-branding 4.21 as 5.0). Here is our original feature overview of the new material to be found in this kernel.

The merge window is now closed, so we have a firm look at what’s new for this next kernel version. As is standard practice, there will be seven to eight weekly release candidates before Linux 5.0 is officially ready for release around the end of February or early March. The new features for Linux 5.0 listed below are the highlights from our close monitoring of the Linux kernel mailing list and Git repositories over the holidays. There are lots of CPU and GPU improvements as usual, the long-awaited AMD FreeSync display support, mainline support for the Raspberry Pi touchscreen, a new console font for HiDPI/retina displays, initial open-source NVIDIA RTX Turing display support, Adiantum data encryption support, Logitech high-resolution scrolling support, the finally merged I3C subsystem, and a lot more to get excited about in the first kernel cycle of 2019.

Direct Rendering Manager (DRM) Drivers / Graphics

– AMD FreeSync support is easily the biggest AMDGPU feature we’ve seen in a while. The Linux 5.0 kernel paired with Mesa 19.0 can now yield working support for FreeSync / VESA Adaptive-Sync over DisplayPort connections! This was one of the few missing features from the open-source AMD Linux driver.

– Support for a new VegaM and other new Vega IDs.

– AMDKFD compute support for Vega 12 and Polaris 12.

– NVIDIA Xavier display support with the Tegra DRM code.

– Continued work on bringing up Intel Icelake Gen11 graphics; the Intel DRM driver also enables DP FEC support.

– Initial support for NVIDIA Turing GPUs, but only kernel mode-setting so far and no hardware acceleration on Nouveau.

– Media driver updates, including ASpeed video engine support.

Processors

– Initial support for the NXP i.MX8 SoCs as well as the MX8 reference board.

– The Cortex-A5-based RDA Micro RDA8810PL is another new ARM SoC now supported by the mainline kernel.

– Updates to the Chinese 32-bit C-SKY CPU architecture code.

– NVIDIA Tegra suspend-and-resume for the Tegra X2 and Xavier SoCs.

– Support for the Allwinner T3, Qualcomm QCS404, and NXP Layerscape LX2160A.

– Intel VT-d Scalable Mode support for Scalable I/O Virtualization.

– New Intel Stratix 10 FPGA drivers.

– Updates to the Andes NDS32 CPU architecture.

– NXP PowerPC processors finally mitigated for Spectre V2.

– ARM big.LITTLE Energy Aware Scheduling has made it into the kernel for conserving power and some minor possible performance benefits.

– AArch64 pointer authentication support.

– AMD Zen 2 temperature monitoring support. There is also temperature support for the Hygon Dhyana Chinese-made AMD CPUs.

– POWER On-Chip Controller driver support.

– Many updates for MIPS CPUs including prepping for nanoMIPS.

– Improved AMD CPU microcode handling.

– AMD Always-On STIBP Preferred Mode.

– AMD Platform QoS support for next-generation EPYC processors.


Source

Linux Today – Linux 5.0 rc1

Jan 07, 2019, 06:00

So this was a fairly unusual merge window with the holidays, and as a
result I’m not even going to complain about the pull requests that
ended up coming in late. It all mostly worked out fine, I think. And a
lot of people got their pull requests in early, and hopefully had a
calm holiday season. Thanks again to everybody.

The numbering change is not indicative of anything special. If you
want to have an official reason, it’s that I ran out of fingers and
toes to count on, so 4.21 became 5.0. There’s no nice git object
numerology this time (we’re _about_ 6.5M objects in the git repo), and
there isn’t any major particular feature that made for the release
numbering either. Of course, depending on your particular interests,
some people might well find a feature _they_ like so much that they
think it can do as a reason for incrementing the major number.

So go wild. Make up your own reason for why it’s 5.0.

Because as usual, there’s a lot of changes in there. Not because this
merge window was particularly big – but even our smaller merge windows
aren’t exactly small. It’s a very solid and average merge window with
just under 11k commits (or about 11.5k if you count merges).

The stats look fairly normal. About 50% is drivers, 20% is
architecture updates, 10% is tooling, and the remaining 20% is all
over (documentation, networking, filesystems, header file updates,
core kernel code..). Nothing particular stands out, although I do like
seeing how some ancient drivers are getting put out to pasture
(*cought*isdn*cough*).

As usual even the shortlog is much too big to post, so the summary
below is only a list of the pull requests I merged.

Go test. Kick the tires. Be the first kid on your block running a 5.0
pre-release kernel.

Linus


Source

Aliases: To Protect and Serve

Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our first article in the series, and it should be quite clear how aliases are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let’s see several other cases in which aliases come in handy.

Aliases as Shortcuts

One of the most beautiful things about Linux’s shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. All right, maybe beauty is in the eye of the beholder, but let’s agree that this feature *is* practical.

The downside is that you often come up with recipes that are hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year’s cleaning. Your first step may be to look for stuff to get rid of in your home directory. One criterion you could apply is to look for stuff you don’t use anymore. ls can help with that:

ls -lct

The instruction above shows the details of each file and directory (-l) and uses the time of each item’s last status change rather than its modification time (-c). It then orders the list from most recently changed to least recently changed (-t).

Is this hard to remember? You probably don’t use the -c and -t options every day, so perhaps. In any case, defining an alias like

alias lt='ls -lct'

will make it easier.

Then again, you may want to have the list show the oldest files first:

alias lo='lt -F | tac'

There are a few interesting things going on here. First, we are using an alias (lt) to create another alias — which is perfectly okay. Second, we are passing a new parameter to lt (which, in turn, gets passed to ls through the definition of the lt alias).

The -F option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an *), files from directories (that end in /), and all of the above from links, symbolic and otherwise (that end in an @ symbol). The -F option is a throwback to the days when terminals were monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from lt through to tac, you lose the colors from ls.

The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (|).

In this case, you are piping the output from lt -F into tac. tac’s name is a bit of a joke. You may have heard of cat, the instruction that was nominally created to concatenate files together, but that in practice is used to print out the contents of a file to the terminal. tac does the same, but prints out the contents it receives in reverse order. Get it? cat and tac. Developers, you so funny!
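
A trivial way to see what tac does on its own (just an illustration, not tied to the aliases above):

printf 'one\ntwo\nthree\n' | tac

This prints three, then two, then one — the same lines in reverse order.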

The thing is both cat and tac can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically.

So… after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness.

The final thing you have to bear in mind is that, while lt will work on the current directory and any other directory…

# This will work:
lt
# And so will this:
lt /some/other/directory

… lo will only work with the current directory:

# This will work:
lo
# But this won’t:
lo /some/other/directory

This is because Bash expands aliases into their components. When you type this:

lt /some/other/directory

Bash REALLY runs this:

ls -lct /some/other/directory

which is a valid Bash command.

However, if you type this:

lo /some/other/directory

Bash tries to run this:

ls -lct -F | tac /some/other/directory

which does not do what you want, mainly because /some/other/directory ends up as an argument to tac rather than ls, and cat and tac don’t do directories.
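
If you want something like lo that also accepts a directory, one workaround (a sketch that goes a step beyond aliases) is to define a small shell function instead, because functions can place their arguments wherever they are needed:

lo () { ls -lctF "$@" | tac; }

With that definition, lo /some/other/directory hands the directory to ls rather than to tac, which is what you wanted all along.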

More Alias Shortcuts

  • alias lll='ls -R' prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory.
  • alias mkdir='mkdir -pv' lets you make directories within directories all in one go. With the base form of mkdir, to make a new directory containing a subdirectory you have to do this:
    mkdir newdir
    mkdir newdir/subdir

    Or this:

    mkdir -p newdir/subdir

    while with the alias you would only have to do this:

    mkdir newdir/subdir

    Your new mkdir will also tell you what it is doing while it is creating new directories.

Aliases as Safeguards

The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran:

rm -rf /

as root, and nuked the whole system. Then there’s the user who decided that:

rm -rf /some/directory/ *

was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the *.

Both things can be avoided with the alias rm='rm -i'. The -i option makes rm ask the user whether that is what they really want to do and gives you a second chance before wreaking havoc in your file system.

The same goes for cp, which can overwrite a file without telling you anything. Create an alias like alias cp='cp -i' and stay safe!
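
To keep these safeguards (or any of the other aliases above) across sessions, append them to your ~/.bashrc and reload it; a minimal sketch:

echo "alias rm='rm -i'" >> ~/.bashrc
echo "alias cp='cp -i'" >> ~/.bashrc
source ~/.bashrc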

Next Time

We are moving more and more into scripting territory. Next time, we’ll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems.

Source

How To Install and Configure ownCloud with Apache on Ubuntu 18.04

ownCloud is an open source, self-hosted file sync and file share platform, similar to Dropbox, Microsoft OneDrive and Google Drive. ownCloud is extensible via apps and has desktop and mobile clients for all major platforms.

In this tutorial we’ll show you how to install and configure ownCloud with Apache on an Ubuntu 18.04 machine.

Prerequisites

You’ll need to be logged in as a user with sudo access to be able to install packages and configure system services.

Step 1: Creating MySQL Database

ownCloud can use a SQLite, Oracle 11g, PostgreSQL or MySQL database to store all its data. In this tutorial, we will use MySQL as the database back-end.

If MySQL or MariaDB is not installed on your Ubuntu server, you can install it by following one of the guides below:
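
If you just want a quick default setup on Ubuntu 18.04, a minimal sketch (the linked guides cover MariaDB and hardening in more detail) is:

sudo apt update
sudo apt install mysql-server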

Start by logging in to the MySQL shell.
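
On a default Ubuntu 18.04 MySQL installation, the root account authenticates through the auth_socket plugin, so the following should open the shell (adjust it if your setup expects a password instead):

sudo mysql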

From inside the mysql console, run the following SQL statement to create a database:

CREATE DATABASE owncloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

Next, create a MySQL user account and grant access to the database:

GRANT ALL ON owncloud.* TO 'owncloudsuser'@'localhost' IDENTIFIED BY 'change-with-strong-password';

Finally, exit the mysql console by typing:

EXIT;

Step 2: Installing PHP and Apache

ownCloud is a PHP application. PHP 7.2, which is the default PHP in Ubuntu 18.04, is fully supported and recommended for ownCloud.

Install Apache and all required PHP extensions using the following command:

sudo apt install apache2 libapache2-mod-php7.2 openssl php-imagick php7.2-common php7.2-curl php7.2-gd php7.2-imap php7.2-intl php7.2-json php7.2-ldap php7.2-mbstring php7.2-mysql php7.2-pgsql php-smbclient php-ssh2 php7.2-sqlite3 php7.2-xml php7.2-zip

Step 3: Configuring Firewall

Assuming you are using UFW to manage your firewall, you’ll need to open HTTP (80) and HTTPS (443) ports. You can do that by enabling the ‘Apache Full’ profile which includes rules for both ports:

sudo ufw allow 'Apache Full'

Step 4: Downloading ownCloud

At the time of writing this article, the latest stable version of ownCloud is version 10.0.10. Before continuing with the next step visit the ownCloud download page and check if there is a new version of ownCloud available.

Use the following wget command to download the ownCloud zip archive:

wget https://download.owncloud.org/community/owncloud-10.0.10.zip -P /tmp

Once the download is complete, extract the archive to the /var/www directory:

sudo unzip /tmp/owncloud-10.0.10.zip -d /var/www

Set the correct ownership so that the Apache web server can have full access to ownCloud’s files and directories.

sudo chown -R www-data: /var/www/owncloud

Step 5: Configuring Apache

Open your text editor and create the following Apache configuration file.

sudo nano /etc/apache2/conf-available/owncloud.conf

/etc/apache2/conf-available/owncloud.conf

Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /var/www/owncloud
SetEnv HTTP_HOME /var/www/owncloud

</Directory>

Enable the newly added configuration and all required Apache modules with:

sudo a2enconf owncloud
sudo a2enmod rewrite
sudo a2enmod headers
sudo a2enmod env
sudo a2enmod dir
sudo a2enmod mime
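
Before reloading Apache in the next step, you can optionally ask it to validate the configuration (a quick sanity check, not part of the original steps):

sudo apache2ctl configtest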

Activate the changes by reloading the Apache service:

sudo systemctl reload apache2

Step 6: Installing ownCloud

Now that ownCloud is downloaded and all necessary services are configured, open your browser and start the ownCloud installation by visiting your server’s domain name or IP address followed by /owncloud:

http://domain_name_or_ip_address/owncloud

You will be presented with the ownCloud setup page.

Enter your desired admin username and password and the MySQL user and database details you previously created.

Click on the Finish setup button and, once the installation process is completed, you will be redirected to the ownCloud dashboard, logged in as the admin user.

Conclusion

You have learned how to install and configure ownCloud on your Ubuntu 18.04 machine. If you have a domain name associated with your ownCloud server, you can follow this guide and secure your Apache with Let’s Encrypt.

To find more information about how to manage your ownCloud instance visit the ownCloud documentation page.

If you have any question, please leave a comment below.

Source

Welcome » Linux Magazine

It is getting harder to write this column. I used to have a ready supply of topics, with all the outrageous things that were happening to Linux: Microsoft’s disinformation, SCO’s lawsuit, patent licenses, the Novell/Microsoft pact – big things that threatened the very existence of the Linux community.

Dear Reader,

It is getting harder to write this column. I used to have a ready supply of topics, with all the outrageous things that were happening to Linux: Microsoft’s disinformation, SCO’s lawsuit, patent licenses, the Novell/Microsoft pact – big things that threatened the very existence of the Linux community.

I was trying to think up a new topic today, and it occurred to me that there used to be way more in the news on an average day that could rile up a Linux guy. That’s the good news, because Linux is in a safer place and is no longer faced with the threat of imminent destruction. Microsoft is playing nice (sort of); SCO has collapsed under the weight of its own imagination deficit. But are we really walking on easy street now? Surely some other threats must be out there? Are there still factors that are threatening the livelihood of the Linux community, and if so, what are they?

That sounded like a good topic for a column, so I resolved to create my own list of the current top threats. One note on this list: Because these threats are not quite as dire as they used to be, they are also a little more arbitrary. This is my list – some of you might see different threats, but either way, the main point is that challenges still exist:

  • Fragmentation – the Linux desktop still isn’t unified, and the recent controversy over the systemd init daemon is a reminder that the community does not always move in the same direction. The beauty of open source is that you can always fork the code, but if too many developers take the code in too many different directions, the project could lose the critical mass necessary to hold the mainstream, becoming a collection of smaller projects, like the BSDs, that will receive less attention from hardware vendors and, ultimately, users.
  • Irrelevance – will the general-purpose computer OS still be a thing in 10 years? Already, people are doing more with their cell phones and tablets. Linux is still running inside Android, but there are so many other things going on inside a smartphone that you can’t exactly just hack on it like you can on a Linux system. Linux would still be running on toasters and washing machines (and on servers – see the next item), but it could recede into the background and be more under the control of hardware and cloud companies, rather than driven by a vibrant, independent community.
  • Cloud computing – According to Free Software Foundation president Richard Stallman, cloud servers “… wrest control from the users even more inexorably than proprietary software” [1]. Software running from within a proprietary portal maximizes the power of the vendor and minimizes the freedom of the user in ways that are antithetical to the spirit of Free Software. Also, the copyleft protection of the GPL, which forces the sharing of source code when changes are made to a program, is triggered when the software is distributed. As many have pointed out, cloud computing doesn’t really distribute the software, so it falls in a gray area that is beyond the protection of Free Software licensing.
  • Re-emergence of a “friendlier” Microsoft – Microsoft is no longer bent on destroying Linux; in fact, they say they “love” Linux. But a little too much love from Microsoft could be a scary thing too. Many in the Linux community distrust Redmond’s motives and wonder if some kind of assimilation might be taking place. The GPL offers some natural defenses against a single company gaining control, but could Microsoft use its cash stores and market clout to take Linux in a direction that the greater community doesn’t want to go, and what would happen if they did?
  • Succession issues – Linux is still run by the same guy who created it 27 years ago. Linus Torvalds is still young and healthy, but he might not want to do this forever. Will Linux survive the handoff to a new generation of leaders?
  • Bad judges and politicians – Linux and open source licensing have survived several tests in the courts over patent law, copyright law, and other intellectual property issues, but the questions are complex and lots of politicians, business leaders, and jurists still don’t exactly get what’s going on with open source. Actually, I sometimes wonder if the open source community totally gets it. (Many voices in the community have called for a loosening of copyright laws to increase the freedom to consume music and movies, but actually, a strong copyright is the foundation on which the GPL is built, so you have to be very careful.) These issues are too vast and intricate to sort out in a one-page intro column, but suffice it to say, a few bad decisions from judges or regulators could bring back questions that we all thought were settled.

That’s my list – at least for now. I’m not saying all this stuff is actually going to happen, but, as with any challenge, the best way to keep these dark threats in abeyance is to be aware and not get too complacent. As the old saying goes “Eternal vigilance is the price of liberty” [2].

Joe Casad, Editor in Chief

Source
