What you need to know about the GPL Cooperation Commitment

Imagine what the world would look like if growth, innovation, and development were free from fear. Innovation without fear is fostered by consistent, predictable, and fair license enforcement. That is what the GPL Cooperation Commitment aims to accomplish.

Last year, I wrote an article about licensing effects on downstream users of open source software. As I was conducting research for that article, it became apparent that license enforcement is infrequent and often unpredictable. In that article, I offered potential solutions for making open source license enforcement consistent and predictable. However, I only considered “traditional” methods (e.g., through the court system or some form of legislative action) that a law student might consider.

In November 2017, Red Hat, IBM, Google, and Facebook proposed the “non-traditional” solution I had not considered: the GPL Cooperation Commitment, which provides for fair and consistent enforcement of the GPL. I believe the GPL Cooperation Commitment is critical for two reasons: First, consistent and fair license enforcement is crucial for growth in the open source community; second, unpredictability is undesirable in the legal community.

Understanding the GPL

To understand the GPL Cooperation Commitment, you must first understand the GPL’s history. GPL is short for GNU General Public License. The GPL is a “copyleft” open source license, meaning that anyone who distributes the software must make the source code available to downstream users. The GPL also prohibits placing additional restrictions on downstream use. These requirements keep individual users from denying freedoms (to use, study, share, and improve the software) to others. Under the GPL, a license to use the code is granted to all downstream users, provided they meet the requirements and conditions of the license. If a licensee does not meet the license’s requirements, they are non-compliant.

Under the second version of the GPL (GPLv2), a license automatically terminates upon any non-compliance, which causes some software developers to shy away from using the GPL. However, the third version of the GPL (GPLv3) added a “cure provision” that gives a 30-day period for a licensee to remediate any GPL violation. If the violation is cured within 30 days following notification of non-compliance, the license is not terminated.

This provision eliminates the fear of termination due to an innocent mistake, thus fostering development and innovation by bringing peace of mind to users and distributors of the software.

What the GPL Cooperation Commitment does

The GPL Cooperation Commitment applies the GPLv3’s cure provisions to GPLv2-licensed software, thereby protecting licensees of GPLv2 code from the automatic termination of their license, consistent with the protections afforded by the GPLv3.

The GPL Cooperation Commitment is important because, while software engineers typically want to do the right thing and maintain compliance, they sometimes misunderstand how to do so. This agreement enables developers to avoid termination when they are non-compliant due to confusion or simple mistakes.

The GPL Cooperation Commitment grew out of a 2017 announcement by the Linux Foundation Technical Advisory Board that the Linux kernel project would adopt the cure provision from GPLv3. With the GPL Cooperation Commitment, many major technology companies and individual developers made the same commitment and expanded it by applying the cure period to all of their software licensed under GPLv2 (and LGPLv2.1), not only to contributions to the Linux kernel.

Broad adoption of the GPL Cooperation Commitment will have a positive impact on the open source community because a significant amount of software is licensed under GPLv2. An increasing number of companies and individuals are expected to adopt the GPL Cooperation Commitment, which will lead to a significant amount of GPLv2 (and LGPLv2.1) code under license terms that promote fair and predictable approaches to license enforcement.

In fact, as of November 2018, more than 40 companies, including industry leaders IBM, Google, Amazon, Microsoft, Tencent, Intel, and Red Hat, have signed onto the GPL Cooperation Commitment and are working collaboratively to create a standard of fair and predictable enforcement within the open source community. The GPL Cooperation Commitment is just one example of how the community comes together to ensure the future of open source.

The GPL Cooperation Commitment tells downstream licensees that you respect their good intentions and that your GPLv2 code is safe for them to use. More information, including about how you can add your name to the commitment, is available on the GPL Cooperation Commitment website.

Source

Ruby in Containers | Linux.com

There was a time when deploying software was an event, even a ceremony, because of how hard it was to keep environments consistent. Teams spent a lot of time making the destination environment run the software exactly as the source environment did, and then prayed that the software would run as well in production as it had in development.

With containers, deployments happen more often because we package an application together with its libraries as a single, portable unit, which helps us maintain consistency and reliability when moving software between environments. For developers, that means better productivity, portability, and ease of scaling.

Because of this portability, containers have become the universal language of the cloud allowing us to move software from one cloud to another without much trouble.

In this article, I will cover two major concepts to keep in mind while working with containers in Ruby: how to create small container images and how to test them.
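As a rough taste of the first concept (this sketch is not from the linked article; the base image, file names, and gem setup are placeholders), a multi-stage Docker build installs gems in one stage and copies only the result into a slim final image:

$ cat > Dockerfile <<'EOF'
# Build stage: install gems where build tools are available
FROM ruby:2.5-alpine AS build
RUN apk add --no-cache build-base
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test

# Final stage: copy only the installed gems and the application code
FROM ruby:2.5-alpine
WORKDIR /app
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY . .
CMD ["ruby", "app.rb"]
EOF
$ docker build -t my-ruby-app .
$ docker run --rm my-ruby-app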

Read more at The New Stack

Source

Red Hat Enterprise Linux 8 makes its debut


Four years on from the release of Red Hat Enterprise Linux 7, open source software company Red Hat Inc. finally announced today that version 8 of its computer server operating system is now in beta.

A lot has changed in the world of Linux during that time, with vastly more workloads running in public clouds and more agile software development practices increasingly becoming the norm. The new RHEL reflects those differences.

Whereas the RHEL 7 release was all about better support for virtual machines and improved Windows interoperability, today’s version gives a nod to the fact that most information technology operations are increasingly all about the cloud and software containers.

The public beta release of RHEL 8 is important because Linux is the most dominant server operating system for both on-premises and cloud infrastructure, Constellation Research Inc. analyst Holger Mueller told SiliconANGLE. And of the companies that sell Linux OS platforms, Red Hat is one of the biggest. Late last month, IBM Corp. said it signed a deal to acquire the company for $34 billion, though the acquisition won’t close until well into next year.

“When a key vendor like RedHat updates its Linux OS, executives pay close attention to the rate of innovation and how much it has future-proofed the platform,” Mueller said. “We expect RedHat to get good grades in both regards, thanks to its focus on changing the underlying platform to receive more granular updates and [the improved] container capabilities.”

As always, Red Hat has made literally hundreds of improvements to its flagship software platform. Still, one stands out from the pack.

RHEL 8 introduces a new concept called Application Streams, which are designed to deliver “userspace packages” (independent software that runs outside of the OS’s kernel) more easily and with greater flexibility.

So userspace packages, which could be the latest version of a programming language, for example, can now be updated without needing to wait for a new version of Red Hat’s operating system to come out. The idea is that this will help enterprises become more agile and customize their IT infrastructure better, without breaking anything along the way.

Application Streams also allow companies to use more than one version of the same userspace package simultaneously. This allows for much greater freedom, as it means developers can work with the latest release of a new database, for example, while production apps keep running the stable release until developers are sure the new one works smoothly.
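In practice, Application Streams are consumed through yum’s module commands. Here is a hedged sketch of what that looks like (the component name and stream versions are illustrative, not taken from the announcement):

# List the streams available for a component
$ yum module list nodejs
# Install a specific stream without waiting for a new RHEL release
$ sudo yum module install nodejs:10
# Later, reset and switch to a different stream
$ sudo yum module reset nodejs
$ sudo yum module install nodejs:12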

Red Hat has also improved networking for containers, which are isolated development environments used to build applications that can run on any platform. The release introduces a new Transmission Control Protocol and Internet Protocol or TCP/IP stack that increases bandwidth and boosts other networking functions, with the aim of providing superior performance for video streaming and other services.

There’s also a new container toolkit for developers to play with. It includes the latest version of Buildah, which is used to create containers; Podman, which is used to get them up and running; and Skopeo, which is a tool for sharing containerized apps. The idea is to help developers build, run and share their container-based apps more easily.
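As a small sketch of that build/run/share workflow (the image and registry names below are placeholders, not from the article):

# Build an image from a Dockerfile without a Docker daemon
$ buildah bud -t myapp .
# Run the resulting container
$ podman run --rm -it localhost/myapp
# Push a copy of the image to a remote registry for sharing
$ skopeo copy containers-storage:localhost/myapp docker://registry.example.com/myapp:latest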

On the security side, RHEL 8 brings the latest OpenSSL 1.1.1 and Transport Layer Security 1.3 releases to the table. OpenSSL is a software library for applications that secure communications over computer networks against eavesdropping or need to identify the party at the other end. Meanwhile, TLS is a cryptographic protocol that provides end-to-end communications security over networks and is widely used for internet communications and online transactions. Red Hat said it hopes the updates here can ease headaches around regulatory compliance issues.
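A quick way to confirm what your system is running (the hostname below is a placeholder):

# Check the OpenSSL library version on the system
$ openssl version
# Attempt a TLS 1.3-only handshake against a server
$ openssl s_client -connect example.com:443 -tls1_3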

RHEL 8 should also be simpler to manage due to the addition of single user control via the Web Console, while the new RHEL Composer provides a way for users to create and deploy container images across multiple cloud platforms, including private, public and virtual ones.

Source

Bisected: The Unfortunate Reason Linux 4.20 Is Running Slower

After running a lot of tests and then bisecting the Linux 4.20 kernel merge window, the reason for the significant slowdowns in the Linux 4.20 kernel for many real-world workloads is now known…

This latest Linux 4.20 testing endeavor started out with seeing Intel Core i9 performance pulling back in many synthetic and real-world tests. The slowdowns ranged from Rodinia scientific OpenMP tests taking 30% longer, to Java-based DaCapo tests taking up to ~50% more time to complete, to measurably longer code compilation, lower PostgreSQL database server performance, and longer Blender3D rendering times. That happened with Core i9 7960X and Core i9 7980XE test systems, while the AMD Threadripper 2990WX’s performance was unaffected by the Linux 4.20 upgrade.

In some cases this Linux 4.20 slowdown is significant enough that the Threadripper 2990WX is able to pick up extra wins over the Core i9 7980XE.


Digging through more of my test system data, a set of systems I have running benchmarks on the latest Linux kernel Git code every other day also saw a significant pullback in performance, from the early days of the Linux 4.20 merge window up through the very latest kernel code as of today. The affected systems weren’t high-end HEDT boxes but included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems. AMD systems still didn’t appear to be impacted. Those tests also found workloads like the Smallpt renderer slowing down significantly, PHP performance taking a major dive, and other scientific workloads like HMMer facing a major setback compared to the current Linux 4.19 stable series.

Bisecting the Linux 4.20 kernel slowdown: the sizable difference during that process.

Seeing clear performance regressions on a number of systems running the latest Linux 4.20 code, and being able to reproduce them on high-core-count hardware (which significantly cuts down kernel build times), this morning I kicked off the kernel bisecting process to see why this new kernel causes many workloads to run so much slower than Linux 4.19. With the Phoronix Test Suite doing the heavy lifting, the problematic commit was quickly uncovered.
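The bisection itself was automated with the Phoronix Test Suite, but the underlying mechanics are plain git bisect, roughly like this (the tags shown are illustrative):

$ git bisect start
$ git bisect bad                  # the current 4.20 merge-window code is slow
$ git bisect good v4.19           # the last known-good release
# build, boot and benchmark the kernel git checks out, then mark the result:
$ git bisect good                 # or: git bisect bad
# repeat until git names the first bad commit, then clean up:
$ git bisect reset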

Going into this testing, my thinking was that the culprit might be a regression in the Intel P-State CPU frequency scaling driver, which has caused performance regressions in the past, or perhaps a scheduler change. There have also been so many Linux 4.20 changes in general that some unintentional regression could have slipped in somewhere, primarily hurting Intel Linux performance… As a reminder, Linux 4.20 is the biggest kernel release of the year in terms of lines of code changed, with more than 354 thousand lines of new code added when this merge window opened at the end of October.


As outlined in the Linux 4.20 feature overview, there are a lot of exciting changes with this kernel. But why is it slower? More work on f!*#(# Spectre!

Source

Installing and Using AWS CLI on Ubuntu

AWS offers an enormous range of services, and launching even the simplest of them requires numerous steps. You will soon find that time spent on the AWS console (the web UI) is time well wasted. While I don’t like this design and wish for something simpler, I do realize that most of us are stuck with AWS because our organization chose it as its platform for one reason or another.

Instead of complaining about it, let’s limit our attention to the small set of services that an organization typically uses, such as ECS, AWS Lambda, S3, or EC2. One way of doing that is by using the AWS CLI, which offers an easy way to integrate the AWS interface into your everyday workflow. Once you get over the initial hurdle of setting up the CLI and getting used to a few commands, it will save you hours and hours of time, time that you can spend on much more pleasant activities.

This tutorial assumes that you already have an AWS account. This can be an IAM user account with programmatic access issued by your organization. If you have your own personal AWS account, do not use your root credentials for the CLI! Instead, create an IAM user with programmatic access for all CLI-related work. When deciding on the policy to attach to this new user, think about what you want to do with the account.

The most permissive policy is AdministratorAccess, which is what I will be using. When you create an IAM user, it is assigned a username, an access key ID, and a secret access key. Keep the latter two confidential.

For my local environment, I will be using Ubuntu 18.04 LTS.

Installing AWS CLI

Ubuntu 18.04 LTS comes with Python 3.6 preinstalled, and you can install the pip package manager to go with it by running the following (if you would rather have an apt package for the CLI, read further below for a note on that):

$ sudo apt install python3-pip

If you are running Python 2.6 or earlier, replace python3-pip with python-pip. The AWS CLI is shipped as a pip package, so we need pip. Once pip is installed, use it to install the CLI:
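$ pip3 install awscli --upgrade --user

The --user flag keeps the install inside your home directory; drop it and use sudo if you prefer a system-wide install.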

Once again, if you are using Python 2, replace pip3 with pip. If you want, you can use sudo apt install awscli to install the AWS CLI as well; you will be a couple of revisions behind, but that is fine. Once it is installed, relaunch the bash session.

Configuring the Environment

If you don’t have your IAM access keys yet, you can either ask your organization’s AWS root user to create them for you or, if you are using your own personal account and are your own root admin, open up the IAM Console in your browser.

Go to the “Users” tab and select the user account you want to use to access the CLI. Go to “Security credentials” and create an access key and secret access key. Never share these keys with anyone, and make sure you don’t push them along with your git commits, etc.

Use these keys as the command below prompts you to enter their respective values:
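$ aws configure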

Output:

AWS Access Key ID [None]: ADSLKFJAASDFKLJLGA
AWS Secret Access Key [None]: lkdsfh490IODSFOIsGFSD98+fdsfs/fs
Default region name [None]: us-west-2
Default output format [None]: json

The values for the access key and secret key will obviously be different in your case. When it comes to region, choose the one that is closest to you (or your users). For output, JSON format is fine. Once you have entered valid information for all the values, your CLI is ready to interface with AWS remotely.

The ID and secret, as well as other config parameters, are stored in the ~/.aws subdirectory inside your home directory. Make sure that it doesn’t get compromised. If it does, immediately revoke the ID and associated key using the IAM Console.

To use the CLI from different machines, you can always create more access keys.

Using the CLI

This is the part where you need to go through the man pages. Fortunately, the CLI is well documented. Each service is its own command, and the various actions that you can perform with that particular service are listed under its own help section.

To illustrate this point better, let’s start with:
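$ aws help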

If you scroll down in the output page, you will see all the services listed:

Output:

AVAILABLE SERVICES
o acm
o acm-pca
o alexaforbusiness
o apigateway
.
.
.
o dynamodb
o dynamodbstreams
o ec2
o ecr
o ecs
o efs
o eks

Now, let’s say you want to use the Amazon EC2 service to launch EC2 instances. You can explore further by running:
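$ aws ec2 help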

This will show you all sorts of subcommands that you can use for creating snapshots, launching fleets of VMs, managing SSH keys, and so on. What your application demands is for you to decide. The list of commands, subcommands, and valid arguments is, of course, quite long, but you probably won’t have to use every option. A few examples are sketched below.
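A few illustrative examples (the key name and resource IDs below are placeholders):

$ aws ec2 describe-instances
$ aws ec2 create-key-pair --key-name my-key
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro --key-name my-key
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0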

Conclusion

If you are just starting out, I’d recommend beginning with the console for launching various instances and managing them. This will give you a pretty good idea of which options to look for when using the CLI. Eventually, as you use more and more of the CLI, you can start writing scripts to automate the entire process of creating, managing, and deleting resources.

Don’t force yourself into learning about it. These things take time to sink in.

Source

Kodak’s new 3D printer has a Raspberry Pi inside

Kodak has launched a Raspberry Pi 3 based Kodak Portrait 3D Printer with a dual-extrusion system, multiple filament types, a 5-inch touchscreen, and WiFi and Ethernet connections to a Kodak 3D Cloud service.

Kodak and Smart Int’l. have collaborated on a professional, dual extrusion Kodak Portrait 3D Printer that runs a Linux-based 3DprinterOS on a Raspberry Pi 3 board. The $3,500 device offers connections to a Kodak 3D Cloud service, and is designed for engineering, design, and education professionals.


Like the BeagleBone-based Autodesk Ember 3D printer, the Kodak Portrait 3D Printer is based on a popular Linux hacker board. In this case, it’s a Raspberry Pi 3 SBC running Kodak’s Linux-based 3DprinterOS print management software.

Other Raspberry Pi based 3D printers include the industrial-oriented, $4,500 and up AON 3D Printer. There are also a variety of Raspberry Pi 3D printer hacking projects available, many of which use OctoPrint’s RPi-compatible software.

Kodak Portrait 3D Printer dual extrusion system (left) and interior view

The Kodak Portrait 3D Printer has a dual extrusion system with a 1.75mm filament diameter and automatic nozzle lifting. The extrusion system provides swappable PTFE and all-metal hotends “for optimal material compatibility,” says Kodak.

The printer provides a 0.4mm nozzle with 20-250 micron layer resolution and XYZ accuracy of 12.5, 12.5, and 2.5 microns. It also offers 16mm XY motion and 12mm Z motion. A sensor warns you when your filament is almost gone.

Kodak Portrait 3D Printer, front and back

Materials include different grades of PLA, as well as ABS, Flex 98, HIPS, PETG, water soluble PVA, and two grades of Nylon. The Kodak manufactured materials are available in a wide color palette, including Kodak’s Trade Dress Yellow. They are claimed to offer low moisture packaging and high dimensional accuracy.

The 455 x 435 x 565mm printer has an all-steel structure allowing high-temperature builds, with support for up to 105ºC build plate and up to 295ºC nozzle temperatures. A fully-enclosed print chamber with a 200 x 200 x 235mm build volume features a HEPA and activated-carbon filter, thereby “reducing unwanted odors and keeping fingers away from hot moving parts,” says Kodak. Other features include magnetically attached print surfaces.

Kodak Portrait 3D Printer (left) and touchscreen

The Kodak Portrait 3D Printer is equipped with a 5-inch, 800 x 480 color touchscreen, as well as WiFi, Ethernet, a USB port, and a build chamber camera. Using 3DprinterOS, you can manage print settings such as automatic leveling and calibration, and you can preset print parameters for every material.

The Linux-based software provides free access to the Kodak 3D Cloud service, where you can manage a print farm of multiple machines from anywhere in the world. Users can “slice online, monitor their prints and receive over-the-air updates,” says Kodak.


Further information

The Kodak Portrait 3D Printer is available now for $3,499 in Europe and the U.S. More information may be found in Kodak’s announcement, as well as its product and shopping pages.

Source

Snaps are the new Linux Apps that work on every Distro

Ask anyone using any mainstream operating system, be it on PCs or mobile: their biggest gripe is apps. Finding useful, functional apps when using anything other than macOS, Windows, Android, or iOS is a serious hassle. Those of us finding our feet in the murky Linux ecosystem are not spared.

For a long time, getting apps for your Linux computer was an exercise in futility. The issue was made even worse by just how fragmented the Linux ecosystem is. This drove most of us to the more mainstream distros like Ubuntu and Linux Mint for their relatively active developer communities and support.


See, when using Linux, you couldn’t exactly Google the name of a program you wanted, download the .exe file, double-click it, and have it installed like you would on Windows (although technically you can do that now with .deb files). You had to know your way around the terminal. Once in the terminal, in the case of Ubuntu, you needed to add the software source to your repositories with sudo apt commands, then update the package cache, and finally install the app you wanted with sudo apt-get install. In many cases, the dependencies would be all messed up, and you’d have to scroll through endless forums trying to figure out how to fix that one pesky dependency that just wouldn’t let your app run well.
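As a rough sketch of that traditional workflow (the PPA and package names here are placeholders, not from the article):

# Add a third-party software source (PPA), refresh the package cache, then install
$ sudo add-apt-repository ppa:some-team/some-app
$ sudo apt-get update
$ sudo apt-get install some-app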

You’d jump through all these hoops and then finally the app would run, but then it would look all weird because maybe it wasn’t made for your distro. Bottom line, it takes patience and resilience to install Linux Apps.

Snaps

Snaps are essentially applications compressed together with their dependencies and with descriptions of how to run and interact with other software on the system they are installed on. Snaps are secure in that they are designed to be sandboxed and isolated from other system software.

Snaps are easily installable, upgradable, downgradable, and removable irrespective of the underlying system. For this reason, they can be installed on basically any Linux-based system. Canonical is even using snaps as the packaging medium for Ubuntu Core, its distribution for Internet of Things devices and large container deployments.

How to Install Snap in Linux

In this section, I will show you how to install snap on Linux and how to use it to install, update, or remove packages. Ubuntu has shipped with snap pre-installed since Ubuntu 16.04, so Ubuntu 16.04 or newer (and any distro based on those releases) doesn’t need to install it again. For other distributions, you can follow the instructions shown below.

On Arch Linux

$ sudo yaourt -S snapd
$ sudo systemctl start snapd.socket

On Fedora

$ sudo dnf copr enable zyga/snapcore
$ sudo dnf install snapd
$ sudo systemctl enable --now snapd.service
$ sudo setenforce 0

Once snap has been installed and started, you can list all available packages in the snap store as shown.

$ snap find

To search for a particular package, just specify the package name as shown.

$ snap find package-name

To install a snap package, specify the package by name.

$ sudo snap install package-name

To update an installed snap package, specify the package by name.

$ sudo snap refresh package-name

To remove an installed snap package, run.

$ sudo snap remove package-name

To learn more about snap packages, go through Snapcraft’s official page or head on out to the Snap Store to explore the bunch of apps that are already available.

I feel like snaps are growing to be something like the Google Play Store: a central place where Linux users, irrespective of which fork of Linux they’re running, can come to get apps that just work, with little to no fuss at all. At the moment, there are thousands of snaps used by millions of people across 41 Linux distributions. This number is only going to grow. If there’s ever been a good time to switch to Linux, it is now. The platform really has come of age.

Source
