Canonical Announces Plex as a Snap, DuckDuckGo Reaches 30 Million Direct Searches a Day, Purism’s Librem 5 Phone to Ship with GNOME 3.32 Desktop, Libre Computer Project Launches the La Frite SBC and Google Releases Oboe

News briefs for October 12, 2018.

Canonical yesterday announced that Plex has arrived in its Snap Store. You now can download the multimedia platform as a snap for Ubuntu, KDE Neon, Debian, Fedora, Manjaro, openSUSE and Zorin. For more details, see the Ubuntu Blog.

DuckDuckGo, the privacy-focused search engine, has reached the milestone of 30 million direct searches a day. According to The Verge and Search Engine Journal, DuckDuckGo’s market share is estimated to be 0.18%, compared with Google’s 77% and Bing’s 5%; however, DuckDuckGo’s traffic is up 50% from last year.

Purism’s Librem 5 phone will ship with the GNOME 3.32 desktop, which is scheduled for release on March 13, 2019. Softpedia News reports that GNOME developer Adrien Plazas invites “GNOME and GTK+ app developers to adapt their applications to work both on their favorite GNU/Linux distribution and on the upcoming Librem 5 Linux phone, which will use Purism’s Debian-based and security-oriented Pure OS operating system by default.” See also Adrien’s blog post for more details on Librem 5 + GNOME 3.32.

The Libre Computer Project recently announced its new open-source, libre ARM SBC called La Frite. Phoronix reports that the new 512MB model will ship for $5 USD, and the 1GB version for $10 USD. In addition, “the $5 ARM SBC is said to be 10x faster than the Raspberry Pi Zero”, and it also includes real HDMI, Ethernet and USB ports. La Frite, the miniature version of the Le Potato SBC supported by mainline Linux and Android 8, should be available in November. See the Kickstarter page for details.

Google yesterday released Oboe, a C++ library for creating real-time audio apps. According to the post on Packt, one of Oboe’s main benefits is “the lowest possible audio latency across the widest range of Android devices”. See the GitHub repository to get started with Oboe.

Source

Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview | Linux.com

What’s New?

We’ve updated the four parts of this blog series and versioned the code along with it to include the following new technology components.

  • Jenkins Plugin Kubernetes Continuous Deploy has been added to deployments. https://plugins.jenkins.io/kubernetes-cd

  • Kubernetes RBAC and serviceaccounts are being used by applications to interact with the cluster.

  • We are now introducing and using Helm for deployments (specifically for the deployment of the etcd-operator in Part 3).

  • All versions of the main tools and technologies have been upgraded and locked.

  • Fixed bugs, refactored K8s manifests and refactored application code.

  • We are now providing Dockerfile specs for the socat registry and Jenkins.

  • We’ve improved all instructions in the blog post and included a number of informational text boxes.

The software industry is rapidly seeing the value of using containers as a way to ease development, deployment, and environment orchestration for app developers. Large-scale and highly-elastic applications that are built in containers definitely have their benefits, but managing the environment can be daunting. This is where an orchestration tool like Kubernetes really shines.

Kubernetes is a platform-agnostic container orchestration tool created by Google and heavily supported by the open source community as a project of the Cloud Native Computing Foundation. It allows you to spin up a number of container instances and manage them for scaling and fault tolerance. It also handles a wide range of management activities that would otherwise require separate solutions or custom code, including request routing, container discovery, health checking, and rolling updates.

Kenzan is a services company that specializes in building applications at scale. We’ve seen cloud technology evolve over the last decade, designing microservice-based applications around the Netflix OSS stack, and more recently implementing projects using the flexibility of container technology. While each implementation is unique, we’ve found the combination of microservices, Kubernetes, and Continuous Delivery pipelines to be very powerful.

Crossword Puzzles, Kubernetes, and CI/CD

This article is the first in a series of four blog posts. Our goal is to show how to set up a fully-containerized application stack in Kubernetes with a simple CI/CD pipeline to manage the deployments.

We’ll describe the setup and deployment of an application we created especially for this series. It’s called the Kr8sswordz Puzzle, and working with it will help you link together some key Kubernetes and CI/CD concepts. The application will start simple enough, then as we progress we will introduce components that demonstrate a full application stack, as well as a CI/CD pipeline to help manage that stack, all running as containers on Kubernetes. Check out the architecture diagram below to see what you’ll be building.

Read all the articles in the series:

The completed application will show the power and ease with which Kubernetes manages both apps and infrastructure, creating a sandbox where you can build, deploy, and spin up many instances under load.

Get Kubernetes up and Running

The first step in building our Kr8sswordz Puzzle application is to set up Kubernetes and get comfortable with running containers in a pod. We’ll install several tools explained along the way: Docker, Minikube, and Kubectl.


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Install Docker

Docker is one of the most widely used container technologies and works directly with Kubernetes.

Install Docker on Linux

To quickly install Docker on Ubuntu 16.04 or higher, open a terminal and enter the following commands (see the Linux installation instructions for other distributions):

sudo apt-get update
curl -fsSL https://get.docker.com/ | sh

After installation, create a Docker group so you can run Docker commands as a non-root user (you’ll need to log out and then log back in after running this command):

sudo usermod -aG docker $USER

When you’re all done, make sure Docker is running:

sudo service docker start

Install Docker on macOS

Download Docker for Mac (stable) and follow the installation instructions. To launch Docker, double-click the Docker icon in the Applications folder. Once it’s running, you’ll see a whale icon in the menu bar.


Try Some Docker Commands

You can test out Docker by opening a terminal window and entering the following commands:

# Display the Docker version

docker version

# Pull and run the Hello-World image from Docker Hub

docker run hello-world

# Pull and run the Busybox image from Docker Hub

docker run busybox echo "hello, you've run busybox"

# View a list of containers that have run

docker ps -a


Images are specs that define all the files and resources needed for a container to run. An image is defined in a Dockerfile, and built and stored in a repository. Many OSS images are publicly available on Docker Hub, a web repository for Docker images. Later we will set up a private image registry for our own images.
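For instance, here is a minimal, hypothetical image spec (not one of this tutorial’s actual Dockerfiles) written into a throwaway build context, with the build and run commands shown as comments since they require Docker:

```shell
# Write a minimal, illustrative Dockerfile (names and paths are
# examples only) into a temporary build context.
mkdir -p /tmp/hello-ctx
cat > /tmp/hello-ctx/Dockerfile <<'EOF'
# Base image pulled from Docker Hub
FROM busybox:latest
# Copy a file from the build context into the image
COPY greeting.txt /greeting.txt
# Default command the container runs
CMD ["cat", "/greeting.txt"]
EOF
echo "hello from a container" > /tmp/hello-ctx/greeting.txt

# With Docker installed, the image would be built and run like this:
#   docker build -t my-hello /tmp/hello-ctx
#   docker run --rm my-hello
cat /tmp/hello-ctx/Dockerfile
```

Each instruction (FROM, COPY, CMD) becomes a layer of the resulting image, which is why small base images such as busybox build and pull quickly.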

For more on Docker, see Docker Getting Started. For a complete listing of commands, see The Docker Commands.

Install Minikube and Kubectl

Minikube is a single-node Kubernetes cluster that makes it easy to run Kubernetes locally on your computer. We’ll use Minikube as the primary Kubernetes cluster to run our application on. Kubectl is a command line interface (CLI) for Kubernetes and the way we will interface with our cluster. (For details, check out Running Kubernetes Locally via Minikube.)

Install VirtualBox

Download and install the latest version of VirtualBox for your operating system. VirtualBox lets Minikube run a Kubernetes node on a virtual machine (VM).

Install Minikube

Head over to the Minikube releases page and install the latest version of Minikube using the recommended method for your operating system. This will set up our Kubernetes node.

Install Kubectl

The last piece of the puzzle is to install kubectl so we can talk to our Kubernetes node. Use the commands below, or go to the kubectl install page.

On Linux, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

On macOS, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Install Helm

Helm is a package manager for Kubernetes. It allows you to deploy Helm Charts (or packages) onto a K8s cluster with all the resources and dependencies needed for the application. We will use it a bit later in Part 3, and highlight how powerful Helm charts are.

On Linux or macOS, install Helm with the following command.

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh; chmod 700 get_helm.sh; ./get_helm.sh

Fork the Git Repo

Now it’s time to make your own copy of the Kubernetes CI/CD repository on GitHub.

1. Install Git on your computer if you don’t have it already.

On Linux, use the following command:

sudo apt-get install git

On macOS, download and run the macOS installer for Git. To install, first double-click the .dmg file to open the disk image. Right-click the .pkg file and click Open, and then click Open again to start the installation.

2. Fork Kenzan’s Kubernetes CI/CD repository on GitHub. This has all the containers and other goodies for our Kr8sswordz Puzzle application, and you’ll want to fork it as you’ll later be modifying some of the code.

a. Sign up if you don’t yet have an account on GitHub.

b. On the Kubernetes CI/CD repository on GitHub, click the Fork button in the upper right and follow the instructions.


c. Within a chosen directory, clone your newly forked repository.

git clone https://github.com/YOURUSERNAME/kubernetes-ci-cd

d. Change directories into the newly cloned repo.

Clear out Minikube

Let’s get rid of any leftovers from previous experiments you might have conducted with Minikube. Enter the following terminal command:

minikube stop; minikube delete; sudo rm -rf ~/.minikube; sudo rm -rf ~/.kube


This command will clear out any other Kubernetes contexts you’ve previously set up on your machine locally, so be careful. If you want to keep your previous contexts, skip the last command, which deletes the ~/.kube folder.

Run a Test Pod

Now we’re ready to test out Minikube by running a Pod based on a public image on Docker Hub.


A Pod is Kubernetes’ resiliency wrapper for containers, allowing you to horizontally scale replicas.

1. Start up the Kubernetes cluster with Minikube, giving it some extra resources.

minikube start --memory 8000 --cpus 2 --kubernetes-version v1.6.0


If your computer does not have 16 GB of RAM, we suggest giving Minikube less RAM in the command above. Set the memory to a minimum of 4 GB rather than 8 GB.

2. Enable the Minikube add-ons Heapster and Ingress.

minikube addons enable heapster; minikube addons enable ingress

Inspect the pods in the cluster. You should see the add-ons heapster, influxdb-grafana, and nginx-ingress-controller.

kubectl get pods --all-namespaces

3. View the Minikube Dashboard in your default web browser. Minikube Dashboard is a UI for managing deployments. You may have to refresh the web browser if you don’t see the dashboard right away.

minikube service kubernetes-dashboard --namespace kube-system

4. Deploy the public nginx image from Docker Hub into a pod. Nginx is an open source web server; its image will automatically be downloaded from Docker Hub if it’s not available locally.

kubectl run nginx --image nginx --port 80

After running the command, you should be able to see nginx under Deployments in the Minikube Dashboard with Heapster graphs. (If you don’t see the graphs, just wait a few minutes.)



A Kubernetes Deployment is a declarative way of creating, maintaining and updating a specific set of Pods or objects. It defines an ideal state so K8s knows how to manage the Pods.

5. Create a K8s service for the deployment. This will expose the nginx pod so you can access it with a web browser.

kubectl expose deployment nginx --type NodePort --port 80

6. The following command will launch a web browser to test the service. The nginx welcome page displays, which means the service is up and running. Nice work!

minikube service nginx


7. Delete the nginx deployment and service you created.

kubectl delete service nginx
kubectl delete deployment nginx

Create a Local Image Registry

We previously ran a public image from Docker Hub. While Docker Hub is great for public images, setting up a private image repository on the site involves some security key overhead that we don’t want to deal with. Instead, we’ll set up our own local image registry. We’ll then build, push, and run a sample Hello-Kenzan app from the local registry. (Later, we’ll use the registry to store the container images for our Kr8sswordz Puzzle app.)

8. From the root directory of the cloned repository, set up the cluster registry by applying a .yaml manifest file.

kubectl apply -f manifests/registry.yaml


Manifest .yaml files (also called k8s files) serve as a way of defining objects such as Pods or Deployments in Kubernetes. While previously we used the run command to launch a pod, here we are applying k8s files to deploy pods into Kubernetes.
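As a hypothetical illustration (not one of the repo’s actual manifests), the nginx pod from the earlier step could equally be expressed as a k8s file and applied. Note that the apiVersion for Deployments depends on the cluster version; the apps/v1 shown here is the current form, while the v1.6.0 cluster pinned above would use apps/v1beta1:

```shell
# Illustrative manifest only -- the tutorial's real manifests live in
# the repo (e.g. manifests/registry.yaml). Written to /tmp for safety.
cat > /tmp/nginx-deployment.yaml <<'EOF'
apiVersion: apps/v1        # would be apps/v1beta1 on a v1.6 cluster
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1              # the "ideal state": one pod of this spec
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Apply only if kubectl and a reachable cluster are available:
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f /tmp/nginx-deployment.yaml
fi
```

The declarative form has the advantage that the same file can be re-applied after edits, and Kubernetes will reconcile the cluster toward whatever the file now describes.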

9. Wait for the registry to finish deploying using the following command. Note that this may take several minutes.

kubectl rollout status deployments/registry

10. View the registry user interface in a web browser. Right now it’s empty, but you’re about to change that.

minikube service registry-ui


11. Let’s make a change to an HTML file in the cloned project. Open the /applications/hello-kenzan/index.html file in your favorite text editor, or run the command below to open it in the nano text editor.

nano applications/hello-kenzan/index.html

Change some text inside one of the <p> tags. For example, change “Hello from Kenzan!” to “Hello from Me!”. When you’re done, save the file. (In nano, press Ctrl+X to exit, type Y to save the changes, and press Enter to confirm the filename.)
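If you prefer a non-interactive edit, the same change can be scripted with sed. The sketch below runs against a throwaway copy so nothing in the repo is touched; the original tag text is assumed to read “Hello from Kenzan!”:

```shell
# Make a throwaway copy to demonstrate on (the real file is
# applications/hello-kenzan/index.html in your cloned repo).
mkdir -p /tmp/hello-kenzan
printf '<p>Hello from Kenzan!</p>\n' > /tmp/hello-kenzan/index.html

# Replace the greeting in place (GNU sed; on macOS use: sed -i '' ...)
sed -i 's/Hello from Kenzan!/Hello from Me!/' /tmp/hello-kenzan/index.html
cat /tmp/hello-kenzan/index.html
```

Note that BSD sed on macOS requires an explicit (possibly empty) backup suffix after -i.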

12. Now let’s build an image, giving it a special name that points to our local cluster registry.

docker build -t 127.0.0.1:30400/hello-kenzan:latest \
  -f applications/hello-kenzan/Dockerfile applications/hello-kenzan


When a Docker image is tagged with a hostname prefix (as shown above), Docker will perform pull and push actions against a private registry located at that hostname as opposed to the default Docker Hub registry.
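A simplified sketch of that resolution rule follows; it approximates (but does not reproduce exactly) Docker’s real logic, where a first path component containing a dot or colon, or equal to localhost, is treated as a registry host:

```shell
# Approximate how Docker picks a registry from an image reference:
# if the name has a path separator and its first component looks like
# a host (contains '.' or ':', or is "localhost"), that component is
# the registry; otherwise Docker Hub (docker.io) is assumed.
registry_for() {
  case $1 in
    */*)
      first=${1%%/*}
      case $first in
        *.*|*:*|localhost) echo "$first" ;;
        *)                 echo "docker.io" ;;
      esac
      ;;
    *) echo "docker.io" ;;
  esac
}

registry_for 127.0.0.1:30400/hello-kenzan:latest   # -> 127.0.0.1:30400
registry_for nginx                                 # -> docker.io
registry_for kenzan/hello-kenzan                   # -> docker.io
```

This is why the tag in step 12 must include 127.0.0.1:30400: without the host prefix, the later docker push would go to Docker Hub instead of our cluster registry.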

13. We’ve built the image, but before we can push it to the registry, we need to set up a temporary proxy. By default, the Docker client will only push over plain HTTP (rather than HTTPS) to a registry on localhost. To work around this, we’ll set up a Docker container that listens on 127.0.0.1:30400 and forwards traffic to our cluster.

First, build the image for our proxy container:

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat
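For the curious, here is a hedged sketch of the kind of relay such a container might run; the actual command is defined in applications/socat/Dockerfile, and the variable names mirror the -e flags passed to docker run in the next step. The block below only composes and prints the command (a dry run), since executing it requires socat and a live registry:

```shell
# REG_IP and REG_PORT mirror the env vars the tutorial passes to
# docker run; the IP here is illustrative (normally `minikube ip`).
REG_IP=192.168.99.100
REG_PORT=30400

# A typical socat relay: listen on the container's port 5000, fork a
# child per connection, and forward to the registry NodePort.
relay_cmd="socat TCP-LISTEN:5000,fork,reuseaddr TCP:${REG_IP}:${REG_PORT}"
echo "$relay_cmd"   # dry run: print the command instead of executing it
```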

14. Now run the proxy container from the newly created image. (Note that you may see some errors; this is normal as the commands are first making sure there are no previous instances running.)

docker stop socat-registry; docker rm socat-registry;
docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" \
  --name socat-registry -p 30400:5000 socat-registry


This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command
lsof -i :30400

15. With our proxy container up and running, we can now push our hello-kenzan image to the local repository.

docker push 127.0.0.1:30400/hello-kenzan:latest

Refresh the browser window with the registry UI and you’ll see the image has appeared.


16. The proxy’s work is done for now, so you can go ahead and stop it.

docker stop socat-registry

17. With the image in our cluster registry, the last thing to do is apply the manifest to create and deploy the hello-kenzan pod based on the image.

kubectl apply -f applications/hello-kenzan/k8s/deployment.yaml

18. Launch a web browser and view the service.

minikube service hello-kenzan

Notice the change you made to the index.html file. That change was baked into the image when you built it and then was pushed to the registry. Pretty cool!


19. Delete the hello-kenzan deployment and service you created.

kubectl delete service hello-kenzan
kubectl delete deployment hello-kenzan

We are going to keep the registry deployment in our cluster as we will need it for the next few parts in our series.

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
b. sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd

b. npm install

3. Start the script:

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

​Up Next

In Part 2 of the series, we will continue to build out our infrastructure by adding in a CI/CD component: Jenkins running in its own pod. Using a Jenkins 2.0 Pipeline script, we will build, push, and deploy our Hello-Kenzan app, giving us the infrastructure for continuous deployment that will later be used with our Kr8sswordz Puzzle app.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, moving dynamically through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.

Source

Enterprise Ethereum Alliance and Hyperledger to Advance the Global Blockchain Business Ecosystem

Through Joint Associate Memberships, EEA and Hyperledger Will Collaborate to Meet Global Demand For Enterprise Blockchain

NEW YORK AND SAN FRANCISCO – Oct 1, 2018 – The Enterprise Ethereum Alliance (EEA), the global standards organization driving the adoption of Enterprise Ethereum, and Hyperledger, The Linux Foundation open source collaborative effort advancing cross-industry blockchain technologies, today jointly announced they have become Associate Members, respectively, within each other’s organization. The open-source, standards-based, cross-platform collaboration between the two organizations will contribute to accelerating mass adoption of blockchain technologies for business.

With hundreds of member companies combined, the EEA and Hyperledger communities represent a wide variety of business sectors from every region of the world.

Hyperledger Executive Director, Brian Behlendorf, and EEA Executive Director, Ron Resnick, have jointly authored a blog post (see Hyperledger’s blog or EEA’s blog) to announce this partnership.

“This is a time of great opportunity,” said Resnick. “Collaborating through mutual associate membership provides more opportunities for both organizations to work more closely together. In addition, Hyperledger developers who join the EEA can participate in EEA Certification to ensure solution compliance for projects related to the Enterprise Ethereum Client Specification.”

As members of each other’s organizations, the leadership of both organizations will be able to collaborate across dozens of Special Interest Groups, Working Groups, meetups and conferences globally, reaching hundreds of thousands of developers in both communities. EEA community members working on specifications and standards can turn to Hyperledger to collaborate on software implementations of those standards.

“Great open standards depend upon great open source code, so this is a natural alliance for both organizations,” said Behlendorf. “Standards, specifications and certification all help enterprise blockchain customers commit to implementations with confidence since they have better assurances of interoperability as well as multiple vendors of choice.”

More About EEA and Hyperledger Work Underway

In 2017, Hyperledger launched the Hyperledger Burrow project, an Apache-licensed implementation of the Ethereum Virtual Machine (EVM) bytecode interpreter. Earlier this year, Hyperledger Sawtooth added support for the EVM as a transaction processor, bringing smart contracts developed for the Ethereum MainNet over to Sawtooth-based networks. That effort, dubbed “Seth,” is now in active use, and the developers anticipate submitting it for conformance testing to the EEA specification as soon as possible. Likewise, support for the EVM is now available in Hyperledger Fabric.

Another example of EEA and Hyperledger’s collaboration is the EEA’s Special Interest Group on Trusted Execution Environments, and a prototype implementation of those proposed standards, called “Private Data Objects” being built within Hyperledger Labs. This project is a best practice example of internet-scale software development work, combining community-driven open standards and community-developed, production-quality open source reference implementation. The effort mirrors work such as the IETF (Internet Engineering Task Force) and Apache working on the web’s underlying protocol HTTP, or ECMA International and Mozilla working on JavaScript, a standardized, multi-platform language used by developers worldwide for web design.

Down the road, this mutually beneficial relationship will encourage Ethereum developers to consider submitting their enterprise projects to Hyperledger and Hyperledger project maintainers to consider taking de-facto interfaces appropriate for standardization to the appropriate EEA working groups. This relationship will also enable Hyperledger developers to write code that conforms to the EEA specification and certify them through EEA certification testing programs expected to launch in the second half of 2019.

“As a founding member of both Hyperledger and EEA, we’ve been proud to participate in the incredible growth of both communities. This is a logical next step that will strengthen the industry as a whole, expand each organization’s reach and benefit from the collaboration across ecosystems, while supporting each organization’s distinct mission,” said David Treat, Managing Director at Accenture.

“For anyone who ever put a ‘vs.’ between Ethereum and Hyperledger, this collaboration shows it’s now ‘Ethereum AND Hyperledger,’” said Behlendorf. “We expect developers building Enterprise Ethereum-related technologies to be motivated to submit projects to Hyperledger, and we hope that project maintainers will consider taking de-facto interfaces that are suitable for standardization to the appropriate Special Interest Group at the EEA.”

About The Enterprise Ethereum Alliance

The Enterprise Ethereum Alliance (EEA) is the industry’s first global standards organization to deliver an open, standards-based architecture and specification to accelerate the adoption of Enterprise Ethereum. The EEA’s world-class Enterprise Ethereum Client Specification and forthcoming testing and certification programs will ensure interoperability, multiple vendors of choice, and lower costs for its members – the world’s largest enterprises and most innovative startups. For additional information about joining the EEA, please reach out to membership@entethalliance.org.

About Hyperledger

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, Internet of Things, supply chains, manufacturing and technology. To learn more, visit: https://www.hyperledger.org/.

Source

Alt-Ruby updated – CloudLinux OS Blog

Alt-Ruby updated

New updated Alt-Ruby packages are now available for download from our production repository.

Changelog:

alt-ruby18-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby18/lib64/ruby/gems/1.8/gems/ruby-lsapi-4.4/lib folder.

alt-ruby20-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby20/lib64/ruby/gems/2.0.0/gems/ruby-lsapi-4.4/lib folder.

alt-ruby21-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby21/lib64/ruby/gems/2.1.0/gems/ruby-lsapi-4.4/lib folder.

alt-ruby22-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby22/lib64/ruby/gems/2.2.0/gems/ruby-lsapi-4.4/lib folder.

alt-ruby23-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby23/lib64/ruby/gems/2.3.0/gems/ruby-lsapi-4.4/lib folder.

alt-ruby24-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby24/lib64/ruby/gems/2.4.0/gems/ruby-lsapi-4.4/lib folder.

alt-ruby25-rubygem-lsapi-4.4-2

  • ALR-114: fixed permissions to /opt/alt/ruby25/lib64/ruby/gems/2.5.0/gems/ruby-lsapi-4.4/lib folder.

Update command:

yum update alt-ruby*-rubygem-lsapi


Source

RHEL 7.5 released and here is how to upgrade 7.4 to 7.5

Red Hat Enterprise Linux (RHEL) 7.5 has been released. This version includes updates and various improvements, such as GNOME rebased to version 3.26, LibreOffice rebased to version 5.3, added support for libva (VA-API), mp3 support in GStreamer and more. RHEL is one of the leading enterprise Linux distributions for both bare metal and cloud platforms, and it is targeted toward commercial users. RHEL works with x86-64, IBM System z and other platforms.

From the RHEL 7.5 release note:

The world’s leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 7.5, the latest version of the world’s leading enterprise Linux platform. Serving as a consistent foundation for hybrid cloud environments, Red Hat Enterprise Linux 7.5 provides enhanced security and compliance controls, tools to reduce storage costs, and improved usability, as well as further integration with Microsoft Windows infrastructure both on-premise and in Microsoft Azure.

RHEL 7.5 released

Some new features in RHEL 7.5 release:

  1. Securely unlock Network Bound Disk Encrypted (NBDE) devices at boot-time
  2. The integration of Red Hat Ansible Automation with OpenSCAP enables ease of automation
  3. The introduction of Virtual Data Optimizer (VDO), designed to reduce data redundancy through inline deduplication and compression of primary storage
  4. KVM virtualization is now supported on IBM POWER8/POWER9 systems

Updated packages in RHEL 7.5

  • Linux kernel version 3.10.0-862
  • The kernel-alt packages include kernel version 4.14. This kernel version provides support for 64-bit ARM, IBM POWER9 (little endian), and IBM z Systems
  • LVM v2.02.177-4
  • qemu-kvm v1.5.3-156
  • Samba v4.7.1
  • Directory server v1.3.7.5
  • binutils v2.27
  • valgrind v3.13.0
  • rsync v3.1.2
  • Gnome v3.26
  • libreoffice v5.3
  • GIMP v2.8.22
  • Inkscape 0.92.2
  • qt5 5.9.2
  • And more here

How to update RHEL 7.4 to 7.5

The procedure to upgrade or update RHEL from version 7.4 to 7.5 is as follows:

  1. Login as root user
  2. Check for updates using the yum check-update command
  3. Update the system using the yum update command
  4. Reboot the server/box using the reboot command
  5. Verify new kernel and updates

Let us see all steps in details:

Step 1 – Note down the current kernel version

Type the following uname command or cat command to view RHEL kernel version and OS info:
$ uname -a
$ uname -r
$ cat /etc/os-release

Step 2 – Backups

Make a backup – it cannot be stressed enough how important it is to make a backup of your system before you do this. Most of the actions listed in this post are written with the assumption that they will be executed by the root user running the bash or any other modern shell.

Step 3 – Check for updates

Type the following yum command:
$ sudo yum check-update

Step 4 – Apply/install updates

Type the following yum command:
$ sudo yum update -y

Step 5 – Reboot the RHEL 7.4 box

Type the following reboot command or shutdown command:

$ sudo reboot

OR

$ sudo shutdown -r now

Step 6 – Verify the RHEL 7.5 update

Type the following commands:

$ uname -a
$ uname -r
$ cat /etc/os-release
$ tail -f /var/log/logfilenames
$ dmesg | grep -Ei 'err|warn|cri'
$ ss -tulpn

Sample session from version 7.5:

RHEL 7.5 released and my box updated to 7.5
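The version check above can also be scripted. Below is a small sketch that parses VERSION_ID; it is shown against a sample os-release file so it runs anywhere, but on the real box you would source /etc/os-release itself (the path and expected value here are illustrative):

```shell
# Write a sample os-release file standing in for /etc/os-release.
cat > /tmp/os-release.sample <<'EOF'
NAME="Red Hat Enterprise Linux Server"
VERSION_ID="7.5"
EOF

# Source the file in a subshell and pull out the version string.
version=$(. /tmp/os-release.sample; echo "$VERSION_ID")
if [ "$version" = "7.5" ]; then
  echo "upgrade to RHEL 7.5 confirmed"
else
  echo "still on $version"
fi
```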

Video demo

Here is a quick video demo showing upgrade procedure.

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source

Hack it all! | Linux Format

Buy it now!

Read a sample

The internet joke is: I’ve installed Kali, now I’m Hackerman! Just having the tools doesn’t mean you’re an instant expert, but at least it’s a first step…

This issue we’re taking our regular look into the world of hacking and we’re backing it up with Kali Linux on the disc alongside an in-depth look at the core tools you’ll need. We’re not promising to turn you into an expert (white-hat) hacker overnight, but we can at least set you off on the right path.

We’re also keen to get you started with Linux, if you’re not already using it. With this in mind, on the DVD we’ve put the latest release of the cool Feren OS. Based on the popular Mint, it has a classic-styled desktop that everyone will love. As it comes with Wine baked in, people moving from Windows can still hold on to their favourite programs and games. Of course, we’d suggest people hunt out open source alternatives – of which there are plenty – once they’re happily up and running.

Something that comes up in this month’s interview is open data, a kindred spirit for open source. Specifically, we look into a project analysing classic crime data to reveal patterns missed by humans, which is leading to prosecutions and solving crimes that have lain dormant at the back of a filing cabinet, in some cases for decades.

It’s this bazaar of topics that makes Linux and open source so constantly interesting. We’re not sure there are many other magazines that could have solving crimes, hacking systems, learning to get started, building honeypots, retro gaming consoles, ebook publishing, system administration and coding all sat naturally together! It’s been a fun issue to put together, so enjoy!

Write in now; we want to hear from you!
lxf.letters@futurenet.com

Send your problems and solutions to:

lxf.answers@futurenet.com

Catch all the FLOSS news at our evil Facebook page or follow us on the Twitters.

Source

16 Linux Certification Paths. Which one is right for you?

I put together a spreadsheet that summarizes the most popular Linux certifications.

In it, you’ll find:

  • How long each certification lasts.
  • What exams you’ll have to pass in order to be certified.
  • The approximate cost of training required to become certified.

There are a lot of certifications, so here’s my take: I’m a fan of the distro-agnostic Linux Professional Institute LPIC-1 certification. However, if you know you’ll be primarily working with Red Hat, the Red Hat Certified System Administrator (RHCSA) certification is a great place to start.

By the way, you can have a successful career as a Linux professional without ever being certified. Being certified is just one step on one path along the way. And there are MANY paths that will take you where you want to go.

I’ll be sharing more about that in a future post…

All the best,

Jason

Source

Configure HAProxy and Keepalived with Puppet | Lisenet.com :: Linux | Security

We’re going to use Puppet to install and configure HAProxy to load balance Apache web services. We’ll also configure Keepalived to provide failover capabilities.

This article is part of the Homelab Project with KVM, Katello and Puppet series. See here for a blog post on how to configure HAProxy and Keepalived manually.

Homelab

We have two CentOS 7 servers installed which we want to configure as follows:

proxy1.hl.local (10.11.1.19) – HAProxy with Keepalived (master router node)
proxy2.hl.local (10.11.1.20) – HAProxy with Keepalived (slave router node)

SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

HAProxy and Virtual IP

We use 10.11.1.30 as a virtual IP, with a DNS name of blog.hl.local. This is the DNS of our WordPress site.
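For the virtual IP to be reachable by name, blog.hl.local needs a DNS record pointing at 10.11.1.30. In this homelab series the record would live in the lab’s internal DNS zone; for a quick lab without internal DNS, an /etc/hosts entry does the same job (illustrative fragment, not part of the original setup):

```
# /etc/hosts — map the WordPress site name to the Keepalived virtual IP
10.11.1.30    blog.hl.local
```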

Below is a GIF representing our HA setup using HAProxy (primary and secondary load balancers).

Configuration with Puppet

Puppet master runs on the Katello server.

Puppet Modules

We use the following Puppet modules:

  1. arioch-keepalived – to configure Keepalived
  2. puppetlabs-haproxy – to configure HAProxy
  3. thias-sysctl – to configure kernel parameters

Please see each module’s documentation for features supported and configuration options available.

Firewall Configuration

Configure both proxy servers to allow VRRP and HTTP/S traffic. Port 8080 will be used for HAProxy statistics.

firewall { '007 allow VRRP':
  source => '10.11.1.0/24',
  proto  => 'vrrp',
  action => accept,
}->
firewall { '008 allow HTTP/S':
  dport  => [80, 443, 8080],
  source => '10.11.1.0/24',
  proto  => tcp,
  action => accept,
}

Kernel Parameters and IP Forwarding

Load balancing in HAProxy requires the ability to bind to a nonlocal IP address, i.e. one that is not assigned to a device on the local system. This allows a running load balancer instance to bind to the virtual IP even when it is held by the other node, which is what makes failover possible.

In order for the Keepalived service to forward network packets properly to the real servers, each router node must have IP forwarding turned on in the kernel.

sysctl { 'net.ipv4.ip_forward':       value => '1' }
sysctl { 'net.ipv4.ip_nonlocal_bind': value => '1' }
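Once Puppet has applied the sysctl resources, the live values can be read straight out of /proc on either proxy node; a quick read-only check (safe to run anywhere):

```shell
# Each /proc/sys file holds a single 0/1 flag; print it in sysctl notation.
for param in net/ipv4/ip_forward net/ipv4/ip_nonlocal_bind; do
    printf '%s = %s\n' "${param//\//.}" "$(cat /proc/sys/$param)"
done
```

Both parameters should report 1 after the Puppet run; on an unconfigured host you will typically see 0.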

Install HAProxy

This needs to be applied for both proxy servers.

file { '/etc/pki/tls/private/hl.pem':
  ensure => 'file',
  source => 'puppet:///homelab_files/hl.pem',
  path   => '/etc/pki/tls/private/hl.pem',
  owner  => '0',
  group  => '0',
  mode   => '0640',
}->
class { 'haproxy':
  global_options => {
    'log'     => '127.0.0.1 local2',
    'chroot'  => '/var/lib/haproxy',
    'pidfile' => '/var/run/haproxy.pid',
    'maxconn' => '4096',
    'user'    => 'haproxy',
    'group'   => 'haproxy',
    'daemon'  => '',
    'ssl-default-bind-ciphers'  => 'kEECDH+aRSA+AES:kRSA+AES:+AES256:!RC4:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL',
    'ssl-default-bind-options'  => 'no-sslv3',
    'tune.ssl.default-dh-param' => '2048',
  },
  defaults_options => {
    'mode'   => 'http',
    'log'    => 'global',
    'option' => [
      'httplog',
      'dontlognull',
      'http-server-close',
      'forwardfor except 127.0.0.0/8',
      'redispatch',
    ],
    'retries' => '3',
    'timeout' => [
      'http-request 10s',
      'queue 1m',
      'connect 10s',
      'client 1m',
      'server 1m',
      'http-keep-alive 10s',
      'check 10s',
    ],
    'maxconn' => '2048',
  },
}
haproxy::listen { 'frontend00':
  mode    => 'http',
  options => {
    'balance'  => 'source',
    'redirect' => 'scheme https code 301 if !{ ssl_fc }',
  },
  bind => {
    '10.11.1.30:80'  => [],
    '10.11.1.30:443' => ['ssl', 'crt', '/etc/pki/tls/private/hl.pem'],
  },
}->
haproxy::balancermember { 'web1_web2':
  listening_service => 'frontend00',
  ports             => '443',
  server_names      => ['web1.hl.local', 'web2.hl.local'],
  ipaddresses       => ['10.11.1.21', '10.11.1.22'],
  options           => 'check ssl verify none',
}->
haproxy::listen { 'stats':
  ipaddress => $::ipaddress,
  ports     => ['8080'],
  options   => {
    'mode'  => 'http',
    'stats' => ['enable', 'uri /', 'realm HAProxy Statistics', 'auth admin:PleaseChangeMe'],
  },
}

Note how we redirect all HTTP traffic to HTTPS. We also enable HAProxy stats.

There are several HAProxy load balancing algorithms available; we use the source algorithm, which selects a server based on a hash of the client’s source IP. This helps ensure that a given user consistently ends up on the same server.
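The effect of balancing on source can be illustrated with a toy hash: the client IP is hashed and reduced modulo the number of backends, so the same IP always maps to the same server. This is a sketch of the idea only, not HAProxy’s actual hash function, and the server names are the two lab backends from above:

```shell
# Toy source-IP hashing: the same client IP picks the same backend every time.
servers=(web1.hl.local web2.hl.local)

pick_backend() {
    local ip="$1"
    # cksum gives a deterministic 32-bit checksum of the IP string
    local hash
    hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo "${servers[hash % ${#servers[@]}]}"
}

pick_backend 10.11.1.100   # always the same server for this IP
pick_backend 10.11.1.101
```

Note that with a plain modulo scheme like this, adding or removing a backend remaps most clients; HAProxy offers consistent hashing (hash-type consistent) to soften that.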

Install Keepalived

Apply the following to the master node proxy1.hl.local:

include ::keepalived
keepalived::vrrp::script { 'check_haproxy':
  script => '/usr/bin/killall -0 haproxy',
}
keepalived::vrrp::instance { 'LVS_HAP':
  interface         => 'eth0',
  state             => 'MASTER',
  virtual_router_id => '51',
  priority          => '5',
  auth_type         => 'PASS',
  auth_pass         => 'PleaseChangeMe',
  virtual_ipaddress => '10.11.1.30/32',
  track_script      => 'check_haproxy',
}

Apply the following to the slave node proxy2.hl.local:

include ::keepalived
keepalived::vrrp::script { 'check_haproxy':
  script => '/usr/bin/killall -0 haproxy',
}
keepalived::vrrp::instance { 'LVS_HAP':
  interface         => 'eth0',
  state             => 'BACKUP',
  virtual_router_id => '51',
  priority          => '4',
  auth_type         => 'PASS',
  auth_pass         => 'PleaseChangeMe',
  virtual_ipaddress => '10.11.1.30/32',
  track_script      => 'check_haproxy',
}

Note that Keepalived only accepts MASTER or BACKUP as VRRP instance states, so the slave node is declared as BACKUP.

HAProxy Stats

If all goes well, we should be able to get some stats from HAProxy.

WordPress Site

Our WordPress site should be accessible via https://blog.hl.local.

Source

Sparky news 2018/09 | SparkyLinux

The 9th monthly report of 2018 of the Sparky project:
– Sparky’s Linux kernel updated up to versions 4.18.11 & 4.19-rc5
– Sparky 5.5 “Nibiru” released
– Sparky 5.5 “Nibiru” Special Editions released
– the Lumina desktop stopped working on Sparky 5/Debian testing (it still works fine on stable)
– updated Italian and some French locales of the Sparky tools
– updated all sparky-desktop-* meta packages and removed the --no-install-recommends option from the desktop installers of Advanced Installer and APTus, which should improve how a few desktops installed via the tools work

Source

Debian, Ubuntu, and Other Distros are Leaving U… » Linux Magazine

A security researcher says Linux vendors wait too long to patch the kernel.

Linux is known for a rapid response on fixing problems with the kernel, but the individual distros often take their time with pushing changes to users. Now, one of the researchers for Google Project Zero, Jann Horn, is warning that major distros like Debian and Ubuntu are leaving their users vulnerable.

“Linux distributions often don’t publish distribution kernel updates very frequently. For example, Debian stable ships a kernel based on 4.9, but as of 2018-09-26, this kernel was last updated 2018-08-21. Similarly, Ubuntu 16.04 ships a kernel that was last updated 2018-08-27,” he wrote in a blog post.

According to Horn, the delay means that users of these distributions remain vulnerable to known exploits. Horn describes a case in which, “a security issue was announced on the oss-security mailing list on 2018-09-18, with a CVE allocation on 2018-09-19, making the need to ship new distribution kernels to users more clear. Still: As of 2018-09-26, both Debian and Ubuntu (in releases 16.04 and 18.04) track the bug as unfixed.”

Horn is also critical of Android, which only ships security updates once a month. “…when a security-critical fix is available in an upstream stable kernel, it can still take weeks before the fix is actually available to users – especially if the security impact is not announced publicly,” he wrote.

Greg Kroah-Hartman has also been critical of distributions that don’t push these changes to users. Horn warned, “The fix timeline shows that the kernel’s approach to handling severe security bugs is very efficient at quickly landing fixes in the git master tree, but leaves a window of exposure between the time an upstream fix is published and the time the fix actually becomes available to users – and this time window is sufficiently large that a kernel exploit could be written by an attacker in the meantime.”

Source
