The newest intelligent supercomputer – Red Hat Enterprise Linux Blog

Summit, the world’s fastest supercomputer running at Oak Ridge National Laboratory (ORNL), was designed from the ground up to be flexible and to support a wide range of scientific and engineering workloads. In addition to traditional simulation workloads, Summit is well suited to analysis and AI/ML workloads – it is described as “the world’s first AI supercomputer”. The use of standard components and software makes it easy to port existing applications to Summit as well as develop new applications. As pointed out by Buddy Bland, Project Director for the ORNL Leadership Computing Facility, Summit lets users bring their codes to the machine quickly, thanks to the standard software environment provided by Red Hat Enterprise Linux (RHEL).

Summit is built on a “fat node” building-block concept: each identically configured node is a powerful IBM Power System AC922 server, interconnected with the others via a high-bandwidth, dual-rail Mellanox InfiniBand fabric, for a combined cluster of roughly 4,600 nodes. Each node in the system has:

[Figure: Summit supercomputer node composition]

The result is a system with excellent CPU compute capabilities, plenty of memory to hold data, high-performance local storage, and massive communications bandwidth. Additionally, prominent use of graphical processing units (GPUs) from Nvidia at the node architecture level provides a robust acceleration platform for artificial intelligence (AI) and other workloads. All of this is achieved using standard hardware components, standard software components, and standard interfaces.

So why is workload acceleration so important? In the past, hardware accelerators such as vector processors and array processors were exotic technologies used for esoteric applications. In today’s systems, hardware accelerators are mainstream in the form of GPUs. GPUs can be used for everything from visualization to number crunching to database acceleration, and they are omnipresent across the hardware landscape, existing in desktops, traditional servers, supercomputers, and everything in between, including cloud instances. And the standard unifying component across these configurations is Red Hat Enterprise Linux, the operating system and software development environment supporting hardware, applications, and users across a variety of environments at scale.

The breadth of scientific disciplines targeted by Summit can be seen in the list of applications included in the early science program. To help drive optimal use of the full system as soon as it was available, ORNL identified a set of research projects that were given access to small subsets of the full Summit system while Summit was being built. This enabled the applications to be ported to the Summit architecture, optimized for Summit, and be ready to scale out to the full system as soon as it was available. These early applications include astrophysics, materials science, systems biology, cancer research, and AI/ML.

Machine learning (ML) is a great example of a workload that stresses systems: it needs compute power, I/O, and memory to handle data, and it needs massive number crunching for training, which is handled by GPUs. All of that requires an enormous amount of electrical power to run. The Summit system is not only flexible and versatile in the way it handles workloads, it also addresses one of the biggest challenges of today’s supercomputers – excessive power consumption. Besides being the fastest supercomputer on the planet, it is equally significant that Summit performs well on the Green500 list – a ranking of speed and efficiency that puts a premium on energy-efficient performance for sustainable supercomputing. Summit comes in at #1 in its category and #5 overall on this list, a very strong showing.

In summary, the fastest supercomputer in the world supports diverse application requirements, driven by simulation, big data, and AI/ML, employs the latest processor, acceleration and interconnect technologies from IBM, Nvidia and Mellanox, respectively, and shows unprecedented power efficiency for that scale of machines. Critical to the success of this truly versatile system is Linux, in Red Hat Enterprise Linux, as the glue that brings everything together and allows us to interact with this modern marvel.

Source

Software Freedom Conservancy Shares Thoughts on Microsoft Joining Open Invention Network’s Patent Non-Aggression Pact


Posted by msmash
on Sunday October 14, 2018 @06:10PM
from the minute-details dept.

Earlier this week, Microsoft announced that it was joining the open-source patent consortium Open Invention Network (OIN). The press release the two shared was short on details about how the two organizations intend to work together and what the move means for, say, the billions of dollars Microsoft earns each year from its Android patents (Google is a member of OIN, too). Software Freedom Conservancy (SFC), a non-profit organization that promotes open-source software, has weighed in on the subject:
While [this week’s] announcement is a step forward, we call on Microsoft to make this just the beginning of their efforts to stop their patent aggression efforts against the software freedom community. The OIN patent non-aggression pact is governed by something called the Linux System Definition. This is the most important component of the OIN non-aggression pact, because it’s often surprising what is not included in that Definition especially when compared with Microsoft’s patent aggression activities. Most importantly, the non-aggression pact only applies to the upstream versions of software, including Linux itself.
We know that Microsoft has done patent troll shakedowns in the past on Linux products related to the exfat filesystem. While we at Conservancy were successful in getting the code that implements exfat for Linux released under GPL (by Samsung), that code has not been upstreamed into Linux. So, Microsoft has not included any patents they might hold on exfat into the patent non-aggression pact.
We now ask Microsoft, as a sign of good faith and to confirm its intention to end all patent aggression against Linux and its users, to now submit to upstream the exfat code themselves under GPLv2-or-later. This would provide two important protections to Linux users regarding exfat: (a) it would include any patents that read on exfat as part of OIN’s non-aggression pact while Microsoft participates in OIN, and (b) it would provide the various benefits that GPLv2-or-later provides regarding patents, including an implied patent license and those protections provided by GPLv2 (and possibly other GPL protections and assurances as well).

 


Source

Open Hardware – Challenges » Linux Magazine

Changes in funding, manufacturing, and technology have helped move open hardware from an idea to reality.

Like free software, open hardware was an idea before it was a reality. Until developments in the tech industry caught up with the idea, open hardware was impractical. Even now, in 2018, open hardware is at the stage where free software was in about 1999: ready to make its mark, but not being developed by major hardware manufacturers.

As late as 1999, Richard M. Stallman of the Free Software Foundation (FSF) downplayed the practicality of what he called free hardware. In “On ‘Free Hardware'” [1], Stallman suggested that working on free hardware was “a fine thing to do” and said that the FSF would put enthusiasts in touch with each other. However, while firmware is just software, and specifications could be made freely available, he did not think that either would do much good because of the difficulties of manufacturing, writing:

We don’t have automatic copiers for hardware. (Maybe nanotechnology will provide that capability.) So you must expect that making fresh a copy of some hardware will cost you, even if the hardware or design is free. The parts will cost money, and only a very good friend is likely to make circuit boards or solder wires and chips for you as a favor.

[…]


Source

Workarounds for Exporting to EPUB » Linux Magazine

With a little workaround knowledge and Calibre, you can get the most out of LibreOffice’s EPUB export function.

Starting with the 6.0 release, LibreOffice supports export to EPUB format. Although an extension for EPUB export has been available for several years, this feature is long overdue, considering the growing popularity of ebooks. Unfortunately, though, using ebook export is not as simple as selecting an option from File | Export As. Unless you are exporting the simplest of document designs, many of LibreOffice’s features are not exported. Just as efficient export to HTML requires a special approach to document design, successful EPUB export requires a few workarounds, or else editing the result in Calibre or another tool like Sigil. Since post-export editing requires a knowledge of Cascading Style Sheets (CSS), for most users the easiest place to make adjustments is in the original document.

The simpler a document, the more likely it is to export to EPUB without problems. If a document is entirely text, most likely you can select File | Export As | Export Directly to EPUB and let LibreOffice do the work using the default export settings. You can probably also use File | Export As | Export as EPUB successfully; this allows you to tweak the metadata that is otherwise borrowed from your user settings and File | Properties, such as where page breaks occur and the cover image for the export file. However, although Export as EPUB offers the option for a fixed format, the setting applies mainly to page breaks, and the formatting options remain limited (Figure 1).


Figure 1: LibreOffice offers a few controls for EPUB export, but drops many common formatting choices.

At the opposite extreme, if you require elaborate formatting – as in a brochure – the format for you is probably PDF. An offshoot of the PostScript printing language, the PDF format is well equipped to handle any design you care to use, although online forms in particular can be challenging. LibreOffice has had PDF export for years, and the options available from File | Export As | Export as PDF allow fine-tuned control (Figure 2).


Figure 2: In contrast to the EPUB export controls, LibreOffice’s PDF controls offer a rich array of options.

However, if your formatting is somewhere in the middle of these two extremes, you can still use EPUB export with some success, so long as you know which design features are available and how to work around some of the deficiencies.

Basic Design Limits

Successful EPUB export is often a matter of trial and error. It may take several attempts to get the desired results. For that reason, you can save yourself effort if you use character and paragraph styles, which allow you to quickly make adjustments. Other LibreOffice styles – frames, pages, lists, and tables – are not recognized by the EPUB export filter, but you can use them for your own convenience.

The EPUB export filter does not carry over full character and paragraph styles, but you can count on font size, line spacing, alignment, and font color to export. If you have formatted some characters differently from the rest of the text, the difference is preserved, including subscript or superscript. Footnotes and endnotes are also preserved, as well as hyperlinks.

However, that is the end of the good news. Any characters that are not entered directly are lost and will not appear in the output. That means that while list items are preserved, bullets and numbers added from the toolbar or by a list style are dropped, as are cross-references based on page numbers or headings and useful fields like page numbers.

Similarly, any sort of text frame is simply dropped. Text in sections, text frames, columns, and captions is exported, but only as regular paragraphs. The same is true for drop capitals. Text in table cells is also exported, but with erratic spacing and text formatting that makes it unusable.

Worst of all, no graphics or objects created using the Drawing toolbar are exported.

If designing for EPUBs sounds like a study in limitations, you are beginning to get the right idea.

Some Basic Workarounds

This advice is worth repeating: the simpler the document design, the less trouble with exporting to EPUB. The trouble, of course, is that you often need the features not supported by EPUB export. Fortunately, other ways exist to get the same results. At times, you can even use the limitations themselves to find a workaround.

To start with, if features cannot be exported automatically, you can export them by entering them manually. For example, if you want a bulleted list, use Insert | Special Character and enter the bullet character manually (Figure 3). Keep in mind that you cannot indent a list item’s second line to align it with the first line of text rather than the bullet, so you will want to keep your list items short. Cross references can also be entered manually, although admittedly at the cost of increased maintenance if you edit documents. If you are using the Fixed setting in the Export as EPUB window, you can also manually enter headers and footers, including lines to separate them from the rest of the text, by adding them to each page.


Figure 3: If you want a bulleted list in an EPUB export, you have to add the bullet character from the Special Characters dialog window.

If you want a drop capital, you can create one by adding a text box (Insert | Text Box), inserting a large capital into it, and adjusting the space between the capital and the right and bottom of the frame and the surrounding text. Exporting will drop the frame, leaving the capital and the space around it. Text boxes can also be added to create a multicolumn layout, although you need to be aware that EPUB reduces the spacing set between columns in LibreOffice.

Still more features can be preserved by not using LibreOffice’s export feature at all. Instead, install Calibre and load the original LibreOffice file into it. Calibre has its own conversion filters, and the one for EPUB is much more versatile. For instance, Calibre exports graphics and preserves their alignment. While it takes the first graphic as the cover, that can be edited later (Figure 4).


Figure 4: Calibre is useful for touching up EPUB exports.

Another useful feature of exporting with Calibre is that, while it cannot reproduce cross references based on page numbers or headings, it can reproduce those made with bookmarks.

Other workarounds depend on a knowledge of CSS. You can, for example, specify a font to use, rather than letting users choose their own. However, the use of CSS to produce ebooks is a lengthy subject and deserves an article to itself.

Other Considerations

LibreOffice probably chose to add EPUB support because it is an open standard. Kindle/Mobi is not an open standard, but if you need that format, convert your LibreOffice or EPUB file in Calibre – or, better yet, convert the original LibreOffice .odt file, since fewer conversions mean less trouble. However, I am still investigating the limitations of converting to the Kindle/Mobi format.
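
If you prefer working from the command line, Calibre’s ebook-convert tool can handle the same conversions as its GUI; the file names below are placeholders:

ebook-convert mybook.odt mybook.epub   # ODT straight to EPUB using Calibre's filters
ebook-convert mybook.odt mybook.mobi   # or straight to MOBI for Kindle devices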

Be aware, too, that the various online validators for different ebook uses can produce wildly differing results. Some may not permit a fixed-format file, so check the preferred format before you begin. Since an EPUB file that reads well in Calibre may not be acceptable for a particular use, always refer back to any standards the output file is required to meet.

As open source software, LibreOffice offers a convenient graphical interface for producing ebooks. For now, its EPUB tools remain basic, but, with the addition of Calibre, you can still have an open source tool chain for producing ebooks. Should both those applications fail to give the results you want, then look into CSS, which can be edited in Calibre or the text editor of your choice.

Source

Have a Plan for Netplan

Ubuntu changed networking. Embrace the YAML.

If I’m being completely honest, I still dislike the switch from eth0,
eth1, eth2 to names like enp3s0, enp4s0, enp5s0. I’ve learned to accept
it and mutter to myself while I type in unfamiliar interface names. Then I
installed the new LTS version of Ubuntu and typed vi
/etc/network/interfaces. Yikes. After a technological lifetime of entering
my server’s IP information in a simple text file, that’s no longer how
things are done. Sigh. The good news is that while figuring out Netplan for
both desktop and server environments, I fixed a nagging DNS issue I’ve had
for years (more on that later).

The Basics of Netplan

The old way of configuring Debian-based network interfaces was based on the
ifupdown package. The new default is called Netplan, and
although it’s not
terribly difficult to use, it’s drastically different. Netplan is sort of
the interface used to configure the back-end dæmons that actually
configure the interfaces. Right now, the back ends supported are
NetworkManager and networkd.

If you tell Netplan to use NetworkManager, all interface configuration
control is handed off to the GUI interface on the desktop. The
NetworkManager program itself hasn’t changed; it’s the same GUI-based
interface configuration system you’ve likely used for years.

If you tell Netplan to use networkd, systemd itself handles the interface
configurations. Configuration is still done with Netplan files, but once
“applied”, Netplan creates the back-end configurations systemd requires. The
Netplan files are vastly different from the old /etc/network/interfaces
file, but they use YAML syntax, and they’re pretty easy to figure out.

The Desktop and DNS

If you install a GUI version of Ubuntu, Netplan is configured with
NetworkManager as the back end by default. Your system should get IP
information via DHCP or static entries you add via GUI. This is usually not
an issue, but I’ve had a terrible time with my split-DNS setup and
systemd-resolved. I’m sure there is a magical combination of configuration
files that will make things work, but I’ve spent a lot of time, and it
always behaves a little oddly. With my internal DNS server resolving domain
names differently from external DNS servers (that is, split-DNS), I get random
lookup failures. Sometimes ping will resolve, but
dig will not. Sometimes
the internal A record will resolve, but a CNAME will not. Sometimes I get
resolution from an external DNS server (from the internet), even though I
never configure anything other than the internal DNS!

I decided to disable systemd-resolved. That has the potential to break DNS
lookups in a VPN, but I haven’t had an issue with that. With
resolved
handling DNS information, the /etc/resolv.conf file points to 127.0.0.53 as
the nameserver. Disabling systemd-resolved will stop the automatic creation
of the file. Thankfully, NetworkManager itself can handle the creation and
modification of /etc/resolv.conf. Once I make that change, I no longer have
an issue with split-DNS resolution. It’s a three-step process:

  1. Do sudo systemctl disable systemd-resolved.service.
  2. Then sudo rm /etc/resolv.conf (get rid of the symlink).
  3. Edit the /etc/NetworkManager/NetworkManager.conf file, and in the
    [main] section, add a line that reads dns=default.

Once those steps are complete, NetworkManager itself will create the
/etc/resolv.conf file, and the DNS server supplied via DHCP or static entry
will be used instead of a 127.0.0.53 entry. I’m not sure why the
resolved
dæmon incorrectly resolves internal addresses for me, but the above method
has been foolproof, even when switching between networks with my
laptop.
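
For reference, here is roughly what the two files might look like after the
change; the plugins line reflects a stock Ubuntu install, and the nameserver
and search entries are only examples for an internal network:

# /etc/NetworkManager/NetworkManager.conf (excerpt)
[main]
plugins=ifupdown,keyfile
dns=default

# /etc/resolv.conf, now written by NetworkManager (example contents)
nameserver 192.168.1.1
search example.lan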

Netplan CLI Configuration

If Ubuntu is installed in server mode, it is almost certainly configured to
use networkd as the back end. To check, have a look at the
/etc/netplan/config.yaml file. The renderer should be set to
networkd
in order to use the systemd-networkd back end. The file should look
something like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: true

Important note: remember that with
YAML files, whitespace matters, so the indentation is important. It’s also
very important to remember that after making any changes, you need to
run sudo netplan apply so the back-end configuration files are
populated.
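
If you’re curious what Netplan actually hands to the back end, you can
generate the configuration without applying it and peek at the result; with
the networkd renderer, the generated unit files typically land under
/run/systemd/network/ (treat the exact path as a detail that could change
between releases):

sudo netplan generate            # write the back-end config without applying it
ls /run/systemd/network/         # networkd units generated from /etc/netplan
sudo netplan apply               # regenerate the files and apply the configuration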

The default renderer is networkd, so it’s possible you won’t have that line
in your configuration file. It’s also possible your configuration file will
be named something different in the /etc/netplan folder. All .yaml files in
that folder are read, so it doesn’t matter what the file is called as long
as it ends with .yaml. Static configurations are fairly simple to set up:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      addresses:
        - 192.168.1.10/24
        - 10.10.10.10/16
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]

Notice I’ve assigned multiple IP addresses to the interface. Netplan does
not support virtual interfaces like enp3s0:0; rather, multiple IP addresses
can be assigned to a single interface.
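
After a netplan apply, a quick way to confirm that both addresses ended up
on the interface is the ip tool (the interface name here matches the example
above):

ip address show enp2s0    # both addresses should appear as separate inet lines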

Unfortunately, networkd doesn’t create an /etc/resolv.conf file if you
disable the resolved dæmon. If you have problems with split-DNS on a
headless computer, the best solution I’ve come up with is to disable
systemd-resolved and then manually create an /etc/resolv.conf file. Since
headless computers don’t usually move around as much as laptops, it’s likely
the /etc/resolv.conf file won’t need to be changed. Still, I wish
networkd
had an option to manage the resolv.conf file the same way
NetworkManager
does.
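
A hand-maintained /etc/resolv.conf for a headless machine needs only a
couple of lines; the addresses and search domain below are placeholders for
your internal DNS setup:

# /etc/resolv.conf, created by hand after disabling systemd-resolved
nameserver 192.168.1.1
nameserver 192.168.1.2
search example.lan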

Advanced Network Configurations

The configuration formats are different, but it’s still possible to do more
advanced network configurations with Netplan:

Bonding:

network:
  version: 2
  renderer: networkd
  bonds:
    bond0:
      dhcp4: yes
      interfaces:
        - enp2s0
        - enp3s0
      parameters:
        mode: active-backup
        primary: enp2s0

The various bonding modes (balance-rr,
active-backup, balance-xor,
broadcast, 802.3ad, balance-tlb and
balance-alb) are supported.

Bridging:

network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp4s0
        - enp3s0

Bridging is even simpler to set up. This configuration creates a bridge
device using the two interfaces listed. The device (br0) gets address
information via DHCP.

CLI Networking Commands

If you’re a crusty old sysadmin like me, you likely type
ifconfig to see
IP information without even thinking. Unfortunately, those tools are not
usually installed by default. This isn’t actually the fault of Ubuntu and
Netplan; the old ifconfig toolset has been deprecated. If you want to use
the old ifconfig tool, you can install the package:

sudo apt install net-tools

But if you want to do it the “correct” way, the new ip tool is the proper
replacement. Here are some equivalents of things I commonly do with
ifconfig:

Show network interface information.

Old way:

ifconfig

New way:

ip address show

(Or you can just do ip a, which is actually less typing than
ifconfig.)

Bring interface up.

Old way:

ifconfig enp3s0 up

New way:

ip link set enp3s0 up

Assign IP address.

Old way:

ifconfig enp3s0 192.168.1.22

New way:

ip address add 192.168.1.22 dev enp3s0

Assign complete IP information.

Old way:

ifconfig enp3s0 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255

New way:

ip address add 192.168.1.22/24 broadcast 192.168.1.255 dev enp3s0

Add alias interface.

Old way:

ifconfig enp3s0:0 192.168.100.100/24

New way:

ip address add 192.168.100.100/24 dev enp3s0 label enp3s0:0

Show the routing table.

Old way:

route

New way:

ip route show

Add route.

Old way:

route add -net 192.168.55.0/24 dev enp4s0

New way:

ip route add 192.168.55.0/24 dev enp4s0

Old Dogs and New Tricks

I hated Netplan when I first installed Ubuntu 18.04. In fact, on the
particular server I was installing, I actually started over and installed
16.04 because it was “comfortable”. After a while, curiosity got the better
of me, and I investigated the changes. I’m still more comfortable with the
old /etc/network/interfaces file, but I have to admit, Netplan makes a
little more sense. There is a single “front end” for configuring networks,
and it uses different back ends for the heavy lifting. Right now, the only
back ends are the GUI NetworkManager and the systemd-networkd
dæmon. With
the modular system, however, that could change someday without the need to
learn a new way of configuring interfaces. A simple change to the
renderer line would send the configuration information to a new back end.

With regard to the new command-line networking tool (ip vs.
ifconfig),
it really behaves more like other network devices (routers and so on), so that’s
probably a good change as well. As technologists, we need to be ready and
eager to learn new things. If we weren’t always trying the next best thing,
we’d all be configuring Trumpet Winsock to dial in to the internet on our
Windows 95 machines. I’m glad I tried that new Linux thing, and while it
wasn’t quite as dramatic, I’m glad I tried Netplan as well!

Source

Canonical Announces Partnership with Eurotech, the Big Four to End Support of TLS 1.0 and 1.1, Sony Using Blockchain for DRM, NETWAYS Web Services Launches IaaS OpenStack, Grey Hat Patching MikroTik Routers and Paul Allen Dies at 65

News briefs for October 16, 2018.

Canonical
announced a partnership with Eurotech
to help organizations
advance in the IoT realm. In connection with this partnership, Canonical
“has published a Snap for the Eclipse Kura project—the
popular, open-source Java-based IoT edge framework. Having Kura available as
a Snap—the universal Linux application packaging format—will
enable a wider availability of Linux users across multiple distributions to
take advantage of the framework and ensure it is supported on more hardware.
Snap support will also extend on Eurotech’s commercially supported
version; the Everywhere Software Framework (ESF).”

Apple, Google, Microsoft and Mozilla have all announced the end of support
for the TLS 1.0 and 1.1 standards starting in 2020, ZDNet reports. Chrome
and Firefox already support TLS 1.3, and Microsoft and Apple will soon
follow suit.

Sony announced it’s planning to use the blockchain for digital rights
management (DRM). According to the story on Engadget, the company plans to
begin with written educational materials from Sony Global Education. This
blockchain system is “built on Sony’s pre-existing DRM tools, which keep
track of the distribution of copyrighted materials, but will have advantages
that come with blockchain’s inherent security.”

NETWAYS Web Services launches IaaS OpenStack. According to the press
release, “the Open Source experts from ‘NETWAYS Web Services’ (NWS) add with
OpenStack a customizable, fully managed Infrastructure as a Service (IaaS)
to their platform.” Customers can choose between SSD- or Ceph-based
packages, and in addition to OpenStack, the platform offers “a diverse
selection of Open Source applications for various purposes”. If you’re
interested, you can try NWS OpenStack free for 30 days. For more information
and to get started, go here.

A grey-hat hacker is breaking into MikroTik routers and patching them so
they can’t be compromised by cryptojackers or other attackers. According to
ZDNet, the hacker, who goes by Alexey, is a system administrator and claims
to have disinfected more than 100,000 MikroTik routers. He told ZDNet that
he added firewall rules to block access to the routers from outside the
local network, and then “in the comments, I wrote information about the
vulnerability and left the address of the @router_os Telegram channel, where
it was possible for them to ask questions.” Evidently, a few folks have said
“thanks”, but many are outraged.

Paul Allen—”co-founder of Microsoft and noted technologist,
philanthropist, community builder, conservationist, musician and supporter of
the arts”—passed away yesterday. See the statements released on behalf
of the Allen Family, Vulcan Inc. and the Paul G. Allen network at the Vulcan
Inc. website
.

Source

Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2) | Linux.com

In Part 1 of our series, we got our local Kubernetes cluster up and running with Docker, Minikube, and kubectl. We set up an image repository, and tried building, pushing, and deploying a container image with code changes we made to the Hello-Kenzan app. It’s now time to automate this process.

In Part 2, we’ll set up continuous delivery for our application by running Jenkins in a pod in Kubernetes. We’ll create a pipeline using a Jenkins 2.0 Pipeline script that automates building our Hello-Kenzan image, pushing it to the registry, and deploying it in Kubernetes. That’s right: we are going to deploy pods from a registry pod using a Jenkins pod. While this may sound like a bit of deployment alchemy, once the infrastructure and application components are all running on Kubernetes, it makes the management of these pieces easy since they’re all under one ecosystem.

With Part 2, we’re laying the last bit of infrastructure we need so that we can run our Kr8sswordz Puzzle in Part 3.

Read all the articles in the series:


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating and Building a Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Part 1, in which we set up our image repository running in a pod (to do so quickly, you can run the npm part1 automated script detailed below).

If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info

kubectl get pods --all-namespaces

Make sure that the registry pod has a Status of Running.

We are ready to build out our Jenkins infrastructure.

Remember, you don’t actually have to type the commands below—just press Enter at each step and the script will enter the command for you!

1. First, let’s build the Jenkins image we’ll use in our Kubernetes cluster.

docker build -t 127.0.0.1:30400/jenkins:latest -f applications/jenkins/Dockerfile applications/jenkins

2. Once again we’ll need to set up the Socat Registry proxy container to push images, so let’s build it. Feel free to skip this step if the socat-registry image already exists from Part 1 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

3. Run the proxy container from the image.

docker stop socat-registry; docker rm socat-registry;

docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry


This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command
lsof -i :30400

4. With our proxy container up and running, we can now push our Jenkins image to the local repository.

docker push 127.0.0.1:30400/jenkins:latest

You can see the newly pushed Jenkins image in the registry UI using the following command.

minikube service registry-ui

5. The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

6. Deploy Jenkins, which we’ll use to create our automated CI/CD pipeline. It will take the pod a minute or two to roll out.

kubectl apply -f manifests/jenkins.yaml; kubectl rollout status deployment/jenkins

Inspect all the pods that are running. You’ll see a pod for Jenkins now.

kubectl get pods


Jenkins as a CD tool needs special rights in order to interact with the Kubernetes cluster, so we’ve set up RBAC (Role-Based Access Control) authorization for it inside the jenkins.yaml deployment manifest. RBAC consists of a Role, a ServiceAccount, and a Binding object that binds the two together. Here’s how we configured Jenkins with these resources:

Role: For simplicity, we leveraged the pre-existing ClusterRole “cluster-admin”, which by default has unlimited access to the cluster. (In a real-life scenario, you might want to narrow down Jenkins’ access rights by creating a new role with the least-privileged PolicyRule.)

ServiceAccount: We created a new ServiceAccount named “jenkins”. The property “automountServiceAccountToken” has been set to true; this automatically mounts the authentication resources needed to set up a kubeconfig context on the pod (i.e., cluster info, a user represented by a token, and a namespace).

ClusterRoleBinding: We created a ClusterRoleBinding that binds the “jenkins” ServiceAccount to the “cluster-admin” ClusterRole.

Lastly, we tell our Jenkins deployment to run as the Jenkins ServiceAccount.
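
To make that concrete, the RBAC portion of the manifest looks roughly like the sketch below. It illustrates the objects described above rather than reproducing the exact jenkins.yaml from the repo, so resource names and the namespace may differ:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins                  # referenced by the deployment's serviceAccountName
  namespace: default
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin            # pre-existing role with full cluster access
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: default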


Notice that our Jenkins deployment has an initContainer. This is a container that runs to completion before the main container starts on our pod. The job of this init container is to create a kubeconfig file based on the provided context and to share it with the main Jenkins container through an “emptyDir” volume.
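
Structurally, that part of the deployment looks something like the following sketch; the image, command, and mount paths are placeholders meant to show the init container and the shared emptyDir volume, not the exact contents of jenkins.yaml:

spec:
  template:
    spec:
      serviceAccountName: jenkins
      initContainers:
        - name: kubeconfig-init          # runs to completion before the main container
          image: busybox                 # placeholder image
          command: ["sh", "-c", "generate-kubeconfig > /shared/config"]  # placeholder command
          volumeMounts:
            - name: shared-config
              mountPath: /shared
      containers:
        - name: jenkins
          image: 127.0.0.1:30400/jenkins:latest
          volumeMounts:
            - name: shared-config
              mountPath: /var/jenkins_home/.kube   # kubeconfig appears as .kube/config
      volumes:
        - name: shared-config
          emptyDir: {}                   # scratch volume shared by the two containers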

7. Open the Jenkins UI in a web browser.

minikube service jenkins

8. Display the Jenkins admin password with the following command, and right-click to copy it.

kubectl exec -it `kubectl get pods --selector=app=jenkins --output=jsonpath={.items..metadata.name}` cat /var/jenkins_home/secrets/initialAdminPassword

9. Switch back to the Jenkins UI. Paste the Jenkins admin password in the box and click Continue. Click Install suggested plugins. Plugins have actually been pre-downloaded during the Jenkins image build, so this step should finish fairly quickly.


One of the plugins being installed is Kubernetes Continuous Deploy, which allows Jenkins to directly interact with the Kubernetes cluster rather than through kubectl commands. This plugin was pre-downloaded with the Jenkins image build.

10. Create an admin user and credentials, and click Save and Continue. (Make sure to remember these credentials as you will need them for repeated logins.)


11. On the Instance Configuration page, click Save and Finish. On the next page, click Restart (if it appears to hang for some time on restarting, you may have to refresh the browser window). Login to Jenkins.

12. Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins on the left, click on Credentials, select the Jenkins store, then Global credentials (unrestricted), and click Add Credentials in the left menu.

13. The following values must be entered precisely as indicated:

  • Kind: Kubernetes configuration (kubeconfig)

  • ID: kenzan_kubeconfig

  • Kubeconfig: From a file on the Jenkins master

  • File: /var/jenkins_home/.kube/config

Finally, click OK.


14. We now want to create a new pipeline for use with our Hello-Kenzan app. Back on Jenkins Home, on the left, click New Item.


Enter the item name as Hello-Kenzan Pipeline, select Pipeline, and click OK.


15. Under the Pipeline section at the bottom, change the Definition to be Pipeline script from SCM.

16. Change the SCM to Git. Change the Repository URL to be the URL of your forked Git repository, such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd.


Note that for the Script Path, we are using a Jenkinsfile located in the root of our project on our GitHub repo. It defines the build, push, and deploy steps for our hello-kenzan application.

Click Save. On the left, click Build Now to run the new pipeline. You should see it run through the build, push, and deploy steps in a few seconds.


17. After all pipeline stages are colored green as complete, view the Hello-Kenzan application.

minikube service hello-kenzan

You might notice that you’re not seeing the uncommitted change you previously made to index.html in Part 1. That’s because Jenkins wasn’t using your local code. Instead, Jenkins pulled the code from your forked repo on GitHub and used that code to build the image, push it to the registry, and then deploy it.

Pushing Code Changes Through the Pipeline

Now let’s see some Continuous Integration in action! Try changing index.html in our Hello-Kenzan app, then building again to verify that the Jenkins build process works.

a. Open applications/hello-kenzan/index.html in a text editor.

nano applications/hello-kenzan/index.html

b. Add the following html at the end of the file (or any other html you like). (Tip: You can right-click in nano and choose Paste.)

<p style="font-family:sans-serif">For more from Kenzan, check out our
<a href="http://kenzan.io">website</a>.</p>

c. Press Ctrl+X to close the file, type Y to confirm saving the changes, and press Enter to keep the same filename.

d. Commit the changed file to your Git repo (you may need to enter your GitHub credentials):

git commit -am "Added message to index.html"

git push

In the Jenkins UI, click Build Now to run the build again.


18. View the updated Hello-Kenzan application. You should see the message you added to index.html. (If you don’t, hold down Shift and refresh your browser to force it to reload.)

minikube service hello-kenzan


And that’s it! You’ve successfully used your pipeline to automatically pull the latest code from your Git repository, build and push a container image to your cluster, and then deploy it in a pod. And you did it all with one click—that’s the power of a CI/CD pipeline.

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -

b. sudo apt-get install -y nodejs

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd

b. npm install

3. Start the script:

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

Up Next

In Parts 3 and 4, we will deploy our Kr8sswordz Puzzle app through a Jenkins CI/CD pipeline. We will demonstrate its use of caching with etcd, as well as scaling the app up with multiple puzzle service instances so that we can try running a load test. All of this will be shown in the UI of the app itself so that we can visualize these pieces in action.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front-end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.

Source

Automotive Grade Linux Enables Telematics and Instrument Cluster Applications with Latest UCB 6.0 Release

SAN FRANCISCO, October 15, 2018 – Automotive Grade Linux (AGL), a collaborative cross-industry effort developing an open platform for the connected car, today announced the latest release of the AGL platform, Unified Code Base (UCB) 6.0, which features device profiles for telematics and instrument cluster.

“The addition of the telematics and instrument cluster profiles opens up new deployment possibilities for AGL,” said Dan Cauchy, Executive Director of Automotive Grade Linux at The Linux Foundation. “Motorcycles, fleet services, rental car tracking, basic economy cars with good old-fashioned radios, essentially any vehicle without a head unit or infotainment display can now leverage the AGL Unified Code Base as a starting point for their products.”

Developed through a joint effort by dozens of member companies, the AGL Unified Code Base (UCB) is an open source software platform that can serve as the de facto industry standard for infotainment, telematics and instrument cluster applications. Sharing a single software platform across the industry reduces fragmentation and accelerates time-to-market by encouraging the growth of a global ecosystem of developers and application providers that can build a product once and have it work for multiple automakers.

Many AGL members have already started integrating the UCB into their production plans. Mercedes-Benz Vans is using AGL as a foundation for a new onboard operating system for its commercial vehicles, and Toyota’s AGL-based infotainment system is now in vehicles globally.

The AGL UCB 6.0 includes an operating system, middleware and application framework. Key features include:

  • Device profiles for telematics and instrument cluster
  • Core AGL Service layer can be built stand-alone
  • Reference applications including media player, tuner, navigation, web browser, Bluetooth, WiFi, HVAC control, audio mixer and vehicle controls
  • Integration with simultaneous display on IVI system and instrument cluster
  • Multiple display capability including rear seat entertainment
  • Wide range of hardware board support including Renesas, Qualcomm Technologies, Intel, Texas Instruments, NXP and Raspberry Pi
  • Software Development Kit (SDK) with application templates
  • SmartDeviceLink ready for easy integration and access to smartphone applications
  • Application Services APIs for navigation, voice recognition, Bluetooth, audio, tuner and CAN signaling
  • Near Field Communication (NFC) and identity management capabilities including multilingual support
  • Over-The-Air (OTA) upgrade capabilities
  • Security frameworks with role-based-access control

The full list of additions to the UCB 6.0 can be found here.

The global AGL community will gather in Dresden, Germany for the bi-annual All Member Meeting on October 17-18, 2018. At this gathering, members and community leaders will get together to share best practices and future plans for the project. To learn more or register, please visit here.

About Automotive Grade Linux (AGL)

Automotive Grade Linux is a collaborative open source project that is bringing together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car. With Linux at its core, AGL is developing an open platform from the ground up that can serve as the de facto industry standard to enable rapid development of new features and technologies. Although initially focused on In-Vehicle-Infotainment (IVI), AGL is the only organization planning to address all software in the vehicle, including instrument cluster, heads up display, telematics, advanced driver assistance systems (ADAS) and autonomous driving. The AGL platform is available to all, and anyone can participate in its development. Learn more: https://www.automotivelinux.org/

Automotive Grade Linux is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. www.linuxfoundation.org

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Inquiries

Emily Olin

Automotive Grade Linux

eolin@linuxfoundation.org

Source

Install Plex Media Server on CentOS 7

by Marin Todorov | Published: October 15, 2018


Marin Todorov

I am a bachelor in computer science and a Linux Foundation Certified System Administrator. Currently working as a Senior Technical support in the hosting industry. In my free time I like testing new software and inline skating.


Source

How to List All Virtual Hosts in Apache Web Server

by Aaron Kili | Published: October 16, 2018


Aaron Kili

Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.


Source
