Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2) | Linux.com

In Part 1 of our series, we got our local Kubernetes cluster up and running with Docker, Minikube, and kubectl. We set up an image repository, and tried building, pushing, and deploying a container image with code changes we made to the Hello-Kenzan app. It’s now time to automate this process.

In Part 2, we’ll set up continuous delivery for our application by running Jenkins in a pod in Kubernetes. We’ll create a pipeline using a Jenkins 2.0 Pipeline script that automates building our Hello-Kenzan image, pushing it to the registry, and deploying it in Kubernetes. That’s right: we are going to deploy pods from a registry pod using a Jenkins pod. While this may sound like a bit of deployment alchemy, once the infrastructure and application components are all running on Kubernetes, it makes the management of these pieces easy since they’re all under one ecosystem.

With Part 2, we’re laying the last bit of infrastructure we need so that we can run our Kr8sswordz Puzzle in Part 3.

Read all the articles in the series:


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating and Building a Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Part 1, in which we set up our image repository running in a pod (to do so quickly, you can run the automated npm run part1 script described in the Automated Scripts section below).

If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info

kubectl get pods --all-namespaces

Make sure that the registry pod has a Status of Running.

We are ready to build out our Jenkins infrastructure.

Remember, you don’t actually have to type the commands below—just press Enter at each step and the script will enter the command for you!

1. First, let’s build the Jenkins image we’ll use in our Kubernetes cluster.

docker build -t 127.0.0.1:30400/jenkins:latest -f applications/jenkins/Dockerfile applications/jenkins

2. Once again we’ll need the Socat Registry proxy container to push images, so let’s build it. Feel free to skip this step if the socat-registry image already exists from Part 1 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

3. Run the proxy container from the image.

docker stop socat-registry; docker rm socat-registry;

docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry


This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command
lsof -i :30400

4. With our proxy container up and running, we can now push our Jenkins image to the local repository.

docker push 127.0.0.1:30400/jenkins:latest

You can see the newly pushed Jenkins image in the registry UI using the following command.

minikube service registry-ui

5. The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

6. Deploy Jenkins, which we’ll use to create our automated CI/CD pipeline. It will take the pod a minute or two to roll out.

kubectl apply -f manifests/jenkins.yaml; kubectl rollout status deployment/jenkins

Inspect all the pods that are running. You’ll see a pod for Jenkins now.

kubectl get pods


Jenkins as a CD tool needs special rights in order to interact with the Kubernetes cluster, so we’ve set up RBAC (Role-Based Access Control) authorization for it inside the jenkins.yaml deployment manifest. RBAC consists of a Role, a ServiceAccount, and a Binding object that binds the two together. Here’s how we configured Jenkins with these resources:

Role: For simplicity, we leveraged the pre-existing ClusterRole “cluster-admin”, which by default has unlimited access to the cluster. (In a real-life scenario you might want to narrow down Jenkins’ access rights by creating a new Role with the least-privileged PolicyRule.)

ServiceAccount: We created a new ServiceAccount named “jenkins”. The property “automountServiceAccountToken” has been set to true; this automatically mounts the authentication resources needed for a kubeconfig context to be set up on the pod (i.e., cluster info, a user represented by a token, and a namespace).

RoleBinding: We created a ClusterRoleBinding that binds the “jenkins” ServiceAccount to the “cluster-admin” ClusterRole.

Lastly, we tell our Jenkins deployment to run as the Jenkins ServiceAccount.
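As a rough illustration of the ServiceAccount and ClusterRoleBinding described above, the resources generally look like the sketch below. The resource names here are placeholders; the authoritative definitions live in manifests/jenkins.yaml in the repo.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins                     # the ServiceAccount our deployment runs as
automountServiceAccountToken: true  # mounts the token, CA cert, and namespace into the pod
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-cluster-admin       # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # pre-existing role; narrow this down in real deployments
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default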


Notice our Jenkins deployment has an initContainer. This is a container that runs to completion before the main container starts in our pod. The job of this init container is to create a kubeconfig file based on the provided context and to share it with the main Jenkins container through an “emptyDir” volume.
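Structurally, the pattern looks roughly like the sketch below. This is not the exact contents of manifests/jenkins.yaml: the init container image and the kubeconfig-building script are illustrative assumptions. The pieces to notice are serviceAccountName, initContainers, and the shared emptyDir volume.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins            # run as the ServiceAccount described above
      initContainers:
      - name: write-kubeconfig               # runs to completion before Jenkins starts
        image: bitnami/kubectl               # illustrative image with kubectl available
        env:
        - name: KUBECONFIG
          value: /var/jenkins_home/.kube/config
        command:
        - sh
        - -c
        - |
          # Build a kubeconfig from the mounted ServiceAccount credentials (illustrative)
          mkdir -p /var/jenkins_home/.kube
          kubectl config set-cluster minikube --server=https://kubernetes.default.svc \
            --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          kubectl config set-credentials jenkins \
            --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          kubectl config set-context minikube --cluster=minikube --user=jenkins --namespace=default
          kubectl config use-context minikube
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      containers:
      - name: jenkins
        image: 127.0.0.1:30400/jenkins:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home        # Jenkins finds the kubeconfig the init container wrote
      volumes:
      - name: jenkins-home
        emptyDir: {}                           # shared between the init and main containers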

7. Open the Jenkins UI in a web browser.

minikube service jenkins

8. Display the Jenkins admin password with the following command, and right-click to copy it.

kubectl exec -it `kubectl get pods --selector=app=jenkins --output=jsonpath={.items..metadata.name}` cat /var/jenkins_home/secrets/initialAdminPassword

9. Switch back to the Jenkins UI. Paste the Jenkins admin password in the box and click Continue. Click Install suggested plugins. Plugins have actually been pre-downloaded during the Jenkins image build, so this step should finish fairly quickly.


One of the plugins being installed is Kubernetes Continuous Deploy, which allows Jenkins to directly interact with the Kubernetes cluster rather than through kubectl commands. This plugin was pre-downloaded with the Jenkins image build.

10. Create an admin user and credentials, and click Save and Continue. (Make sure to remember these credentials as you will need them for repeated logins.)


11. On the Instance Configuration page, click Save and Finish. On the next page, click Restart (if it appears to hang for some time on restarting, you may have to refresh the browser window). Log in to Jenkins.

12. Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins on the left, click Credentials, select the Jenkins store, then Global credentials (unrestricted), and click Add Credentials in the left menu.

13. The following values must be entered precisely as indicated:

  • Kind: Kubernetes configuration (kubeconfig)

  • ID: kenzan_kubeconfig

  • Kubeconfig: From a file on the Jenkins master

  • File: /var/jenkins_home/.kube/config

Finally, click OK.
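For reference, the file at /var/jenkins_home/.kube/config that this credential points to is a standard kubeconfig. A minimal illustration of its shape looks like the following; the server address and token are placeholders, not real values from your cluster.

apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    server: https://kubernetes.default.svc   # placeholder API server address
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: jenkins
  user:
    token: <service-account-token>           # placeholder token
contexts:
- name: minikube
  context:
    cluster: minikube
    user: jenkins
    namespace: default
current-context: minikube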


14. We now want to create a new pipeline for use with our Hello-Kenzan app. Back on Jenkins Home, on the left, click New Item.


Enter the item name as Hello-Kenzan Pipeline, select Pipeline, and click OK.


15. Under the Pipeline section at the bottom, change the Definition to Pipeline script from SCM.

16. Change the SCM to Git, and change the Repository URL to the URL of your forked Git repository, such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd.


Note that for the Script Path, we are using a Jenkinsfile located in the root of our project in our GitHub repo. It defines the build, push, and deploy steps for our hello-kenzan application.

Click Save. On the left, click Build Now to run the new pipeline. You should see it run through the build, push, and deploy steps in a few seconds.


17. After all pipeline stages are colored green as complete, view the Hello-Kenzan application.

minikube service hello-kenzan

You might notice that you’re not seeing the uncommitted change you previously made to index.html in Part 1. That’s because Jenkins wasn’t using your local code. Instead, Jenkins pulled the code from your forked repo on GitHub and used it to build the image, push it to the registry, and deploy it.

Pushing Code Changes Through the Pipeline

Now let’s see some Continuous Integration in action! Try changing index.html in our Hello-Kenzan app, then build again to verify that the Jenkins build process works.

a. Open applications/hello-kenzan/index.html in a text editor.

nano applications/hello-kenzan/index.html

b. Add the following html at the end of the file (or any other html you like). (Tip: You can right-click in nano and choose Paste.)

<p style="font-family:sans-serif">For more from Kenzan, check out our
<a href="http://kenzan.io">website</a>.</p>

c. Press Ctrl+X to close the file, type Y to confirm saving the changes, and press Enter to accept the filename and write the file.

d. Commit the changed file to your Git repo (you may need to enter your GitHub credentials):

git commit -am "Added message to index.html"

git push

18. In the Jenkins UI, click Build Now to run the build again.


19. View the updated Hello-Kenzan application. You should see the message you added to index.html. (If you don’t, hold down Shift and refresh your browser to force it to reload.)

minikube service hello-kenzan


And that’s it! You’ve successfully used your pipeline to automatically pull the latest code from your Git repository, build and push a container image to your cluster, and then deploy it in a pod. And you did it all with one click—that’s the power of a CI/CD pipeline.

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install Node.js and npm.

On Linux, follow the Node.js installation steps for your distribution. To quickly install Node.js and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -

b. sudo apt-get install -y nodejs

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd

b. npm install

3. Start the script:

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to run each command.

Up Next

In Parts 3 and 4, we will deploy our Kr8sswordz Puzzle app through a Jenkins CI/CD pipeline. We will demonstrate its use of caching with etcd, as well as scaling the app up with multiple puzzle service instances so that we can try running a load test. All of this will be shown in the UI of the app itself so that we can visualize these pieces in action.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, moving through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David’s also helped design and deliver training sessions on Microservices for multiple client teams.

Source

Automotive Grade Linux Enables Telematics and Instrument Cluster Applications with Latest UCB 6.0 Release

SAN FRANCISCO, October 15, 2018 – Automotive Grade Linux (AGL), a collaborative cross-industry effort developing an open platform for the connected car, today announced the latest release of the AGL platform, Unified Code Base (UCB) 6.0, which features device profiles for telematics and instrument cluster.

“The addition of the telematics and instrument cluster profiles opens up new deployment possibilities for AGL,” said Dan Cauchy, Executive Director of Automotive Grade Linux at The Linux Foundation. “Motorcycles, fleet services, rental car tracking, basic economy cars with good old-fashioned radios, essentially any vehicle without a head unit or infotainment display can now leverage the AGL Unified Code Base as a starting point for their products.”

Developed through a joint effort by dozens of member companies, the AGL Unified Code Base (UCB) is an open source software platform that can serve as the de facto industry standard for infotainment, telematics and instrument cluster applications. Sharing a single software platform across the industry reduces fragmentation and accelerates time-to-market by encouraging the growth of a global ecosystem of developers and application providers that can build a product once and have it work for multiple automakers.

Many AGL members have already started integrating the UCB into their production plans. Mercedes-Benz Vans is using AGL as a foundation for a new onboard operating system for its commercial vehicles, and Toyota’s AGL-based infotainment system is now in vehicles globally.

The AGL UCB 6.0 includes an operating system, middleware and application framework. Key features include:

  • Device profiles for telematics and instrument cluster
  • Core AGL Service layer can be built stand-alone
  • Reference applications including media player, tuner, navigation, web browser, Bluetooth, WiFi, HVAC control, audio mixer and vehicle controls
  • Integration with simultaneous display on IVI system and instrument cluster
  • Multiple display capability including rear seat entertainment
  • Wide range of hardware board support including Renesas, Qualcomm Technologies, Intel, Texas Instruments, NXP and Raspberry Pi
  • Software Development Kit (SDK) with application templates
  • SmartDeviceLink ready for easy integration and access to smartphone applications
  • Application Services APIs for navigation, voice recognition, bluetooth, audio, tuner and CAN signaling
  • Near Field Communication (NFC) and identity management capabilities including multilingual support
  • Over-The-Air (OTA) upgrade capabilities
  • Security frameworks with role-based-access control

The full list of additions to the UCB 6.0 can be found here.

The global AGL community will gather in Dresden, Germany for the bi-annual All Member Meeting on October 17-18, 2018. At this gathering, members and community leaders will get together to share best practices and future plans for the project. To learn more or register, please visit here.

About Automotive Grade Linux (AGL)

Automotive Grade Linux is a collaborative open source project that is bringing together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car. With Linux at its core, AGL is developing an open platform from the ground up that can serve as the de facto industry standard to enable rapid development of new features and technologies. Although initially focused on In-Vehicle-Infotainment (IVI), AGL is the only organization planning to address all software in the vehicle, including instrument cluster, heads up display, telematics, advanced driver assistance systems (ADAS) and autonomous driving. The AGL platform is available to all, and anyone can participate in its development. Learn more: https://www.automotivelinux.org/

Automotive Grade Linux is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. www.linuxfoundation.org

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Inquiries

Emily Olin

Automotive Grade Linux

eolin@linuxfoundation.org

Source

Install Plex Media Server on CentOS 7

by Marin Todorov | Published: October 15, 2018 |


Source

How to List All Virtual Hosts in Apache Web Server

by Aaron Kili | Published: October 16, 2018 |


Source

Helios4 Arm-Based Open Source NAS SBC For Linux/FreeBSD

Helios4 is an ARM-based open source NAS SBC (single-board computer) for Linux. This NAS (Network Attached Storage) board comes with four SATA 3.0 ports and ECC memory. Let us see some details about the Helios4 ARM-based open source NAS SBC and the ongoing Kickstarter campaign.

What is network-attached storage (NAS)?

NAS is an acronym for Network-attached storage. A NAS server or computer can store and retrieve files from a centralized location on your LAN or intranet. A NAS device typically uses Ethernet-based connections, has no display output, and does not need a keyboard or mouse to operate. You can manage your NAS using an SSH-based tool or a browser-based configuration tool.

NAS allows users to share data using standard protocols such as NFS, CIFS, iSCSI, FTP, SSH and more. You can turn a NAS into a personal cloud. NAS supports MS-Windows, macOS, Linux and Unix clients. Advanced NAS features may include full-disk encryption and virtualization support.

Helios4 Arm-Based Open Source NAS SBC For Linux

Helios4 is an ARM-based device specially designed for Network Attached Storage (NAS). The ARMADA 38x-MicroSoM from SolidRun is the main SoC (system on chip) for Helios4. Kobol claims that the specs for Helios4 are open source and that it is an open hardware project.

Helios4 hardware specification

  1. CPU – Marvell Armada 388 (88F6828) ARM Cortex-A9. ARMv7 32-bit. Dual core, 1.6 GHz.
  2. CPU feature – RAID acceleration engines and security acceleration engines
  3. RAM – 2GB DDR3L ECC
  4. SATA 3.0 Ports – 4
  5. GbE LAN Port – 1
  6. USB 3.0 ports – 2
  7. microSD (SDIO 3.0) – 1
  8. GPIO – 12
  9. I2C – 1
  10. UART – 1 (via onboard Micro-USB converter)
  11. SPI NOR Flash – 32Mbit onboard
  12. PWM FAN – 2
  13. DC input – 12V / 8A

Helios4 SBC

Helios4 software specification

  1. Armbian Linux operating system
  2. Mdadm for RAID support on Linux
  3. OpenMediaVault Linux NAS operating system
  4. FreeBSD head
  5. U-Boot, the universal boot loader for SBCs (needed for both Linux and FreeBSD)

Helios4 pricing


  • Full Kit (2GB RAM ECC) – USD 194.60 + VAT + Shipping
  • Basic Kit (2GB RAM ECC) – USD 176.20 + VAT + Shipping

Conclusion

Overall, Helios4 is a low-cost, sturdy, energy-efficient SoC NAS. It can run both Linux and FreeBSD (head) operating systems. It comes with 2GB of ECC RAM for data protection. It supports software RAID and maxes out disk support at 48TB (4 x 12TB disks). The price is competitive too. I think it is a hacker’s dream system due to its open hardware and open source software. You can order it online here and find more information here.

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source

Kali Linux for Vagrant: Hands-On | Linux.com

What Vagrant actually does is provide a way of automating the building of virtualized development environments using a variety of the most popular providers, such as VirtualBox, VMware, AWS and others. It not only handles the initial setup of the virtual machine, it can also provision the virtual machine based on your specifications, so it provides a consistent environment which can be shared and distributed to others.

The first step, obviously, is to get Vagrant itself installed and working — and as it turns out, doing that requires getting at least one of the virtual machine providers installed and working. In the case of the Kali distribution for Vagrant, this means getting VirtualBox installed.

Fortunately, both VirtualBox and Vagrant are available in the repositories of most of the popular Linux distributions. I typically work on openSUSE Tumbleweed, and I was able to install both of them from the YaST Software Management tool. I have also checked that both are available on Manjaro, Debian Testing and Linux Mint. I didn’t find Vagrant on Fedora, but there are several articles in the Fedora Developer Portal which describe installing and using it.

Read more at ZDNet

Source

Spinnaker: The Kubernetes of Continuous Delivery | Linux.com

Comparing Spinnaker and Kubernetes in this way is somewhat unfair to both projects. The scale, scope, and magnitude of these technologies are different, but parallels can still be drawn.

Just like Kubernetes, Spinnaker is a technology that is battle tested, with Netflix using Spinnaker internally for continuous delivery. Like Kubernetes, Spinnaker is backed by some of the biggest names in the industry, which helps breed confidence among users. Most importantly, though, both projects are open source, designed to build a diverse and inclusive ecosystem around them.

Frankenstein’s Monster

Continuous Delivery (CD) is a solved problem, but it has been a bit of a Frankenstein’s monster, with companies building their own creations by stitching parts together around Jenkins. “We tried to build a lot of custom continuous delivery tooling, but they all fell short of our expectation,” said Brandon Leach, Sr. Manager of Platform Engineering at Lookout.

“We were using Jenkins along with tools like Rundeck, but both had their own set of problems. While Rundeck didn’t have a first-class deployment tool, Jenkins was becoming a nightmare and we ended up moving to Gitlabs,” said Gard Voigt Rimestad of Schibsted, a major Norwegian media group.

Netflix created a more elegant way for continuous delivery called Asgard, open sourced in 2012, which was designed to run Netflix’s own workload on AWS. Many companies were using Asgard, including Schibsted, and it was gaining momentum. But it was tied closely to the kind of workload Netflix was running with AWS. Bigger companies who liked Asgard forked it to run their own workloads. IBM forked it twice to make it work with Docker containers.

IBM’s forking of Asgard was an eye-opening experience for Netflix. At that point, Netflix had started looking into containerized workloads, and IBM showed how it could be done with Asgard.

Google was also planning to fork Asgard to make it work on Google Compute Engine. By that time, Netflix had started working on the successor to Asgard, called Spinnaker. “Before Google could fork the project, we managed to convince Google to collaborate on Spinnaker instead of forking Asgard. Pivotal also joined in,” said Andy Glover, shepherd of Spinnaker and Director of Delivery Engineering at Netflix. The rest is history.

Continuous popularity

There are many factors at play that contribute to the popularity and adoption of Spinnaker. First and foremost, it’s a proven technology that’s been used at Netflix. It instills confidence in users. “Spinnaker is the way Netflix deploys its services. They do things at the scale we don’t do in AWS. That was compelling,” said Leach.

The second factor is the powerful community around Spinnaker that includes heavyweights like Microsoft, Google, and Netflix. “These companies have engineers on their staff that are dedicated to working on Spinnaker,” added Leach.

Governance

In October 2018, the Spinnaker community organized its first official Spinnaker Summit in Seattle. During the Summit, the community announced the governance structure for the project.

“Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker. The broader community is organized around a set of special interest groups (SIGs) that enable users to focus on particular areas of interest.

“There are users who have deployed Spinnaker in their environment, but they are often intimidated by two big players like Google and Netflix. The governance structure will enable everyone to be able to have a voice in the community,” said Kim.

At the moment, the project is being run by Google and Netflix, but eventually, it may be donated to an organization that has a better infrastructure for managing such projects. “It could be the OpenStack Foundation, CNCF, or the Apache Foundation,” said Boris Renski, Co-founder and CMO of Mirantis.

I met with more than a dozen users at the Summit, and they were extremely bullish about Spinnaker. Companies are already using it in a way even Netflix didn’t envision. Since continuous delivery is at the heart of multi-cloud strategy, Spinnaker is slowly but steadily starting to beat at the heart of many companies.

Spinnaker might not become as big as Kubernetes, due to its scope, but it’s certainly becoming as important. Spinnaker has made some bold promises, and I am sure it will continue to deliver on them.

Source
