Challenges and Solutions for Scaling Kubernetes in the Hybrid Cloud

Introduction

Let’s assume you run an online business: you have your own datacenter and a private cloud running your website, with a number of servers deployed to run applications and store their data.

The overall website traffic in this scenario is fairly constant, yet there are times when you expect it to spike. How do you handle that growth in traffic?

The first thing that comes to mind is that you need to be able to scale some of your applications in order to cope with the traffic increase. As you don’t want to spend money on new hardware that you’ll use only a few times per year, you think of moving to a hybrid cloud setup.

This can be a real time and cost saver. Scaling (parts of) your application to the public cloud allows you to pay only for the resources you use, for the time you use them.

But how do you choose that public cloud, and can you choose more than one?

The short answer is yes, you’ll most likely need to choose more than one public cloud provider. Because you have different teams, working on different applications, having different requirements, one cloud provider may not fit all your needs. In addition, many organizations need to follow certain laws, regulations and policies which dictate that their data must physically reside in certain locations. A strategy of using more than one public cloud can help organizations meet those stringent and varied requirements. They can also select from multiple data center regions or availability zones, to be as close to their end users as possible, providing them optimal performance and minimal latency.

Challenges of scaling across multiple cloud providers

You’ve now decided on the cloud(s) to use, so let’s go back to the initial problem. You have an application built on a microservice architecture, running containers that need to be scaled. Here is where Kubernetes comes into play. Essentially, Kubernetes is a solution that helps you manage and orchestrate containerized applications in a cluster of nodes. Although Kubernetes will help you manage and scale deployments, nodes and clusters, it won’t help you easily manage and scale them across cloud providers. More on that later.

A Kubernetes cluster is a set of machines (physical or virtual) used by Kubernetes to run applications. Essential Kubernetes concepts that you need to understand for our purposes are:

  • Pods are units that group one or more containers, scheduled together as one application. Typically you should create one Pod per application, so you can scale and control them separately.
  • Node components are worker machines in Kubernetes. A node may be a virtual machine (VM) or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components.
  • Master components manage the lifecycle of Pods. If a Pod dies, the Controller creates a new one; if you scale the number of Pods up or down, the Controller creates or destroys Pods accordingly. You can find more on the controller types here.

The role of these three components is to scale and schedule containers. The master components issue the scheduling and scaling commands; the nodes then run the Pods accordingly.
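To make this concrete, here is a minimal sketch of a Deployment manifest that asks Kubernetes to run and scale a simple web server; the names and replica count are illustrative and not taken from the clusters created later in this article:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 5              # desired number of Pods; the controller keeps this count
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd       # the web server container each Pod runs

The controllers on the master continuously compare the number of running Pods against spec.replicas and create or destroy Pods to close the gap.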


These are only the basics of Kubernetes; for a more detailed understanding, you can check our Intro to Kubernetes article.

There are a few key challenges that come to mind when trying to use Kubernetes to solve our scaling problem across multiple clouds:

  • Difficulty managing multiple clouds and multiple clusters, and setting users and policies
  • Complexity of installation and configuration
  • Different experiences for users/teams depending on environment

Here’s where Rancher can help you. Rancher is an open source container manager used to run Kubernetes in production. Below are some features that Rancher provides that help us manage and scale our applications regardless of whether the compute resources are hosted on-prem or across multiple clouds:

  • common infrastructure management across multiple clusters and clouds
  • easy-to-use interface for Kubernetes configuration and deployment
  • easy to scale Pods and clusters with a few simple clicks
  • access control and user management (LDAP, AD)
  • workload, RBAC, policy and project management

Rancher becomes your single point of control for multiple clusters, running on multiple clouds, on pretty much any infrastructure that can run Kubernetes.

Let’s see now how we can use Rancher in order to manage more than one cluster, in two different regions.

Starting a Rancher 2.0 instance

To begin, start a Rancher 2.0 instance. There is a very intuitive getting started guide for this purpose here.

Hands-on with Rancher and Kubernetes

Let’s create two hosted Kubernetes clusters in GCP, in two different regions. For this you will need a service account key.

In the Global tab, we can see all the available clusters and their state. Clusters start in the Provisioning state and, when ready, turn to Active.

A number of Pods are already deployed to each node of your Kubernetes cluster. These Pods are used by Kubernetes and Rancher’s internal systems.

Let’s proceed by deploying Workloads to both clusters. For each cluster, select the Default project; this will open the Workloads tab. Click on Deploy and set the Name and the Docker image to httpd for the first cluster and nginx for the second one. Since we want to expose our webservers to internet traffic, in the Port mapping area select a Layer-4 Load Balancer.

If you click on the nginx/httpd workload, you will see that Rancher actually created a Deployment, just as Kubernetes recommends, to manage the ReplicaSet. You will also see the Pod created by that ReplicaSet.
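Under the hood, the Layer-4 Load Balancer option corresponds roughly to a Kubernetes Service of type LoadBalancer sitting in front of the Deployment; a minimal sketch, with illustrative names:

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer   # asks the cloud provider for an external load balancer
  selector:
    app: nginx         # routes traffic to the Pods created by the Deployment
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP

The cloud provider provisions an external load balancer and forwards traffic on port 80 to whichever Pods match the selector.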

Scaling Pods and clusters

Our Rancher instance is managing two clusters:

  • us-east1b-cluster, running 5 httpd Pods
  • europe-west4-a cluster, running 1 nginx Pod

Let’s scale down the httpd Pods by clicking – under the Scale column. In no time we see the number of Pods decrease.

To scale Pods up, click + under the Scale column. Once you do that, you should instantly see Pods being created and ReplicaSet scaling events. Try deleting one of the Pods using the right-hand side menu of the Pod, and notice how the ReplicaSet recreates it to match the desired state.
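If you prefer the command line, these scaling operations map onto standard kubectl commands run against each cluster; a hedged sketch, assuming the Deployments are named httpd and nginx:

kubectl scale deployment httpd --replicas=2   # scale the httpd Deployment down to 2 Pods
kubectl scale deployment nginx --replicas=7   # scale the nginx Deployment up to 7 Pods
kubectl get pods -w                           # watch the ReplicaSets create and remove Pods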

So, we went from 5 httpd Pods to 2 on the first cluster, and from 1 nginx Pod to 7 on the second one. The second cluster now looks like it’s almost running out of resources.

From Rancher we can also scale the cluster itself by adding extra nodes. Let’s try that by editing the node count to 5.

While Rancher shows us “reconciling cluster,” Kubernetes behind the scenes is actually upgrading the cluster master and resizing the node pool.
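For a GKE-hosted cluster like ours, this is roughly what you would do yourself with the gcloud CLI; a sketch with illustrative cluster, node pool and zone names:

gcloud container clusters resize europe-west4-a-cluster \
  --node-pool default-pool \
  --num-nodes 5 \
  --zone europe-west4-a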

Give this action some time and eventually you should see 5 nodes up and running.

Let’s check the Global tab, so we can have an overview of all the clusters Rancher is managing.

Now we can add more Pods if we want, as there are new resources available; let’s scale up to 13.

Most importantly, all of these operations were performed with no downtime. While scaling Pods up or down, or resizing the cluster, hitting the public IP of the httpd/nginx Deployments always returned an HTTP 200 response status code.
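One simple way to verify this yourself is to poll the load balancer’s public IP while scaling; a minimal sketch, where <EXTERNAL_IP> is a placeholder for your Deployment’s public address:

while true; do
  # print only the HTTP status code returned by the load balancer
  curl -s -o /dev/null -w "%{http_code}\n" http://<EXTERNAL_IP>/
  sleep 1
done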

Conclusion

Let’s recap our hands-on scaling exercise:

  • we created two clusters using Rancher
  • we deployed workloads: a Deployment of 1 Pod (nginx) and a Deployment of 5 Pods (httpd)
  • scaled those two Deployments in and out
  • resized the cluster

All of these actions were done with a few simple clicks from Rancher, making use of its friendly and intuitive UI. Of course, you can do this entirely from the API as well. In either case, you have a single central point from which you can manage all your Kubernetes clusters, observe their state, or scale Deployments if needed. If you are looking for a tool to help you with infrastructure management and container orchestration across hybrid/multi-cloud, multi-region clusters, then Rancher might be the perfect fit for you.

Source

Deploying JFrog Artifactory with Rancher, Part One

JFrog Artifactory is a universal artifact repository that supports all major packaging formats, build tools and continuous integration (CI) servers. It holds all of your binary content in a single location and presents an interface that makes it easy to upload, find, and use binaries throughout the application development and delivery process.

In this article we’ll walk through using Rancher to deploy and manage JFrog Artifactory on a Kubernetes cluster. When you have finished reading this article, you will have a fully functional installation of JFrog Artifactory OSS, and you can use the same steps to install the OSS or commercial version of Artifactory in any other Kubernetes cluster. We’ll also show you how to create a generic repository in Artifactory and upload artifacts into it.

Artifactory has many more features besides the ones presented in this article, and a future article will explore those in greater detail.

Let’s get started!

Software Used

This article uses the following software:

  • Rancher v2.0.8
  • Kubernetes cluster running on Google Kubernetes Engine version 1.10.7-gke.2
  • Artifactory helm chart version 7.4.2
  • Artifactory OSS version 6.3.2

If you’re working through the article at a future date, please use the versions current for that time.

As with all things Kubernetes, there are multiple ways to install Artifactory. We’re going to use the Helm chart. Helm provides a way to package application installation instructions and share them with others. You can think of it as a package manager for Kubernetes. Rancher integrates with Helm via the Rancher Catalog, and through the Catalog you can deploy any Helm-backed application with only a few clicks. Rancher has other features, including:

  • an easy and intuitive web interface
  • the ability to manage Kubernetes clusters deployed anywhere, on-premise or with any provider
  • a single view into all managed clusters
  • out of the box monitoring of the clusters
  • workload, role-based access control (RBAC), policy and project management
  • all the power of Kubernetes without the need to install any software locally

Installing Rancher

NOTE: If you already have a Rancher v2 server and Kubernetes cluster installed, skip ahead to the section titled Installing JFrog Artifactory.

We’re proud of Rancher’s ability to manage Kubernetes clusters anywhere, so we’re going to launch a Rancher Server in standalone mode on a GCE instance and use it to deploy a Kubernetes cluster in GKE.

Spinning up a Rancher Server in standalone mode is easy – it’s a Docker container. Before we can launch the container, we’ll need a compute instance on which to run it. Let’s launch that with the following command:

gcloud compute --project=rancher-20 instances create rancher-instance \
  --zone=europe-west2-c \
  --machine-type=g1-small \
  --tags=http-server,https-server \
  --image=ubuntu-1804-bionic-v20180911 \
  --image-project=ubuntu-os-cloud

Please change the project and zone parameters as appropriate for your deployment.

After a couple of minutes you should see that your instance is ready to go.

Created [https://www.googleapis.com/compute/v1/projects/rancher-20/zones/europe-west2-c/instances/rancher-instance].
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
rancher-instance europe-west2-c g1-small 10.154.0.2 35.242.185.165 RUNNING

Make a note of the EXTERNAL_IP address, as you will need it in a moment to connect to the Rancher Server.

With the compute node up and running, let’s use the GCE CLI to SSH into it.

gcloud compute ssh \
  --project "rancher-20" \
  --zone "europe-west2-c" \
  "rancher-instance"

Again, be sure that you adjust the project and zone parameters to reflect your instance if you launched it in a different zone or with a different name.

Once connected, run the following commands to install some prerequisites and then install Docker CE. Because the Rancher Server is a Docker container, we need Docker installed in order to continue with the installation.

sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get -y install docker-ce

With that out of the way, we’re ready to deploy the Rancher Server. When we launch the container for the first time, the Docker Engine will fetch the container image from Docker Hub and store it locally before launching a container from it. Future launches of the container, should we need to relaunch it, will use the local image store and be much faster.

Use the next command to instruct Docker to launch the Rancher Server container and have it listen on ports 80 and 443 on the host.

sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:v2.0.8

If nothing goes awry, Docker will print the download status and then the new container ID before returning you to a prompt.

Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
124c757242f8: Pull complete
2ebc019eb4e2: Pull complete
dac0825f7ffb: Pull complete
82b0bb65d1bf: Pull complete
ef3b655c7f88: Pull complete
437f23e29d12: Pull complete
52931d58c1ce: Pull complete
b930be4ed025: Pull complete
4a2d2c2e821e: Pull complete
9137650edb29: Pull complete
f1660f8f83bf: Pull complete
a645405725ff: Pull complete
Digest: sha256:6d53d3414abfbae44fe43bad37e9da738f3a02e6c00a0cd0c17f7d9f2aee373a
Status: Downloaded newer image for rancher/rancher:latest
454aa51a6f0ed21cbe47dcbb20a1c6a5684c9ddb2a0682076237aef5e0fdb3a4

Congratulations! You’ve successfully launched a Rancher Server instance.

Use the EXTERNAL_IP address that you saved above and connect to that address in a browser. You’ll be asked to accept the self-signed certificate that Rancher installs by default. After this, you’ll be presented with the welcome screen. Set a password (and remember it!), and continue to the next page.

Welcome to Rancher

On this page you’re asked to set the URL for the Rancher Server. In a production deployment this would be a hostname like rancher.yourcompany.com, but if you’re following along with a demo server, you can use the EXTERNAL_IP address from above.

Rancher Server URL

When you click Save URL on this page, you’ll be taken to the Clusters page, and from there we’ll deploy our Kubernetes cluster.

Using Rancher to Deploy a GKE Cluster

Rancher can deploy and manage Kubernetes clusters anywhere. They can be in Google, Amazon, Azure, on cloud nodes, in datacenters, or even running in a VM on your laptop. It’s one of the most powerful features of the product. For today we’ll be using GKE, so after clicking on Add Cluster, choose Google Container Engine as your provider.

Set the name to something appropriate for this demo, like jfrog-artifactory.

In order to create the cluster, Rancher needs permission to access the Google Cloud Platform. Those permissions are granted via a Service Account private key JSON file. To generate that, first find the service account name (replace the project name with yours if necessary):

gcloud iam service-accounts list --project rancher-20

NAME EMAIL
Compute Engine default service account <SA>-compute@developer.gserviceaccount.com

The output will have a service account number in place of <SA>. Copy this entire address and use it in the following command:

gcloud iam service-accounts keys create ./key.json \
  --iam-account <SA>-compute@developer.gserviceaccount.com

This will create a file named key.json in the current directory. This is the Service Account private key that Rancher needs to create the cluster:

Add Cluster Rancher

You can either paste the contents of that file into the text box, or you can click Read from a file and point it to the key.json file. Rancher will use this info to generate a page wherein you can configure your new cluster:

Add Cluster Rancher second step

Set your preferred Zone, Machine Type, Node Count and Root Disk Size. The values presented in the above screenshot are sane defaults that you can use for this demo.

When you click Create, the cluster will be provisioned in GKE, and when it’s ready, you’ll see it become active in the UI:

Rancher cluster view

Installing JFrog Artifactory

We’ll install Artifactory by using the Helm chart repository from JFrog. Helm charts, like OS package management systems, give you a stable way to deploy container applications into Kubernetes, upgrade them, or roll them back. The chart guarantees that you’re installing a specific version or tag for the container, and where applications have multiple components, a Helm chart assures that you’re getting the right version for all of them.

Installing the JFrog Helm Repository

Rancher ships with a library of Helm charts in its Application Catalog, but in keeping with the Rancher objective of user flexibility, you can install any third-party Helm repository to have those applications available for deployment in your cluster. We’ll use this today by installing the JFrog repository.

In the Global Cluster view of Rancher click on Catalogs and then click on Add Catalog. In the window that opens, enter a name that makes sense, like jfrog-artifactory and then enter the location of the official JFrog repository.
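For reference, adding the same repository with the Helm CLI directly would look roughly like the following; the URL is the commonly used JFrog charts location and is an assumption here, since the article configures the catalog through the Rancher UI:

helm repo add jfrog https://charts.jfrog.io   # assumed JFrog Helm chart repository URL
helm repo update                              # refresh the local chart index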

Rancher add catalog

Click on Create, and the JFrog repository will appear in the list of custom catalogs.

Rancher list of Catalogs

Deploying Artifactory

We’re ready to deploy Artifactory. From the Global view, select the Default project under the jfrog-artifactory cluster:

Rancher default project

Once you are inside of the Default project, select Catalog Apps, and then click on Launch. Rancher will show you the apps available for installation from the Application Catalogs. You’ll notice that artifactory-ha shows up twice, once as a partner-provided chart within the default Library of apps that ship with Rancher, and again from the JFrog repository itself. We installed the Helm repository because we want to install the regular, non-HA Artifactory, which is just called artifactory. All catalog apps indicate which library they come from, so in a situation where a chart is present in multiple libraries, you can still choose which to install.

Rancher select app from Catalog

When you select View Details, you have the opportunity to change items about how the application is installed. By default this catalog item will deploy the licensed, commercial version of Artifactory, for which you need a license. If you have a license, then you can leave the default options as they are; however, because we want to install the OSS version, we’re going to change the image that the chart installs.

We do this under the Configuration Options pane, by selecting Add Answer. Set a variable name of artifactory.image.repository and a value of docker.bintray.io/jfrog/artifactory-oss.
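This answer is roughly equivalent to overriding the image value when installing the chart from the Helm CLI; a hedged sketch using Helm 2 syntax, assuming the repository was added under the name jfrog:

helm install jfrog/artifactory \
  --name artifactory \
  --set artifactory.image.repository=docker.bintray.io/jfrog/artifactory-oss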

Catalog app set Answer

Now, when you click Launch, Rancher will deploy Artifactory into your cluster.

Rancher Deploying Artifactory

When the install completes, the red line will change to green. After this happens, if you click on artifactory, it will present you with the resources that Rancher created for you. In this case, it created three workloads, three services, one volume and one secret in Kubernetes.

If you select Workloads, you will see all of them running:

Rancher Artifactory workloads

Resolving a Pending Ingress

At the time of this article’s publication, there is a bug that results in the Ingress being stuck in a Pending state. If you see this when you click on Load Balancing, continue reading for the solution.

Rancher Pending LoadBalancer

To resolve the pending Ingress, we need to create the Service to which the Ingress is sending traffic. Click Import YAML in the top right, and in the window that opens, paste the following information and then click Import.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: artifactory
    chart: artifactory-7.4.2
    component: nginx
    heritage: Tiller
    io.cattle.field/appId: artifactory
    release: artifactory
  name: artifactory-artifactory-nginx
  namespace: artifactory
spec:
  externalTrafficPolicy: Local
  ports:
  - name: nginxhttp
    port: 80
    protocol: TCP
    targetPort: 80
  - name: artifactoryhttps
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: artifactory
    component: nginx
    release: artifactory
  sessionAffinity: None
  type: LoadBalancer

Rancher import YAML

Accessing Artifactory

The Workloads pane will now show clickable links for ports 443/tcp and 80/tcp under the artifactory-artifactory-nginx workload:

Workloads clickable ports

When you select 443/tcp, it will open the Artifactory UI in a new browser tab. Because it’s using a self-signed certificate by default, your browser may give you a warning and ask you to accept the certificate before proceeding.

Welcome to JFrog Artifactory

Taking Artifactory for a Spin

You now have a fully-functional binary artifact repository available for use. That was easy! Before you can start using it, it needs a tiny bit of configuration.

First, set an admin password in the wizard. When it asks you about the proxy server, select Skip unless you’ve deployed this in a place that needs proxy configuration. Create a generic repository, and select Finish.

Now, let’s do a quick walkthrough of some basic usage.

First, we’ll upload the helm chart that you used to create the Artifactory installation.

Select Artifacts from the left-side menu. You will see the generic repository that you created above. Choose it, and then from the upper right corner, select Deploy. Upload the Helm chart zipfile (or any other file) to the repository.

Deploy file to Artifactory

After the deploy finishes, you will see it in the tree under the repository.

Artifactory Repository Browser

Although this is a simple test of Artifactory, it demonstrates that it can already be used to its full capacity.

You’re all set to use Artifactory for binary artifact storage and distribution and Rancher for easy management of the workloads, the cluster, and everything related to the deployment itself.

Cleanup

If you’ve gone through this article as a demo, you can delete the Kubernetes cluster from the Global Cluster view within Rancher. This will remove it from GKE. After doing so, you can delete the Rancher Server instance directly from GCE.

Closing

JFrog Artifactory is extremely powerful. More organizations use it every day, and being able to deploy it quickly and securely into a Kubernetes cluster is useful knowledge.

According to their own literature, Artifactory empowers you to “release fast or die.” Similarly, Rancher allows you to deploy fast while keeping control of the resources and the security around them. You can build, deploy, tear down, secure, monitor, and interact with Kubernetes clusters anywhere in the world, all from a single, convenient, secure interface.

It doesn’t get much easier than that.

Source

Raw Block Volume support to Beta

Kubernetes v1.13 moves raw block volume support to beta. This feature allows persistent volumes to be exposed inside containers as a block device instead of as a mounted file system.

What are block devices?

Block devices enable random access to data in fixed-size blocks. Hard drives, SSDs, and CD-ROM drives are all examples of block devices.

Typically, persistent storage is implemented in a layered manner, with a file system (like ext4) on top of a block device (like a spinning disk or SSD). Applications then read and write files instead of operating on blocks. The operating system takes care of reading and writing files, using the specified filesystem, to the underlying device as blocks.

It’s worth noting that while whole disks are block devices, so are disk partitions, and so are LUNs from a storage area network (SAN) device.

Why add raw block volumes to Kubernetes?

There are some specialized applications that require direct access to a block device because, for example, the file system layer introduces unneeded overhead. The most common case is databases, which prefer to organize their data directly on the underlying storage. Raw block devices are also commonly used by any software which itself implements some kind of storage service (software defined storage systems).

From a programmer’s perspective, a block device is a very large array of bytes, usually with some minimum granularity for reads and writes, often 512 bytes, but frequently 4K or larger.

As it becomes more common to run database software and storage infrastructure software inside of Kubernetes, the need for raw block device support in Kubernetes becomes more important.

Which volume plugins support raw blocks?

As of the publishing of this blog, the following in-tree volume types support raw blocks:

  • AWS EBS
  • Azure Disk
  • Cinder
  • Fibre Channel
  • GCE PD
  • iSCSI
  • Local volumes
  • RBD (Ceph)
  • vSphere

Out-of-tree CSI volume drivers may also support raw block volumes. Kubernetes CSI support for raw block volumes is currently alpha. See documentation here.

Kubernetes raw block volume API

Raw block volumes share a lot in common with ordinary volumes. Both are requested by creating PersistentVolumeClaim objects which bind to PersistentVolume objects, and are attached to Pods in Kubernetes by including them in the volumes array of the PodSpec.

There are two important differences, however. First, to request a raw block PersistentVolumeClaim, you must set volumeMode = "Block" in the PersistentVolumeClaimSpec. Leaving volumeMode blank is the same as specifying volumeMode = "Filesystem", which results in the traditional behavior. PersistentVolumes also have a volumeMode field in their PersistentVolumeSpec; "Block" type PVCs can only bind to "Block" type PVs, and "Filesystem" PVCs can only bind to "Filesystem" PVs.
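For statically provisioned storage, the PersistentVolume side carries the same volumeMode field; a minimal sketch, using a Fibre Channel volume purely for illustration (the name, WWN and sizes are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-block-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                  # expose this volume as a raw block device
  persistentVolumeReclaimPolicy: Retain
  fc:                                # example in-tree plugin that supports raw blocks
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false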

Secondly, when using a raw block volume in your Pods, you must specify a VolumeDevice in the Container portion of the PodSpec rather than a VolumeMount. VolumeDevices have devicePaths instead of mountPaths, and inside the container, applications will see a device at that path instead of a mounted file system.

Applications open, read, and write to the device node inside the container just like they would interact with any block device on a system in a non-containerized or virtualized context.

Creating a new raw block PVC

First, ensure that the provisioner associated with the storage class you choose supports raw blocks. Then create the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Block
  storageClassName: my-sc
  resources:
    requests:
      storage: 1Gi

Using a raw block PVC

When you use the PVC in a pod definition, you get to choose the device path for the block device rather than the mount path for the file system.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: busybox
    command:
    - sleep
    - "3600"
    volumeDevices:
    - devicePath: /dev/block
      name: my-volume
    imagePullPolicy: IfNotPresent
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc

As a storage vendor, how do I add support for raw block devices to my CSI plugin?

Raw block support for CSI plugins is still alpha, but support can be added today. The CSI specification details how to handle requests for volumes that have the BlockVolume capability instead of the MountVolume capability. CSI plugins can support both kinds of volumes, or one or the other. For more details see documentation here.

Issues/gotchas

Because block devices are actually devices, it’s possible to do low-level actions on them from inside containers that wouldn’t be possible with file system volumes. For example, block devices that are actually SCSI disks support sending SCSI commands to the device using Linux ioctls.

By default, though, Linux won’t allow containers to send SCSI commands to disks. To do so, you must grant the SYS_RAWIO capability in the container’s security context. See documentation here.
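A minimal sketch of what granting that capability looks like in a Pod spec, reusing the claim from the earlier example (the Pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-rawio-pod
spec:
  containers:
  - name: my-container
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["SYS_RAWIO"]      # allow low-level ioctls such as SCSI commands
    volumeDevices:
    - devicePath: /dev/block
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc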

Also, while Kubernetes is guaranteed to deliver a block device to the container, there’s no guarantee that it’s actually a SCSI disk or any other kind of disk for that matter. The user must either ensure that the desired disk type is used with their pods, or only deploy applications that can handle a variety of block device types.

How can I learn more?

Check out additional documentation on raw block volume support here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support

How do I get involved?

Join the Kubernetes storage SIG and the CSI community and help us add more great features and improve existing ones like raw block storage!

https://github.com/kubernetes/community/tree/master/sig-storage
https://github.com/container-storage-interface/community/blob/master/README.md

Special thanks to all the contributors who helped add block volume support to Kubernetes.

Source

Using JFrog Artifactory as Docker Image Repository

This article is a continuation of Deploying JFrog Artifactory with Rancher. In this chapter we’ll demonstrate how to use JFrog Artifactory as a private repository for your own Docker images.

NOTE: This feature of JFrog Artifactory requires a license, but you can get a 30-day trial and use it to follow along.

Prepare GCP for the Deployment

If you plan to use Artifactory as a repository for Docker outside of your local network, you’ll need a public IP address. In the first part of this article we deployed our cluster into Google Cloud, and we’ll continue to use GCP resources now.

You can reserve a public IP by running the following command in the Google Cloud Shell or in your local environment via the gcloud command:

gcloud compute addresses create artifactory-demo --global

Use the name you chose (artifactory-demo in our case) to retrieve the address:

gcloud compute addresses describe artifactory-demo --global

Look for the address field in the output; in our case it is 35.190.61.62.

We’ll use the magical xip.io service from Basecamp to assign a fully-qualified domain name to our service, which in our case will be 35.190.61.62.xip.io.

Deploy Artifactory

You can follow the steps in the previous chapter to deploy Rancher and Artifactory, but when you reach the part about configuring the variables in the app deployment page, add or change the following variables:

ingress.enabled=true
ingress.hosts[0]=35.190.61.62.xip.io
artifactory.service.type=NodePort
nginx.enabled=false
ingress.annotations."kubernetes.io/ingress.global-static-ip-name"=artifactory-demo

(You can copy/paste that block of text into a field, and Rancher will convert it for you.)

When all is done, it should look like the image below:

Artifactory Docker Registry Configs

Click Launch to begin deploying the resources.

An Explanation of the Variables

While your new Artifactory instance spins up, let’s look at what we just configured.

ingress.enabled=true

This enables the creation of an ingress resource, which will serve as a proxy for Artifactory. In our case the Ingress will be a load balancer within GCP.

ingress.hosts[0]=35.190.61.62.xip.io

This sets the hostname for Artifactory. Part of the magic of xip.io is that we can create any subdomain and have it resolve back to the IP, so when we use docker-demo.35.190.61.62.xip.io later in this walkthrough, it will resolve to 35.190.61.62.

artifactory.service.type=NodePort

This exposes Artifactory’s service via a random port on the Kubernetes node. The Ingress resource will send traffic to this port.

nginx.enabled=false

Because we’re using the Ingress resource to talk to Artifactory via the Service resource, we want to disable the nginx proxy that Artifactory would otherwise start.

ingress.annotations…

This is the glue that ties Kubernetes to the static public IP address. We set it to the name of the address that you reserved so that the Ingress finds and uses the correct IP. We had to quote a large part of the key because the dots are part of the annotation name; without escaping them, the dots would be interpreted as separate nested keys and Kubernetes would misunderstand what we were asking it to do.
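Once the chart is rendered, the answer should surface on the Ingress resource as an annotation roughly like the sketch below; the Ingress name, backend service name and port are assumptions based on the chart’s defaults, while the hostname and address name are the ones used in this walkthrough:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artifactory-artifactory
  annotations:
    kubernetes.io/ingress.global-static-ip-name: artifactory-demo   # GCP reserved address name
spec:
  rules:
  - host: 35.190.61.62.xip.io
    http:
      paths:
      - backend:
          serviceName: artifactory-artifactory   # assumed chart service name
          servicePort: 8081                      # assumed Artifactory service port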

Review the Deployment

Once the deployment completes, look at the Workloads tab. There you will see two workloads. One is the application (artifactory-artifactory), and the other is the PostgreSQL database that artifactory uses (artifactory-postgresql).

Artifactory Workloads

Look at the Load Balancing tab next. There you will see the Ingress object with the hostname that we provided.

Load Balancing Ingress

If you select View/Edit YAML and scroll to the bottom, you will see the annotation that points to the address name in GCP (line 10 in the image):

Load Balancer View YAML

At the bottom of the Ingress definition you will also see that the hostname in spec.rules.host corresponds to the IP address reported in status.loadBalancer.ingress.ip.

Configure Artifactory

When you close the View/Edit YAML window, you’ll return to the Load Balancing tab. There you’ll find a link with the xip.io address. Click it to open Artifactory, or just enter the hostname into your browser.

Click through the wizard, first adding your license key and then setting an admin password. Click through the rest until the wizard completes.

In the menu on the left side, select Admin, and then under Repositories select Local.

Artifactory Admin Local Repo

There you will see the default repository created by the setup wizard. Select + New from the upper right corner to create a new repository. Choose Docker as the package type and enter a name for the repository. In our case we chose docker-demo. Leave everything else at the defaults and select Save & Finish to create the new repository.

Artifactory create docker registry

The name that you chose (docker-demo for us) becomes the subdomain for your xip.io domain. For our installation, we’ll be using docker-demo.35.190.61.62.xip.io. Yours will of course be different, but it will follow the same format.

Test the Registry

What fun is it to have a private Docker repository if you don’t use it?

For a production deployment you would secure the registry with an SSL certificate, and that would require a real hostname on a real domain. For this walkthrough, though, you can use the newly-created registry by telling Docker that it’s an insecure registry.

Create or edit daemon.json according to the documentation, adding your host like we do in the following example:

{
  "insecure-registries": ["docker-demo.35.190.61.62.xip.io:80"]
}

If you use Docker for Mac or Docker for Windows, set this in the preferences for the application:

Docker for Mac Insecure Registry

Restart Docker for it to pick up the changes, and after it restarts, you can use the registry:

docker login docker-demo.35.190.61.62.xip.io
Username: admin
Password:
Login Succeeded

To continue with the test, we’ll pull a public container image, re-tag it, and then push it to the new private registry.

Pull a Public Container Image

$ docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
f17d81b4b692: Pull complete
d5c237920c39: Pull complete
a381f92f36de: Pull complete
Digest: sha256:b73f527d86e3461fd652f62cf47e7b375196063bbbd503e853af5be16597cb2e
Status: Downloaded newer image for nginx:latest

Re-tag the Image

You can see the current image id and information on your system:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest dbfc48660aeb 2 days ago 109MB

Re-tag it so that Docker knows to push it to your new private registry:

$ docker tag nginx docker-demo.35.190.61.62.xip.io:80/nginx:latest

Push the Image to the Private Registry

With the image re-tagged, use docker push to send it to your private registry:

$ docker push docker-demo.35.190.61.62.xip.io:80/nginx:latest
The push refers to repository [docker-demo.35.190.61.62.xip.io:80/nginx]
86df2a1b653b: Pushed
bc5b41ec0cfa: Pushed
237472299760: Pushed
latest: digest: sha256:d98b66402922eccdbee49ef093edb2d2c5001637bd291ae0a8cd21bb4c36bebe size: 948

Verify the Push in Artifactory

Back in the Artifactory UI, select Artifacts from the menu.

Artifactory Artifacts

There you’ll see your nginx image and information about it.

Artifactory Pushed Image

Next Steps

If you have an Artifactory License and want to run a private registry, repeat this walkthrough using your own domain and an SSL certificate on the ingress. With those additional items complete, you’ll be able to use the private registry with any Docker or Kubernetes installation without having to tell the host that it has permission to talk to an insecure registry.

Cleanup

To clean up the resources that we used in this article, delete the Kubernetes cluster from Rancher and then delete the Rancher server from GCP:

gcloud compute --project=rancher-20 instances delete \
  rancher-instance --zone=europe-west2-c

You’ll also need to delete the public IP address reservation:

gcloud compute addresses delete artifactory-demo --global

Closing

JFrog Artifactory provides services that are at the core of a development lifecycle. You can store and retrieve almost any type of artifact that your development teams produce, and having these artifacts stored in a central, managed location makes Artifactory an important part of any IT infrastructure.

Rancher makes it easy to deploy Artifactory into a Kubernetes installation. In only a few minutes we had Artifactory up and running, and it actually took longer to configure Artifactory itself than it did to install it!

Rancher makes Kubernetes easy. Artifactory makes managing binary resources easy. Together they free you to focus on the things that matter for your business, and that freedom is what matters most.

Source

What is CI/CD?

The demands of modern software development combined with complexities of deploying to varied infrastructure can make creating applications a tedious process. As applications grow in size and scope, and development teams become more distributed and diverse, the overall process required to produce and release software quickly and consistently becomes more difficult.

To address these issues, teams began exploring new strategies to automate their build, test, and release processes to help deploy new changes to production faster. This led to the development of continuous integration and continuous delivery.

In this guide we will explain what CI/CD is and how it helps teams produce well-tested, reliable software at a faster pace. Before exploring CI/CD and its benefits in depth, we should discuss some prerequisite technologies and practices that these systems build off of.

Automated Build Processes

In software development, the build process converts code that developers produce into usable pieces of software that can be executed. For compiled languages like Go or C, this stage involves running the source code through a compiler to produce a standalone binary file. For interpreted languages like Python or PHP, there is no compilation step, but the code may still need to be frozen at a specific point in time, bundled with dependencies, and packaged for easier distribution. These processes result in an artifact that is often called a “build” or “release”.
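As a tiny illustration of the compiled case, producing a release artifact for a Go program might look like the following; the module layout, file names and version are hypothetical:

# compile the program into a standalone binary named myapp
go build -o myapp ./cmd/myapp

# package the binary into a distributable release artifact
tar -czf myapp-1.0.0.tar.gz myapp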

While developers can create builds manually, this has a number of disadvantages. The shift from active development to creating a build introduces a context switch, forcing individuals to halt more productive work and focus on the build process. Furthermore, because each developer produces artifacts on their own, inconsistencies are likely to arise.

To address these concerns, many teams configure automated build pipelines. These systems monitor source code repositories and automatically kick off a preconfigured build process when changes are detected. This limits the amount of human involvement and ensures that a consistent process is followed on each build.

There are many build tools designed to help you automate these steps. For example, within the Java ecosystem, the following tools are popular:

  • Ant: Apache’s Ant is an open source Java library. Created in 2000, Ant is the original build tool in the Java space and is still frequently used today.
  • Maven: Apache’s Maven is a build automation tool written primarily with Java projects in mind. Unlike Apache Ant, Maven follows the philosophy of convention over configuration, requiring configuration only for the aspects of the build process that deviate from reasonable defaults.
  • Gradle: Reaching version 1.0 in 2012, Gradle tries to incorporate the strengths of both Ant and Maven by incorporating Maven’s modern features without losing the flexibility provided by Ant. Build instructions are written in a dynamic language called Groovy. Despite being a newer tool in this space, it’s seen widespread adoption.

Version Control

Most modern software development requires frequent collaboration within a shared codebase. Version control systems (VCS) are employed to help maintain project history, allow work on discrete features in parallel, and resolve conflicting changes. The VCS allows projects to easily adopt changes and to roll back in case of problems. Developers can work on projects on their local machines and use the VCS to manage the different branches of development.

Every change recorded in a VCS is called a commit. Each commit catalogs the changes to the codebase and includes metadata like a description that can be helpful when reviewing the commit history or merging updates.

Fig. 1: Diagram of distributed version control

While version control is a valuable tool to help manage many different changes within a single codebase, distributed development often introduces challenges. Developing in independent branches of the codebase without regularly merging into a shared integration branch can make it difficult to incorporate changes later on. To avoid this, developers started adopting a practice called continuous integration.

Continuous Integration (CI)

Continuous Integration (CI) is a process that allows developers to integrate work into a shared branch often, enhancing collaborative development. Frequent integration helps dissolve silos, reducing the size of each commit to lower the chance of merge conflicts.

A robust ecosystem of tools has been developed to encourage CI practices. These systems integrate with VCS repositories to automatically run build scripts and test suites when new changes are detected. Integration tests ensure that different components function together as a group, allowing teams to catch compatibility bugs early. Continuous integration produces builds that are thoroughly tested and reliable.

Fig. 2: Diagram of a continuous integration process

Continuous Delivery and Continuous Deployment (CD)

Continuous delivery and continuous deployment are two strategies that build off of the foundation that continuous integration provides. Continuous delivery extends the continuous integration process by deploying builds that pass the integration test suite to a pre-production environment. This makes it straightforward to evaluate each build in a production-like environment so that developers can easily validate bug fixes or test new features without additional work. Once deployed to the staging area, additional manual and automated testing is possible.

Continuous deployment takes this approach one step further. Once a build passes automated tests in a staging environment, a continuous deployment system can automatically deploy the build to production servers. In other words, every “green build” is live and available to customers for early feedback. This enables teams to release new features and bug fixes instantly, backed by the guarantees provided by their testing processes.

Fig. 3: Diagram of a typical CI/CD development flow

Advantages of CI and CD

Continuous integration, delivery, and deployment provide some clear improvements to the software development process. Some of the primary benefits are outlined below.

Fast Feedback Loop

A fast feedback loop is essential to implementing a rapid development cycle. To receive timely feedback, it is essential that software reaches the end user quickly. When properly implemented, CI/CD provides a platform to achieve this goal by making it simple to update production deployments. By requiring each change to go through rigorous testing, CI helps reduce the risks associated with each build and consequently allows teams to release valuable features to customers quickly and easily.

Increased Visibility

CI/CD is usually implemented as a pipeline of sequential steps, visible to the entire team. As a result, each team member can track the state of builds in the system and identify the build responsible for any test failures. By providing insight into the current state of the codebase, it becomes easier to plan the best course of action. This level of transparency offers a clear answer to the question, “did my commit break the build?”

Simplified Troubleshooting

Since the goal of CI is to integrate and test every change made to the codebase, it is safer to make small commits and merge them into the shared code repository early. As a result, when a bug is found, it is easier to identify the change that introduced the problem. Afterwards, depending on the magnitude of the issue, the team can choose to either roll back the change or write and commit a fix, decreasing the overall time to resolution in production.

Higher Quality Software

Automating the build and deployment processes not only shortens the development cycle. It also helps teams produce higher quality software. By ensuring that each change is well-tested and deployed to at least one pre-production environment, teams can push changes to production with confidence. This is possible only when there is good test coverage of all levels of the codebase, from unit tests to more complex system tests.

Fewer Integration Issues

Because the automated test suite runs on the builds automatically produced with every commit, it is possible to catch and fix most integration issues early. This gives developers early insight into other work currently being done that might affect their code. It tests that code written by different contributors works together from the earliest possible moment instead of later when there may be additional side effects.

More Time Focused on Development

CI/CD systems rely on automation to produce builds and move new changes through the pipeline. Because manual intervention is not required, building and testing no longer require dedicated time from the development team. Instead, developers can concentrate on making productive changes to the codebase, confident that the automated systems will notify them of any problems.

Continuous Integration and Delivery Best Practices

Now that we’ve seen some of the benefits of using CI/CD, we can discuss some guidelines to help you implement these processes successfully.

Take Responsibility for the CI/CD Pipeline

Developers are responsible for the commits they make until the changes are deployed to pre-production. This means that the developer must ensure that their code is integrated properly and can be deployed at all times. If a change is committed that breaks these requirements, it is that developer’s duty to commit a fix rapidly to avoid impacting other people’s work. Build failures should halt the pipeline and block commits not involved in fixing the failure, making it essential to address the build problems quickly.

Ensure Consistent Deployments

The deployment process should not be manual. Instead, a pipeline should automate the deployment process to ensure consistency and repeatability. This reduces the chances of pushing broken builds to production and helps avoid one-off, untested configurations that are difficult to reproduce.

Commit the Codebase to Version Control

It is important that every change is committed to version control. This helps the team audit all proposed changes and lets the team revert problematic commits easily. It can also help preserve the integrity of configuration, scripts, databases, and documentation. Without version control, it is easy to lose or mishandle configuration and code changes, especially when multiple people are contributing to the same codebase.

Make Small, Incremental Changes

A crucial point to keep in mind is that the changes should be small. Waiting to introduce changes in larger batches delays feedback from testing and makes it more difficult to identify the root cause of problems.

Good Test Coverage

Since the intent of CI/CD is to reduce manual testing, there should be good automated test coverage throughout the codebase to ensure that the software is functioning as intended. Additionally, it is important to regularly clean up redundant or out-of-date tests to avoid affecting the pipeline.

The ratio of different types of tests in the test suite should reflect the “testing pyramid” model. The majority of the tests should be unit tests since they ensure basic functionality and are quick to execute. A smaller number of integration tests should follow to guarantee that components can operate together successfully. Finally, a small number of regression, UI, system, and end-to-end tests should be included towards the end of the testing cycle to ensure that the build meets all of the behavioral requirements of the project. Tools like JaCoCo for Java projects can determine how much of the codebase is covered by the testing suite.

Fig. 4: Diagram of test pyramid

What’s Next?

There are many different continuous integration and delivery tools available. Some examples include Jenkins, Travis CI, GoCD, CircleCI, Gitlab CI, Codeship, and TeamCity.

Source

Deploying and Scaling Jenkins on Kubernetes

Introduction

Jenkins is an open-source continuous integration and continuous delivery tool, which can be used to automate building, testing, and deploying software. It is widely considered the most popular automation server, being used by more than a million users worldwide. Some advantages of Jenkins include:

  • Open-source software with extensive community support
  • Java-based codebase, making it portable to all major platforms
  • A rich ecosystem of more than 1000 plugins

Jenkins works well with all popular Source Control Management systems (Git, SVN, Mercurial and CVS), popular build tools (Ant, Maven, Grunt), shell scripts and Windows batch commands, as well as testing frameworks and report generators. Jenkins plugins provide support for technologies like Docker and Kubernetes, which enable the creation and deployment of cloud-based microservice environments, both for testing as well as production deployments.

Jenkins supports the master-agent architecture (many build agents completing work scheduled by a master server), making it highly scalable. The master’s job is to schedule build jobs, distribute them to agents for actual execution, monitor the agents, and collect the build results. Master servers can also execute build jobs directly.

The agents’ task is to build the job sent by the master. A job can be configured to run on a particular type of agent, or if there are no special requirements, Jenkins can simply choose the next available agent.

Jenkins scalability provides many benefits:

  • Running many build plans in parallel
  • Automatically spinning up and removing agents to save costs
  • Distributing the load

Even though Jenkins includes scalability features out of the box, the process of configuring scaling is not always straightforward. There are many options available to scale Jenkins, and one of the most powerful is to use Kubernetes.

What is Kubernetes?

Kubernetes is an open-source container orchestration tool. Its main purpose is to manage containerized applications on clusters of nodes by helping operators deploy, scale, update, and maintain their services, and providing mechanisms for service discovery. You can learn more about what Kubernetes is and what it can do by checking out the official documentation.

Kubernetes is one of the best tools for managing scalable, container-based workloads. Most applications, including Jenkins, can be containerized, which makes Kubernetes a very good option.


Project Goals

Before we begin, let’s take a moment and describe the system we are attempting to build.

We want to start by deploying a Jenkins master instance onto a Kubernetes cluster. We will use Jenkins’ kubernetes plugin to scale Jenkins on the cluster by provisioning dynamic agents to accommodate its current workloads. The plugin will create a Kubernetes Pod for each build by launching an agent based on a specific Docker image. When the build completes, Jenkins will remove the Pod to save resources. Agents will be launched using JNLP (Java Network Launch Protocol), so the containers will be able to automatically connect to the Jenkins master once they are up and running.
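Conceptually, each dynamic agent the plugin launches is just a short-lived Pod running a JNLP agent container; a hedged sketch of what such a Pod might look like (the name, labels, image and master URL are illustrative — in practice the plugin generates and manages these Pods itself):

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent-example
  labels:
    jenkins: slave
spec:
  restartPolicy: Never          # the Pod exists only for the duration of one build
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave   # JNLP agent image; we build customized versions below
    env:
    - name: JENKINS_URL         # where the agent connects back to the master
      value: http://jenkins-master:8080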

Prerequisites and Setup

To complete this guide, you will need the following:

  • A Linux box to run Rancher: We will also use this to build custom Jenkins images. Follow the Rancher installation quick start guide to install Docker and Rancher on an appropriate host.
  • Docker Hub account: We will need an account with a container image repository to push the custom images for our Jenkins master and agents.
  • GCP account: We will provision our Kubernetes cluster on GCP. The free-tier of Google’s cloud platform should be enough to complete this guide.

Building Custom Images for Jenkins

Let’s start by building custom images for our Jenkins components and pushing them to Docker Hub.

Log in to the Linux server where you will be running Rancher and building images. If you haven’t already done so, install Docker and Rancher on the host by following the Rancher installation quick start guide. Once the host is ready, we can prepare our Dockerfiles.

Writing the Jenkins Master Dockerfile

We can begin by creating a file called Dockerfile-jenkins-master in the current directory to define the Jenkins master image:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-master

Inside, include the following Dockerfile build instructions. These instructions use the main Jenkins Docker image as a base and configure the plugins we will use to deploy onto a Kubernetes cluster:

FROM jenkins/jenkins:lts

# Plugins for better UX (not mandatory)
RUN /usr/local/bin/install-plugins.sh ansicolor
RUN /usr/local/bin/install-plugins.sh greenballs

# Plugin for scaling Jenkins agents
RUN /usr/local/bin/install-plugins.sh kubernetes

USER jenkins

Save and close the file when you are finished.

Writing the Jenkins Agent Dockerfiles

Next, we can create the Dockerfiles for our Jenkins agents. We will be creating two agent images to demonstrate how Jenkins identifies the correct agent to provision for each job.

Create an empty file in the current directory. We will copy this to the image as an identifier for each agent we are building:

[root@rancher-instance jenkins-kubernetes]# touch empty-test-file

Now, create a new Dockerfile for the first agent image:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-slave-jnlp1

This image will copy the empty file to a unique name to identify the agent being used.

FROM jenkins/jnlp-slave

# For testing purpose only
COPY empty-test-file /jenkins-slave1

ENTRYPOINT ["jenkins-slave"]

Save and close the file when you are finished.

Finally, define a second agent. This is identical to the previous agent, but includes a different file identifier:

[root@rancher-instance jenkins-kubernetes]# vi Dockerfile-jenkins-slave-jnlp2

FROM jenkins/jnlp-slave

# For testing purpose only
COPY empty-test-file /jenkins-slave2

ENTRYPOINT ["jenkins-slave"]

Save the file when you are finished.

Your working directory should now look like this:

[root@rancher-instance jenkins-kubernetes]# ls -l
total 16
-rw-r--r--. 1 root root 265 Oct 21 12:58 Dockerfile-jenkins-master
-rw-r--r--. 1 root root 322 Oct 21 13:16 Dockerfile-jenkins-slave-jnlp1
-rw-r--r--. 1 root root 315 Oct 21 13:05 Dockerfile-jenkins-slave-jnlp2

Building the Images and Pushing to Docker Hub

With the Dockerfiles written, we are now ready to build and push the images to Docker Hub.

Let’s start by building the image for the Jenkins master:

Note: In the command below, replace <dockerhub_user> with your Docker Hub account name.

[root@rancher-instance jenkins-kubernetes]# docker build -f Dockerfile-jenkins-master -t <dockerhub_user>/jenkins-master .


Sending build context to Docker daemon 12.29 kB
Step 1/5 : FROM jenkins/jenkins:lts
Trying to pull repository docker.io/jenkins/jenkins …
lts: Pulling from docker.io/jenkins/jenkins
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
bcb851da81db: Pull complete
eacbf73f87b6: Pull complete
9a7e396a0cbd: Pull complete
8900cde5602e: Pull complete
c8f62fde3f4d: Pull complete
eb91939ba069: Pull complete
b894a41fcbe2: Pull complete
b3c60e932390: Pull complete
18f663576636: Pull complete
4445e4b557b3: Pull complete
f09e9b4be8ed: Pull complete
e3abe5324295: Pull complete
432eff1ecbb4: Pull complete
Digest: sha256:d5c835407130a393becac222b979b120c675f8cd815fadd085adb76b216e4ce1
Status: Downloaded newer image for docker.io/jenkins/jenkins:lts
---> 9cff19ad8c8b
Step 2/5 : RUN /usr/local/bin/install-plugins.sh ansicolor
---> Running in ff752eeb107d

Creating initial locks…
Analyzing war…
Registering preinstalled plugins…
Using version-specific update center: https://updates.jenkins.io/2.138…
Downloading plugins…
Downloading plugin: ansicolor from https://updates.jenkins.io/2.138/latest/ansicolor.hpi
> ansicolor depends on workflow-step-api:2.12;resolution:=optional
Skipping optional dependency workflow-step-api

WAR bundled plugins:

Installed plugins:
ansicolor:0.5.2
Cleaning up locks
---> a018ec9e38e6
Removing intermediate container ff752eeb107d
Step 3/5 : RUN /usr/local/bin/install-plugins.sh greenballs
---> Running in 3505e21268b2

Creating initial locks…
Analyzing war…
Registering preinstalled plugins…
Using version-specific update center: https://updates.jenkins.io/2.138…
Downloading plugins…
Downloading plugin: greenballs from https://updates.jenkins.io/2.138/latest/greenballs.hpi

WAR bundled plugins:

Installed plugins:
ansicolor:0.5.2
greenballs:1.15
Cleaning up locks
---> 0af36c7afa67
Removing intermediate container 3505e21268b2
Step 4/5 : RUN /usr/local/bin/install-plugins.sh kubernetes
---> Running in ed0afae3ac94

Creating initial locks…
Analyzing war…
Registering preinstalled plugins…
Using version-specific update center: https://updates.jenkins.io/2.138…
Downloading plugins…
Downloading plugin: kubernetes from https://updates.jenkins.io/2.138/latest/kubernetes.hpi
> kubernetes depends on workflow-step-api:2.14,apache-httpcomponents-client-4-api:4.5.3-2.0,cloudbees-folder:5.18,durable-task:1.16,jackson2-api:2.7.3,variant:1.0,kubernetes-credentials:0.3.0,pipeline-model-extensions:1.3.1;resolution:=optional
Downloading plugin: workflow-step-api from https://updates.jenkins.io/2.138/latest/workflow-step-api.hpi
Downloading plugin: apache-httpcomponents-client-4-api from https://updates.jenkins.io/2.138/latest/apache-httpcomponents-client-4-api.hpi
Downloading plugin: cloudbees-folder from https://updates.jenkins.io/2.138/latest/cloudbees-folder.hpi
Downloading plugin: durable-task from https://updates.jenkins.io/2.138/latest/durable-task.hpi
Downloading plugin: jackson2-api from https://updates.jenkins.io/2.138/latest/jackson2-api.hpi
Downloading plugin: variant from https://updates.jenkins.io/2.138/latest/variant.hpi
Skipping optional dependency pipeline-model-extensions
Downloading plugin: kubernetes-credentials from https://updates.jenkins.io/2.138/latest/kubernetes-credentials.hpi
> workflow-step-api depends on structs:1.5
Downloading plugin: structs from https://updates.jenkins.io/2.138/latest/structs.hpi
> kubernetes-credentials depends on apache-httpcomponents-client-4-api:4.5.5-3.0,credentials:2.1.7,plain-credentials:1.3
Downloading plugin: credentials from https://updates.jenkins.io/2.138/latest/credentials.hpi
Downloading plugin: plain-credentials from https://updates.jenkins.io/2.138/latest/plain-credentials.hpi
> cloudbees-folder depends on credentials:2.1.11;resolution:=optional
Skipping optional dependency credentials
> plain-credentials depends on credentials:2.1.5
> credentials depends on structs:1.7

WAR bundled plugins:

Installed plugins:
ansicolor:0.5.2
apache-httpcomponents-client-4-api:4.5.5-3.0
cloudbees-folder:6.6
credentials:2.1.18
durable-task:1.26
greenballs:1.15
jackson2-api:2.8.11.3
kubernetes-credentials:0.4.0
kubernetes:1.13.0
plain-credentials:1.4
structs:1.17
variant:1.1
workflow-step-api:2.16
Cleaning up locks
---> dd19890f3139
Removing intermediate container ed0afae3ac94
Step 5/5 : USER jenkins
---> Running in c1066861d5a3
---> 034e27e479c5
Removing intermediate container c1066861d5a3
Successfully built 034e27e479c5

When the command returns, check the newly created image:

[root@rancher-instance jenkins-kubernetes]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<dockerhub_user>/jenkins-master latest 034e27e479c5 16 seconds ago 744 MB
docker.io/jenkins/jenkins lts 9cff19ad8c8b 10 days ago 730 MB

Log in to Docker Hub using the credentials of your account:

[root@rancher-instance jenkins-kubernetes]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded

Now, push the image to your Docker Hub account:

Note: In the command below, be sure to substitute your own Docker Hub account again.

[root@rancher-instance jenkins-kubernetes]# docker push <dockerhub_user>/jenkins-master


The push refers to a repository [docker.io/calinrus/jenkins-master]
b267c63b5961: Pushed
2cd1dc56ef56: Pushed
e99d7d8d116f: Pushed
8d117101392a: Mounted from jenkins/jenkins
c2607b4e8ae4: Mounted from jenkins/jenkins
81e4bc7cb1f1: Mounted from jenkins/jenkins
8bac294d4ee8: Mounted from jenkins/jenkins
707f669f3d58: Mounted from jenkins/jenkins
ac2b51b56ac6: Mounted from jenkins/jenkins
1b2b61bef21f: Mounted from jenkins/jenkins
efe1c25100f5: Mounted from jenkins/jenkins
8e656983ccf7: Mounted from jenkins/jenkins
ba000aef226d: Mounted from jenkins/jenkins
a046c3cdf994: Mounted from jenkins/jenkins
67e27eb293e8: Mounted from jenkins/jenkins
bdd1835d949d: Mounted from jenkins/jenkins
84bbcb8ef932: Mounted from jenkins/jenkins
0d67aa2185d5: Mounted from jenkins/jenkins
3499b696191f: Pushed
3b2a1688b8f3: Pushed
b7c56a9790e6: Mounted from jenkins/jenkins
ab016c9ea8f8: Mounted from jenkins/jenkins
2eb1c9bfc5ea: Mounted from jenkins/jenkins
0b703c74a09c: Mounted from jenkins/jenkins
b28ef0b6fef8: Mounted from jenkins/jenkins
latest: digest: sha256:6b2c8c63eccd795db5b633c70b03fe1b5fa9c4a3b68e3901b10dc3af7c3549f0 size: 5552

You will need to repeat similar commands to build the two images for the Jenkins JNLP agents:

Note: Substitute your Docker Hub account name for <dockerhub_user> in the commands below.

docker build -f Dockerfile-jenkins-slave-jnlp1 -t <dockerhub_user>/jenkins-slave-jnlp1 .
docker push <dockerhub_user>/jenkins-slave-jnlp1

docker build -f Dockerfile-jenkins-slave-jnlp2 -t <dockerhub_user>/jenkins-slave-jnlp2 .
docker push <dockerhub_user>/jenkins-slave-jnlp2
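Before moving on, it can be worth confirming that all three images were built and tagged under your account name. A quick check on the build host, for example:

# List the freshly built images; all three should carry your Docker Hub username
docker images | grep -E 'jenkins-(master|slave-jnlp)'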

If everything was successful, you should see something like this in your Docker Hub account:

02

Using Rancher to Deploy a Cluster

Now that our images are published, we can use Rancher to help us deploy a GKE cluster. If you set up Rancher earlier, you should be able to log into your instance by visiting your server’s IP address with a web browser.

Next, create a new GKE cluster. You will need to log in to your Google Cloud account to create a service account with the appropriate access. Follow the Rancher documentation on creating a GKE cluster to learn how to create a service account and then provision a cluster with Rancher.

Deploying Jenkins to the Cluster

As soon as the cluster is ready, we can deploy the Jenkins master and create some services. If you are familiar with kubectl, you can achieve this from the command line (a brief kubectl sketch follows the service definition below), but you can also easily deploy all of the components you need through Rancher’s UI.

Regardless of how you choose to submit workloads to your cluster, create the following files on your local computer to define the objects you need to create.

Start by creating a file to define the Jenkins deployment:

[root@rancher-instance k8s]# vi deployment.yml

Inside, paste the following:

Note: Make sure to change <dockerhub_user> to your Docker Hub account name in the file below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: <dockerhub_user>/jenkins-master
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
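Note that the emptyDir volume above means the Jenkins home directory is lost whenever the Pod is rescheduled, which is acceptable for this demo. For anything longer-lived you would likely swap in a PersistentVolumeClaim instead; a minimal sketch (the claim name and size below are illustrative and not part of this guide):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The volumes section of the deployment would then reference the claim with persistentVolumeClaim: claimName: jenkins-home instead of emptyDir: {}.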

Next, create a file to define the two services we will deploy.

One will be a LoadBalancer service, which provisions a public IP allowing us to access Jenkins from the Internet. The other will be a ClusterIP service, needed for internal communication between the master and the agents that will be provisioned later:

[root@rancher-instance k8s]# vi service.yml

Inside, paste the following YAML structure:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-jnlp
spec:
  type: ClusterIP
  ports:
    - port: 50000
      targetPort: 50000
  selector:
    app: jenkins
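If you are comfortable with kubectl and your kubeconfig already points at the new GKE cluster, the same objects can be created straight from these two files; a brief sketch:

kubectl apply -f deployment.yml
kubectl apply -f service.yml

# Confirm that the deployment and both services exist
kubectl get deployments,services

The rest of this guide follows the Rancher UI path instead.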

From Rancher, click on your managed cluster (called jenkins in this demo). In the upper-left menu, select the Default project and then select the Workloads tab.

03

From here, click Import YAML. On the page that follows, click the Read from a file button in the upper-right corner. Choose the local deployment.yml file you created on your computer and click Import.

04

Rancher will deploy a pod based on your Jenkins master image to the cluster:

06

Next, we need to configure a way to access the UI on the Jenkins master.

In the Load Balancing tab, follow the same process you used to import the previous file. Click the Import YAML button, followed by the Read from a file button. Next, select the service.yml file from your computer and click the Import button:

07

Rancher will begin to create your services. Provisioning the load balancer may take a few minutes.

08

As soon as the service is marked as Active, you can find its public IP address by clicking the three vertical dots at the right end of the load balancer’s row and selecting View/Edit YAML. From here, scroll down to find the IP address under status > loadBalancer > ingress > ip:

09
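If you deployed the services with kubectl, the same address can also be read directly from the service object; for example:

kubectl get service jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}'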

We can access the Jenkins UI by typing this IP into a web browser:

10

Configuring Dynamic Build Agents

With the Jenkins master up and running, we can go ahead and configure dynamic build agents to automatically spin up Pods as necessary.

Disabling the Default Master Build Agents

In the Jenkins UI, under Build Executor Status on the left side, two executors are configured by default, waiting to pick up build jobs. These are provided by the Jenkins master.

The master instance should only be in charge of scheduling build jobs, distributing the jobs to agents for execution, monitoring the agents, and getting the build results. Since we don’t want our master instance to execute builds, we will disable these.

Click on Manage Jenkins followed by Manage Nodes.

13

Click the gear icon associated with the master row.

14_1

On the following page, set # of executors to 0 and click Save.

14_2

The two idle executors will be removed from the Build Executor Status on the left side of the UI.

Gathering Configuration Information

To configure Jenkins to automatically provision build agents on our Kubernetes cluster, we need three pieces of information from our GCP account and one from our ClusterIP service.

In your GCP account, select Kubernetes Engine, followed by Clusters and then click on the name of your cluster. In the Details column, copy the Endpoint IP address for later reference. This is the URL we need to give Jenkins to connect to the cluster:

16

Next, click Show credentials to the right of the Endpoint. Copy the Username and Password for later reference.

17

Now, switch over to the Rancher UI. In the upper-left menu, select the Default project on the Jenkins cluster. Select the Workloads tab in the upper navigation pane and click the Service Discovery tab on the page:

20-3

Click on the three vertical dots associated with the jenkins-jnlp row and click View/Edit YAML. Copy the values under spec > clusterIP and spec > ports > port for later reference.
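If you have kubectl configured against the cluster, the same two values can also be read from the command line; for example:

kubectl get service jenkins-jnlp -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}'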

Configuring the Jenkins Kubernetes Plugin

Back in the main Jenkins dashboard, click on Manage Jenkins, followed by Manage Plugins:

11

Click the Installed tab and check that the Kubernetes plugin is installed:

12

We can now configure the plugin. Go to Manage Jenkins and select Configure System:

18

Scroll to the Cloud section at the bottom of the page. Click on Add a new cloud and select Kubernetes.

19

On the form that follows, in the Kubernetes URL field, enter https:// followed by the cluster endpoint IP address you copied from your GCP account.

Under Credentials, click the Add button and select Jenkins. On the form that appears, enter the username and password you copied from your GCP account and click the Add button at the bottom.

When you return to the Kubernetes form, select the credentials you just added from the Credentials drop down menu and click the Test Connection button. If the configuration is correct, the test will show “Connection test successful”.

Next, in the Jenkins tunnel field, enter the IP address and port that you retrieved from the jenkins-jnlp service in the Rancher UI, separated by a colon:

20-1

Now, scroll down to the Images section at the bottom of the page, click the Add Pod Template button, and select Kubernetes Pod Template. Fill out the Name and Labels fields with unique values to identify your first agent. We will use the label to specify which agent image should be used to run each build.

Next, in the Containers field, click the Add Container button and select Container Template. In the section that appears, fill out the following fields:

  • Name: jnlp (this is required by the Jenkins agent)
  • Docker image: <dockerhub_user>/jenkins-slave-jnlp1 (make sure to change the Docker Hub username)
  • Command to run: Delete the value here
  • Arguments to pass to the command: Delete the value here

The rest of the fields can be left as they are.

21

Next, click the Add Pod Template button and select Kubernetes Pod Template again. Repeat the process for the second agent image you created. Make sure to change the values to refer to your second image where applicable:

22

Click the Save button at the bottom to save your changes and continue.
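If you would rather keep this cloud configuration in version control instead of clicking through the UI, the Jenkins Configuration as Code plugin (not installed in this guide) can express roughly the same settings. The sketch below is illustrative only: field names can vary between plugin versions, and the endpoint, credentials ID, tunnel address, labels, and image names are placeholders for the values you entered above.

jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        serverUrl: "https://<cluster_endpoint_ip>"
        credentialsId: "gke-credentials"        # hypothetical ID for the credentials added above
        jenkinsTunnel: "<jnlp_cluster_ip>:50000"
        templates:
          - name: "jnlp-agent-1"
            label: "<first_agent_label>"
            containers:
              - name: "jnlp"
                image: "<dockerhub_user>/jenkins-slave-jnlp1"
          - name: "jnlp-agent-2"
            label: "<second_agent_label>"
            containers:
              - name: "jnlp"
                image: "<dockerhub_user>/jenkins-slave-jnlp2"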

Testing Dynamic Build Jobs

Now that our configuration is complete, we can create some build jobs to ensure that Jenkins can scale on top of Kubernetes. We will create five build jobs for each of our Jenkins agents.

On the main Jenkins page, click New Item on the left side. Enter a name for the first build of your first agent. Select Freestyle project and click the OK button.

23

On the next page, in the Label Expression field, type the label you set for your first Jenkins agent image. If you click out of the field, a message will appear indicating that the label is serviced by a cloud:

24

Scroll down to the Build Environment section and check Color ANSI Console Output.

In the Build section, click Add build step and select Execute shell. Paste the following script in the text box that appears:

#!/bin/bash

RED='\033[0;31m'
NC='\033[0m'

result=`ls / | grep -e jenkins-slave1 -e jenkins-slave2`
echo -e "${RED}Docker image is for $result ${NC}"

Click the Save button when you are finished.

25

Create another four jobs for the first agent by clicking New Item, filling out a new name, and using the Copy from field to copy from your first build. You can save each build without changes to duplicate the first build exactly.

Next, configure the first job for your second Jenkins agent. Click New Item, select a name for the first job for the second agent, and copy the job from your first agent again. This time, we will modify the fields on the configuration page before saving.

First, change the Label Expression field to match the label for your second agent.

Next, replace the script in the text box in the Build section with the following script:

#!/bin/bash

BLUE='\e[34m'
NC='\033[0m'

result=`ls / | grep -e jenkins-slave1 -e jenkins-slave2`
echo -e "${BLUE}Docker image is for $result ${NC}"

Click Save when you are finished.

26

Create four more builds for your second agent by copying from the job we just created.

Now, go to the home screen and start all ten jobs you just created by clicking on the icon on the far right side of each row. As soon as you start them, they will be queued for execution, as indicated by the Build Queue section:

28

After a few seconds, Pods will begin to be created to execute the builds (you can verify this in Rancher’s Workloads tab). Jenkins will create one Pod for each job. As each agent starts, it connects to the master and receives a job from the queue to execute.
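If you prefer the command line over Rancher’s UI, and you have kubectl access to the cluster, you can watch the agent Pods come and go as the builds run; for example:

# Agent Pods appear while builds run and disappear once their jobs finish
kubectl get pods --watch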

29

30

As soon as an agent finishes processing its job, it is automatically removed from the cluster:

31

To check the status of our jobs, we can click on one from each agent. Click the build from the Build History and then click Console Output. Jobs executed by the first agent should specify that the jenkins-slave1 Docker image was used, while builds executed by the second agent should indicate that the jenkins-slave2 image was used:

32

33

If you see the output above, Jenkins is configured correctly and functioning as intended. You can now begin to customize your Kubernetes-backed build system to help your team test and release software.

Conclusion

In this article, we configured Jenkins to automatically provision build agents on demand by connecting it with a Kubernetes cluster managed by Rancher. To achieve this, we completed the following steps:

  • Created a cluster using Rancher
  • Created custom Docker images for the Jenkins master and agents
  • Deployed the Jenkins master and an L4 LoadBalancer service to the Kubernetes cluster
  • Configured the Jenkins Kubernetes plugin to automatically spawn dynamic agents on our cluster
  • Tested a scenario using multiple build jobs with dedicated agent images

The main purpose of this article was to highlight the basic configuration necessary to set up a Jenkins master and agent architecture. We saw how Jenkins launched agents using JNLP and how the containers automatically connected to the Jenkins master to receive instructions. To achieve this, we used Rancher to create the cluster, deploy a workload, and monitor the resulting Pods. Afterwards, we relied on the Jenkins Kubernetes plugin to glue together all of the different components.

Source

Introduction to Kubernetes Monitoring

Introduction

With over 40,000 stars on GitHub, more than 70,000 commits, and major contributors like Google and Red Hat, Kubernetes has rapidly taken over the container ecosystem to become the true leader of container orchestration platforms.

Understanding Kubernetes and Its Abstractions

At the infrastructure level, a Kubernetes cluster is a set of physical or virtual machines acting in a specific role. The machines acting in the role of Master act as the brain of all operations and are charged with orchestrating containers that run on all of the Nodes.

  • Master components manage the lifecycle of a pod, the base unit of deployment within a Kubernetes cluster. If a pod dies, the Controller creates a new one. If you scale the number of pod replicas up or down, the Controller creates or destroys pods to satisfy your request. The Master role includes the following components:
    • kube-apiserver – exposes APIs for the other master components.
    • etcd – a consistent and highly-available key/value store used for storing all internal cluster data.
    • kube-scheduler – uses information in the Pod spec to decide on which Node to run a Pod.
    • kube-controller-manager – responsible for Node management (detecting if a Node fails), pod replication, and endpoint creation.
    • cloud-controller-manager – runs controllers that interact with the underlying cloud providers.
  • Node components are worker machines in Kubernetes and are managed by the Master. A node may be a virtual machine (VM) or physical machine, and Kubernetes runs equally well on both types of systems. Each node contains the necessary components to run pods:
    • kubelet: handles all communication between the Master and the node on which it is running. It interfaces with the container runtime to deploy and monitor containers.
    • kube-proxy: maintains the network rules on the host and handles transmission of packets between pods, the host, and the outside world.
    • container runtime: responsible for running containers on the host. The most popular engine is Docker, although Kubernetes supports container runtimes from rkt, runc and others.

01

From a logical perspective, a Kubernetes deployment is composed of various components, each serving a specific purpose within the cluster.

  • Pods are the basic unit of deployment within Kubernetes. A pod consists of one or more containers that share the same network namespace and IP address. Best practices recommend that you create one pod per application component so you can scale and control them separately.
  • Services provide a consistent IP address in front of a set of pods and a policy that controls access to them. The set of pods targeted by a service is often determined by a label selector. This makes it easy to point the service to a different set of pods during upgrades or blue/green deployments.
  • ReplicaSets are controlled by deployments and ensure that the desired number of pods for that deployment are running.
  • Namespaces define a logical namespace for resources such as pods and services. They allow resources in different namespaces to use the same names, whereas resources within a single namespace must have unique names. Rancher uses namespaces with its role-based access control to provide a secure separation between namespaces and the resources running inside of them.
  • Metadata, such as labels and annotations, marks resources based on their deployment characteristics.

02

Monitoring Kubernetes

Multiple services and namespaces can be spread across the infrastructure. As seen above, each of the services is made up of pods, which can have one or more containers inside. With so many moving parts, monitoring even a small Kubernetes cluster can present a challenge. It requires a deep understanding of the application architecture and functionality in order to monitor it effectively.

Kubernetes ships with tools for monitoring the cluster:

  • Probes actively monitor the health of a container. If a liveness probe determines that a container is no longer healthy, the kubelet will restart it (a minimal probe example follows this list).
  • cAdvisor is an open source agent that monitors resource usage and analyzes the performance of containers. Originally created by Google, cAdvisor is now integrated with the Kubelet. It collects, aggregates, processes and exports metrics such as CPU, memory, file and network usage for all containers running on a given node.
  • The Kubernetes dashboard is an add-on which gives an overview of the resources running on your cluster. It also gives a very basic means of deploying and interacting with those resources.
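As a concrete example of the first item, a liveness probe is declared on the container in the Pod spec. A minimal sketch, using a plain nginx image purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
    - name: web
      image: nginx
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10

If the HTTP check fails repeatedly, the kubelet restarts the container.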

Kubernetes has tremendous capability for automatically recovering from failures. It can restart pods if a process crashes, and it will redistribute pods if a node fails. However, for all of its power, there are times when it cannot fix a problem. In order to detect those situations, we need additional monitoring.

Layers Of Monitoring

Infrastructure

All clusters should have monitoring of the underlying server components because problems at the server level will show up in the workloads.

What to monitor?
  • CPU utilization. Monitoring the CPU will reveal both system and user consumption, and it will also show iowait. When running clusters in the cloud or with any network storage, iowait will indicate bottlenecks waiting for storage reads and writes (i/o processes). An oversubscribed storage framework can impact performance.
  • Memory usage. Monitoring memory will show how much memory is in use and how much is available, either as free memory or as cache. Systems that run up against memory limits will begin to swap (if swap is available on the system), and swapping will rapidly degrade performance.
  • Disk pressure. If a system is running write-intensive services like etcd or any datastore, running out of disk space can be catastrophic. The inability to write data will result in corruption, and that corruption can transfer to real-world losses. Technologies like LVM make it trivial to grow disk space as needed, but keeping an eye on it is imperative.
  • Network bandwidth. In today’s era of gigabit interfaces, it might seem like you can never run out of bandwidth. However, it doesn’t take more than a few aberrant services, a data breach, system compromise, or DOS attack to eat up all of the bandwidth and cause an outage. Keeping awareness of your normal data consumption and the patterns of your application will help you keep costs down and also aid in capacity planning.
  • Pod resources. The Kubernetes scheduler works best when it knows what resources a pod needs. It can then assure that it places pods on nodes where the resources are available. When designing your cluster, consider how many nodes can fail before the remaining nodes can no longer run all of the desired resources. Using a service such as a cloud autoscaling group will make recovery quick, but be sure that the remaining nodes can handle the increased load for the time that it takes to bring the failed node back online.
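A quick way to see how much of each node’s capacity is already requested, and whether any node is reporting memory or disk pressure, is from the command line (kubectl top relies on the cluster’s metrics add-on being installed):

kubectl top nodes
kubectl describe node <node_name>

The describe output includes the node’s Conditions (such as MemoryPressure and DiskPressure) and an Allocated resources section showing how much CPU and memory the scheduled pods have requested.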

Kubernetes Services

All of the components that make up a Kubernetes Master or Worker, including etcd, are critical to the health of your applications. If any of these fail, the monitoring system needs to detect the failure and either fix it or send an alert.

Internal Services

The final layer is that of the Kubernetes resources themselves. Kubernetes exposes metrics about the resources, and we can also monitor the applications directly. Although we can trust that Kubernetes will work to maintain the desired state, if it’s unable to do so, we need a way for a human to intervene and fix the issue.

Monitoring with Rancher

In addition to managing Kubernetes clusters running anywhere, on any provider, Rancher will also monitor the resources running inside of those clusters and send alerts when they exceed defined thresholds.

There are already dozens of tutorials on how to deploy Rancher. If you don’t already have a cluster running, pause here and visit our quickstart guide to spin one up. When it’s running, return here to continue with monitoring.

The cluster overview gives you an idea of the resources in use and the state of the Kubernetes components. In our case, we’re using 78% of the CPU, 26% of the RAM and 11% of the maximum number of pods we can run within the cluster.

04

When you click on the Nodes tab, you’ll see additional information about each of the nodes running in the cluster, and when you click on a particular node, the view focuses on the health of that one member.

05

06

The Workloads tab shows the pods running in your cluster. If you don’t have anything running, launch a workload running the nginx image and scale it up to multiple replicas.

When you select the name of the workload, Rancher presents a page that shows information about it. At the top of the page, it shows each of the pods, which node they’re on, their IP addresses, and their states. Clicking on any individual pod takes us one level deeper, where we see detailed information about only that pod. The hamburger menu icon in the top right corner lets us interact with the pod, and through this we can execute a shell, view the logs, or delete the pod.

07

08

09

10

Other tabs show information about different Kubernetes resources, including Load Balancing for ingress or services of type LoadBalancer, Service Discovery for other service types, and Volumes for information about any volumes configured in the cluster.

11

12

Use Prometheus for Monitoring

The information visible in the Rancher UI is useful for troubleshooting, but it’s not the best way to actively track the state of the cluster throughout every moment of its life. For that we’ll use Prometheus, a sibling project of Kubernetes under the care and guidance of the Cloud Native Computing Foundation. We’ll also use Grafana, a tool for converting time-series data into beautiful graphs and dashboards.

Prometheus is an open-source application for monitoring systems and generating alerts. It can monitor almost anything, from servers to applications, databases, or even a single process. In the Prometheus lexicon it monitors targets, and each unit of a target is called a metric. The act of retrieving information about a target is known as scraping. Prometheus will scrape targets at designated intervals and store the information in a time-series database. Prometheus has its own query language called PromQL.
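As a small example, once Prometheus is running you can ask its HTTP API for the per-pod CPU usage rate over the last five minutes. The hostname and port below are placeholders for wherever your Prometheus service is exposed, and depending on the cluster version the metric label may be pod_name rather than pod:

# Query the per-pod CPU rate via the Prometheus HTTP API
curl -sG 'http://<prometheus-host>:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)'

Grafana issues essentially the same PromQL queries under the hood when rendering its dashboards.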

Grafana is also open source and runs as a web application. Although frequently used with Prometheus, it also supports backend datastores such as InfluxDB, Graphite, Elasticsearch, and others. Grafana makes it easy to create graphs and assemble those graphs into dashboards. Those dashboards can be protected by a strong authentication and authorization layer, and they can also be shared with others without giving them access to the server itself. Grafana makes heavy use of JSON for its object definitions, which makes its graphs and dashboards extremely portable and easy to use with version control.

Rancher includes both Prometheus and Grafana in its application catalog, so we can deploy them with a few clicks.

Install Prometheus and Grafana

Visit the Catalog Apps page for your cluster and search for Prometheus. Installing it will also install Grafana and AlertManager. For this article it’s sufficient to leave everything at its defaults, but for a production deployment, read the information under Detailed Descriptions to see just how much configuration is available within the chart.

13

14

When you click Launch, Rancher will deploy the applications into your cluster, and after a few minutes, you’ll see all of the workloads in an Active state under the prometheus namespace.

15

The defaults set up a Layer7 ingress using xip.io, and we can see this on the Load Balancing tab. Clicking on the link will open the Grafana dashboard.

16

The Prometheus installation also deployed several dashboards into Grafana, so immediately we can see information about the cluster and start to view its performance over time.

17

18

19

20

21

22

23

24

Conclusion

Kubernetes works tirelessly to keep your applications running, but that doesn’t free you from the obligation of staying aware of how they’re doing. As soon as you begin to rely on Kubernetes to do work for you, the responsible thing to do is deploy a monitoring system that keeps you informed and empowers you to make decisions.

Prometheus and Grafana will do this for you, and when you use Rancher, the time it would normally take to deploy these two applications is reduced to mere minutes.

Calin Rus


Source

A bright 2019 for OpenFaaS

Let’s talk about the impact of working on OpenFaaS full-time for the past year and some important updates for 2019, including how you can help and get involved.

OpenFaaS 2017/18 Sticker

Looking back

Over 12 months ago I joined VMware to work full-time on OpenFaaS in the Open Source Program Office. While there I built out a team to complement the work of the community and to accelerate the development of both OpenFaaS (the main project) and to execute on my vision for OpenFaaS Cloud.

Over the year I travelled to, and spoke at, around a dozen events and conferences around the world, making many friends & connections in the tech community.

Here are some of the things I’m most proud of over that time:

Moving on

This has been an amazing year for the project and a key part of that has been due to my ability to focus as a full-time employee with a salary from VMware whilst keeping OpenFaaS independent. I am very thankful to have had the opportunity to grow and for those who championed my work and supported me during my time there. As of today I will be moving on from VMware.

Historically the project has had strong interest from customers and potential sponsors. The community is pushing the project forward rapidly too. That gives me great pride and means I have the opportunity to consider many ways to keep working on the project myself.

I’m excited about what the next year will bring for our project and community. Let’s make 2019 an even brighter year for OpenFaaS.

Let’s talk?

If you are interested in a conversation about hiring me to support & sponsor my Open Source work then email me at: alex@openfaas.com

There are several ways you can support the project, even if you don’t have the resources to hire me full-time. You can also donate as an individual or as a sponsor through Patreon, OpenCollective or a one-off via PayPal. Individual backers and sponsors enjoy different tiers of benefits as outlined at: https://www.openfaas.com/donate/

Connect with me on GitHub LinkedIn Twitter

If you’re still figuring out where to place Serverless & FaaS in your work, or whether it’s even relevant to your world, catch up on all the latest with my video from GOTO Copenhagen and then get connected with the community using the links below.

Source

Kubernetes on the Edge

Today we announced a partnership with Arm to create a Kubernetes-based platform for IoT, edge, and data center nodes, all powered by Arm servers. The platform includes a port of Rancher Kubernetes Engine (RKE) and RancherOS to Arm Neoverse-based servers. We have also enhanced Rancher 2.1 so that a single Rancher server can manage mixed x86 and Arm 64 nodes. Customers can therefore have x86 clusters running in the data center and Arm clusters running on the edge.

Rancher and Arm are working jointly on a Smart City project in China. In that project, Arm servers are installed in buildings across large organizations. These Arm servers collect data from various sensors, including human presence, ambient temperature, and air quality. Sensor data is then processed in centralized data centers and used to coordinate power distribution and HVAC controls. Each edge node runs a standalone Arm Kubernetes cluster. Standard x86 Kubernetes clusters running in central data centers are used to process and analyze the large amount of data collected from edge nodes. A single Rancher server is used to manage all x86 and Arm Kubernetes clusters.

We learned a lot in the process of planning and executing this project. In the beginning, I did not understand why this customer wanted to run an independent Kubernetes cluster on each edge node. It seemed quite wasteful to run a separate instance of etcd, Kubernetes master, and kubelet on each edge node. Why couldn’t we manage all the edge nodes as one large Kubernetes cluster? It turned out the edge nodes had spotty network connectivity and therefore were not able to form a resilient multi-node Kubernetes cluster. Then why did the customer want to set up a Kubernetes cluster on the edge in the first place? Why couldn’t they just run standard Linux nodes and deploy rpm packages to the edge nodes? It turned out the software stack they were trying to deploy was quite sophisticated. It involved multiple services and required service discovery. The components had to be updated from time to time. Kubernetes was a great platform to manage application deployment.

The main challenge this customer had was that they needed a platform that can manage multiple Kubernetes clusters. Rancher suited their needs very well. Even better, we enhanced Rancher 2.1 so that it can manage mixed x86 and Arm clusters. Today, we simply rebuild edge clusters when they fail. In Rancher 2.2, we will be able to backup and restore RKE clusters, so that users no longer need to worry about losing etcd databases on edge node failures.

Kubernetes was designed to be a scalable platform. Our experience shows that it also works well as a single-node edge computing platform. We do wish, however, that Kubernetes itself consumed fewer resources. The single-node Kubernetes platform worked well on nodes with more than 8GB of memory, but it struggled on 4GB nodes. We are exploring ways to create a special-purpose Kubernetes distro that consumes fewer resources. Darren Shepherd, our Chief Architect, has been working on a minimal Kubernetes distro. It is still a work in progress. I hope, when he’s done, we will be able to fit the entire Kubernetes platform (etcd, master, kubelet) in less than 1GB of memory, so that our edge solution can work on 4GB nodes.

If you are interested in exploring Kubernetes on the edge or in reducing the footprint of your Kubernetes distros, please reach out to us at info@rancher.com. We’d love to work with you. We are super excited about the potential for applying Kubernetes technology on the edge. Kubernetes holds the potential to be the common platform for the cloud and for the edge.

Source

Who is keeping your cloud native stack secure?


Survey after survey shows that around 70% of organizations are using cloud infrastructure. The last of the late majority will join this year, and the laggards will follow within a few years. Regardless of where a company is in its cloud native journey, security remains an issue: the same surveys rank security as the first priority for enterprises.

At Giant Swarm we pay close attention to the need for security. Last year we shared a 5 part series about security on our blog (I, II, III, IV, V). It presented some of the basics and our philosophy when it comes to security.

So…we talk the talk, great! What is important to our customers is that we also walk the walk.

In December 2018, a severe vulnerability was discovered in the Kubernetes API server. It allowed an unauthenticated user to perform privilege escalation and gain full admin privileges on a cluster.

The details of the vulnerability were discussed at length in the Kubernetes community. The chain of events is well documented across GitHub and Google Groups. Other contributors to the Kubernetes ecosystem provided analyses of the problem. One could easily find information about the problem, its identification and suggested mitigation.

The recommendation was to upgrade Kubernetes, and new releases that included the fix were created for all active versions (v1.10.11, v1.11.5, v1.12.3). Earlier versions did not receive a patch, so their upgrade deficit grew to include a security vulnerability.

At Giant Swarm, we were ready to upgrade all our customers to the secure version the next day, regardless of the Kubernetes version or the cloud provider. Customers on AWS, Azure, and on-premises were all proactively notified of the vulnerability and its solution. Most of our customers don’t expose their Kubernetes APIs to the public internet. Still, all benefited from a quick and transparent upgrade that allowed them to keep running their businesses, threat free.

This incident highlights how important it is to have several layers of security. It also shows that only with an automated update system, and the ability to quickly test and release upgrades, can you and your business really be safe.

Want to find out how the Giant Swarm Infrastructure deploys and scales Kubernetes Clusters? Request your free trial here by taking our survey and find out if you’re eligible.

Source