Adopting a GKE Cluster with Rancher 2.0

Rancher 2.0 is out and
odds are, you’re wondering what’s so shiny and new about it. Well,
here’s a huge selling point for the next big Rancher release:
Kubernetes cluster adoption! That’s right, we here at Rancher wanted
more kids, so we decided it was time to adopt. In all seriousness
though, this feature helps make Rancher more relevant to developers who
already have Kubernetes clusters deployed and are looking for a new way
to manage them. One of the most powerful aspects of this feature is that
it allows you to build a cluster from multiple cloud container engines,
including GKE, and keep them all under one management roof. In order to
adopt a GKE cluster into Rancher 2.0, you’ll first need an instance of
Rancher 2.0 running. To bootstrap a 2.0 server, check out this
guide here. Stop
before the Add Hosts step and pop right back here when you’re
done.
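
If you just want a throwaway server to follow along with, the 2.0 Tech Preview can be stood up with a single Docker command along these lines (the image tag and published port here are assumptions based on the preview releases; treat the linked guide as the authoritative steps):

docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:preview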

Acquiring your Kubectl Command

Now that you have a Rancher 2.0 server up and running, you’ll be
presented with a page like the one below:

Rancher 2.0 Home Page

Of course, we want to select Use Existing Kubernetes as we’re trying
to adopt an existing GKE cluster. On the next page, we’re presented
with a Host Registration URL menu, where we want to provide the
publicly accessible hostname and port for our Rancher server. As I set
up my host using a domain name in the form rancher.<domain>.<tld>,
Rancher has already found the address for the site and defaults to the
proper hostname. If you’re using a different network setup, just know
that this hostname has to be accessible by all the machines in the
Kubernetes cluster we are adopting. Click Save when you’re finished
making your choice.

Host Registration URL

Now we are presented with a kubectl command which can be copied to
your clipboard by clicking the little clipboard icon to the right of the
command. Copy the command and save it in a notepad to use later.

Kubectl Apply

Important note: if you already have a cluster in GKE, you can skip
the following section and go straight to Adopting a GKE
Cluster

Creating a GKE Cluster

Next, hop on over to your GKE control panel and, if you don’t yet have
a GKE cluster up and running, click the Create Cluster button.

Create Cluster

The cluster settings can be configured however you wish. I’ll be using
the name eric-rancher-demo, the us-central1-a zone,
version 1.7.5-gke.1 (default), and 1 vCPU, 2 GB RAM machines. Leave the
OS as the Container-Optimized OS (cos) and all other settings at their
defaults. My finished settings look like this:

Container Cluster Settings

You might want to grab some coffee after clicking Create, as GKE takes
a while (5-10 minutes) to stand up your Kubernetes cluster.

Adopting a GKE Cluster

Now that we have both a Rancher 2.0 server stood up and a GKE cluster
running and ready, we want to configure gcloud utils and kubectl to
connect to our GKE cluster. In order to install gcloud utils on Debian
and Ubuntu based machines, follow the steps
outlined here on
Google’s website. For all other distros, find your
guide here. Once we get
the message gcloud has now been configured!, we can move on to the next
step. To install kubectl, all we need to do is
type gcloud components install kubectl. Now that we
have kubectl installed, in order to connect to our cluster, we want to
click on the Connect to the cluster link at the top of the page in
GKE.

Connect to the cluster

A window will pop up providing instructions
to Configure kubectl command line access by running the following command.

Copy that command and paste it into your terminal window. In this case:

$ gcloud container clusters get-credentials eric-rancher-demo --zone us-central1-a --project rancher-dev
> Fetching cluster endpoint and auth data.
> kubeconfig entry generated for eric-rancher-demo.

Our kubectl is now hooked up to the cluster. We should be able to
simply paste the command from the Rancher 2.0 setup we did earlier, and
Rancher will adopt the cluster.

$ kubectl apply -f http://rancher.<domain>.<tld>/v3/scripts/2CFC9454A034E7C3E367:1483142400000:feQRTz4WmIemlAUSy4O37vuF0.yaml
> namespace "cattle-system" created
> serviceaccount "rancher" created
> clusterrolebinding "rancher" created
> secret "rancher-credentials-39124a03" created
> pod "cluster-register-39124a03" created
> daemonset "rancher-agent" created

Note: You cannot adopt a Kubernetes cluster if it has been previously
adopted by a Rancher 2.0 server unless you delete the
namespace cattle-system from the Kubernetes installation. You can do
this by typing kubectl delete namespace cattle-system and waiting 5
minutes for the namespace to clear out.

Source

Local Kubernetes for Windows – MiniKube vs Docker Desktop

Moving your application into a Kubernetes cluster presents two major challenges. The first one is the adoption of Kubernetes deployments as an integral part of your Continuous Delivery pipelines. Thankfully this challenge is already solved using the native Codefresh-Kubernetes integration that also includes the GUI dashboard, giving you a full status of your cluster.

The second challenge for Kubernetes adoption is the way developers work locally on their workstations. In most cases, a well-designed 12-factor application can be developed locally without the need for a full cluster. Sometimes, however, the need for a cluster that is running locally is imperative especially when it comes to integration tests or any other scenario where the local environment must represent the production one.

There are several ways to run a Kubernetes cluster locally and in this article, we will examine the following solutions for Windows (future blog posts will cover Linux and Mac):

  • Minikube
  • Docker Desktop (Docker for Windows) with its built-in Kubernetes support

A local machine Kubernetes solution can help developers to configure and run a Kubernetes cluster in their local development environments and test their application during all development phases, without investing significant effort to configure and manage a Kubernetes cluster.

We are evaluating these solutions and providing a short comparison based on ease of installation, deployment, and management. Notice that Minikube is available on all major platforms (Windows, Mac, Linux). Docker for Windows obviously works only on Windows, and even there it has some extra requirements.

Windows considerations

Docker-For-Windows has recently added native Kubernetes integration. To use it you need a very recent OS version (Windows 10 Pro). If you have an older version (e.g. Windows 7) or a non-Pro edition (e.g. Home) then Minikube is the only option.

Docker-for-windows uses a Type-1 hypervisor (Hyper-V), which generally performs better than Type-2 hypervisors such as VirtualBox, while Minikube supports both kinds. Unfortunately, there are a couple of limitations depending on which technology you use, since you cannot have Type-1 and Type-2 hypervisors running at the same time on your machine:

  • If you are running Type-2 virtual machines on your desktop (for example, VirtualBox VMs managed by Vagrant), then you will not be able to run them once you enable a Type-1 hypervisor.
  • If you want to run Windows containers, then using docker-for-windows is the only option you have.
  • Switching between these two hypervisors requires a machine restart.
  • To use Hyper-V hypervisor you need to have installed Windows 10 Pro edition on your development machine.

Depending on your needs and your development environment, you need to make a choice between docker-for-windows and Minikube.

Both solutions can be installed either manually or by using the Chocolatey package manager for Windows. Installation of Chocolatey is easy, just use the following command from PowerShell in administrative mode:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString(‘https://chocolatey.org/install.ps1’))

Complete installation instructions for Chocolatey can be found in the documentation.

Docker on Windows with Kubernetes support

If you want to run Windows containers then Docker-For-Windows is the only possible choice. Minikube will only run Linux based containers (in a VM).

This means that for Windows containers the considerations mentioned previously are actually hard requirements. If you want to run Windows Containers then:

  • You need to run Windows 10 Pro
  • You need to enable the hyper-v hypervisor

In addition, at the time of writing, Kubernetes is only available in Docker for Windows 18.06 CE Edge. Docker for Windows 18.06 CE Edge includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance as a single node cluster, and it is pre-configured in terms of clusters, users and contexts.

You have two options to install docker-for-windows: either download it from the Docker Store, or use the Chocolatey package manager. If you are using Chocolatey (recommended), you can install docker-for-windows with the following command:

choco install docker-for-windows -pre

Hint: If Hyper-V is available but not enabled, then you can enable it using the following command in PowerShell with administrative privileges. Note that enabling/disabling the Hyper-V hypervisor requires a restart of your local machine.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

After the successful installation of docker-for-windows, you can verify that you have installed Kubernetes by executing the following command in Windows PowerShell:
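
A command that fits here (a suggestion rather than the article’s original snippet) is a simple version check, which confirms that both the kubectl client and the bundled Kubernetes server respond:

kubectl version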

You can view the Kubernetes configuration details (like name, port, context) using the following command:
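
kubectl’s built-in configuration viewer covers exactly this, so the following is a reasonable stand-in for the command the article refers to:

kubectl config view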

Management

When Kubernetes support is enabled, you can deploy your workloads in parallel on Kubernetes, Swarm, and as standalone containers. Note that enabling or disabling the Kubernetes server does not affect your other workloads. If you are working with multiple Kubernetes clusters and different environments you will be familiar with switching contexts. You can view contexts using the kubectl config command:

kubectl config get-contexts

Set the context to use docker-for-desktop:

kubectl config use-context docker-for-desktop

Unfortunately, Kubernetes does not come with a dashboard by default, and you need to install it with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

To view the dashboard in your web browser run:
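
Given the localhost:8001 address below, the intended command is the kubectl proxy, which forwards the API server to your local machine:

kubectl proxy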

And navigate to your Kubernetes Dashboard at: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

Deployment

Deploying an application is very straightforward. In this example, we install a cluster of nginx servers, using the following commands:

kubectl run nginx --image nginx

kubectl expose deployment nginx --port 80 --target-port 80 --name nginx

Once Kubernetes has finished downloading the containers, you can see them by using the command:
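
Listing the pods (and, if you like, the deployment) is the natural fit here:

kubectl get pods
kubectl get deployments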

You can use the dashboard, as mentioned above, to verify that nginx is installed and your cluster is in working condition. You can deploy any other Kubernetes application you have developed in a similar manner.

Kubernetes on Windows using minikube

Another option for running Kubernetes locally is to use Minikube. In general, Minikube is a VM (by default a VirtualBox instance) running Linux with the Docker daemon pre-installed. This is actually the only option if your machine does not satisfy the requirements mentioned in the first part of this article.

The main advantage of Minikube for Windows is that it supports several drivers including Hyper-V and VirtualBox, and you can use most of the Kubernetes add-ons. You can find a list of Minikube add-ons here.

Installation

Instead of manually installing all the packages for Minikube, you can install all prerequisites at once using the Chocolatey package manager. To install Minikube, you can use the following command in PowerShell:

choco install minikube -y

Option 1 – Hyper-V Support

To start a Minikube cluster with Hyper-V support, you first need to create an external network switch bound to a physical network adapter (Ethernet or Wi-Fi). The following steps must be followed:

Step 1: Identify your physical network adapters (Ethernet and/or Wi-Fi) using the command:
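
In PowerShell, the standard way to list adapters is:

Get-NetAdapter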

Step 2: Create an external virtual switch using the following command:

New-VMSwitch -Name "myCluster" -AllowManagement $True -NetAdapterName "<adapter_name>"

Finally, to start the Kubernetes cluster use the following command:

minikube start --vm-driver=hyperv --hyperv-virtual-switch=myCluster

If the last command was successful, then you can use the following command to see the Kubernetes cluster:
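
Either of the following gives a quick view of the running cluster:

minikube status
kubectl get nodes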

Finally, if you want to delete the created cluster, then you can achieve it with the following command:
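
Minikube makes this a one-liner:

minikube delete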

Option 2 – VirtualBox Support

You can also use Minikube in an alternative mode where a full Virtual machine will be used in the form of Virtualbox. To start a Minikube cluster in this mode, you need to execute the following command:
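
With the VirtualBox driver this is:

minikube start --vm-driver=virtualbox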

Note that we need to disable Hyper-V in order for Minikube to use VirtualBox, and a VirtualBox installation is required. Disabling the Hyper-V hypervisor can be done with the following command:

Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

Note that when you are using Minikube without a local Docker daemon (docker-for-windows), you need to instruct the Docker CLI to send its commands (such as docker ps) to the remote Docker daemon installed in the Minikube virtual machine rather than to the local one, as shown below.
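
The usual way to do this in PowerShell is to evaluate the output of minikube docker-env before running Docker commands (the exact invocation can vary slightly between Minikube versions):

& minikube docker-env | Invoke-Expression
docker ps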

Management

After successfully starting a Minikube cluster, you have created a context called "minikube", which is set as the default during startup. You can switch between contexts using the command:

kubectl config use-context minikube

Furthermore, to access the Kubernetes dashboard, you need to run the following command:
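
Minikube ships a shortcut that opens the dashboard in your default browser:

minikube dashboard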

Additional information on how to configure and manage Minikube Kubernetes clusters can be found in the documentation.

Deployment

Deploying an application is similar for both cases (Hyper-V or VirtualBox). For example, you can deploy, expose, and scale a service by using the expected Kubernetes commands:

kubectl run my-nginx --image=nginx --port=80 // deploy

kubectl expose deployment my-nginx --type=NodePort // for exposing the service

kubectl scale --replicas=3 deployment/my-nginx // scale the service to three replicas

You can navigate your Minikube cluster, either by visiting the Kubernetes dashboard or by using kubectl.

Conclusions

After looking at both solutions here are our results…

Minikube is a mature solution available for all major operating systems. Its main advantage is that it provides a unified way of working with a local Kubernetes cluster regardless of the operating system. It is perfect for people that are using multiple OS machines and have some basic familiarity with Kubernetes and Docker.

Pros:

  • Mature solution
  • Works on Windows (any version and edition), Mac, and Linux
  • Multiple drivers that can match any environment
  • Can work with or without an intermediate VM on Linux (--vm-driver=none)
  • Installs several plugins (such as dashboard) by default
  • Very flexible on installation requirements and upgrades

Cons:

  • Installation and removal not as streamlined as other solutions
  • Can conflict with local installation of other tools (such as Virtualbox)

Docker for Windows is a solution exclusively for Windows with some strict requirements. Its main advantage is the user installation/experience and easy switch between Windows and Linux Containers.

Pros:

  • Very easy installation for beginners
  • The best solution for running Windows containers
  • Integrated Docker and Kubernetes solution

Cons:

  • Requires Windows 10 Pro edition and Hyper-V
  • Cannot be used simultaneously with VirtualBox, Vagrant, etc.
  • Relatively new, possibly unstable
  • The sole solution for running Windows containers

Let us know in the comments, which local Kubernetes solution you are using and why.

Source

Why Is Securing Kubernetes so Difficult?

If you’re already familiar with Kubernetes, the question in the title will probably resonate deep within your very being. And if you’re only just getting started on your cloud native journey, and Kubernetes represents a looming mountain to conquer, you’ll quickly come to realise the pertinence of the question.

Security is hard at the best of times, but when your software applications are composed of a multitude of small, dynamic, scalable, distributed microservices running in containers, then it gets even harder. And it’s not just the ephemerality of the environment that ratchets up the difficulties, it’s also the adoption of new workflows and toolchains, all of which bring their own new security considerations.

Let’s dig a little deeper.

Skills

Firstly, Kubernetes, and some of the other tools that are used in the delivery pipeline of containerised microservices, are complex. They have a steep learning curve, are subject to an aggressive release cycle with frequent change, and require considerable effort to keep on top of all of the nuanced aspects of platform security. If the team members responsible for security don’t understand how the platform should be secured, or worse, nobody has been assigned the responsibility, then it’s conceivable that glaring security holes could exist in the platform. At the very best, this could prove embarrassing, and at worst, could have pernicious consequences.

Focus

In the quest to become innovators in their chosen market, or to be nimbler in response to external market forces (for example, customer demands or competitor activity), organizations new and old, small and large, are busy adopting DevOps practices. The focus is on the speed of delivery of new features and fixes, with the blurring of the traditional lines between development and operations. It’s great that we consider the operational aspects as we develop and define our integration, testing and delivery pipeline, but what about security? Security shouldn’t be an afterthought; it needs to be an integral part of the software development lifecycle, considered at every step in the process. Sadly, this is often not the case, but there is a growing recognition of the need for security to ‘shift left’, and to be accommodated in the delivery pipeline. The practice is coined DevSecOps or continuous security.

Complexity

We’ve already alluded to the fact that Kubernetes and its symbiotic tooling are complex in nature, and somewhat difficult to master. But it gets worse, because there are multiple layers in the Kubernetes stack, each of which has its own security considerations. It’s simply not enough to lock down one layer whilst ignoring the other layers that make up the stack. This would be a bit like locking the door whilst leaving the windows wide open.

Having to consider and implement multiple layers of security introduces more complexity. But it also has a beneficial side effect; it provides ‘defence in depth’, such that if one security mechanism is circumvented by a would-be attacker, another mechanism in the same or another layer can intervene and render the attack ineffective.

Securing All the Layers

What are the layers, then, that need to be secured in a Kubernetes platform?

First, there is an infrastructure layer, which comprises the machines and the networking connections between them. The machines may consist of physical or abstracted hardware components, and will run an operating system and (usually) the Docker Engine.

Second, there is a further infrastructure layer composed of the Kubernetes cluster components: the control plane components running on the master node(s), and the components that interact with container workloads running on the worker nodes.

The next layer deals with applying various security controls to Kubernetes, in order to control access to and from within the cluster, define policy for running container workloads, and for providing workload isolation.

Finally, a workload security layer deals with the security, provenance, and integrity of the container workloads themselves. This security layer should not only deal with the tools that help to manage the security of the workloads, but should also address how those tools are incorporated into the end-to-end workflow.

Some Common Themes

It’s useful to know that there are some common security themes that run through most of the layers that need our consideration. Recognizing them in advance, and taking a consistent approach in their application, can help significantly in implementing security policy.

  • Principle of Least Privilege – a commonly applied principle in wider IT security, its concern is with limiting the access users and application services have to available resources, such that the access provided is just sufficient to perform the assigned function. This helps to prevent privilege escalation; if a container workload is compromised, for example, and it’s been deployed with just enough privileges for it to perform its task, the fallout from the compromise is limited to the privileges assigned to the workload.
  • Software Currency – keeping software up to date is crucial in the quest to keep platforms secure. It goes without saying that security-related patches should be applied as soon as is practically possible, and other software components should be exercised thoroughly in a test environment before being applied to production services. Some care needs to be taken when deploying brand new major releases (e.g. 2.0), and as a general rule, it’s not wise to deploy alpha, beta, or release candidate versions to production environments. Interestingly, this doesn’t necessarily hold true for API versions associated with Kubernetes objects. The API for the commonly used Ingress object, for example, has been at version v1beta1 since Kubernetes v1.1.
  • Logging & Auditing – having the ability to check back to see what or who instigated a particular action or chain of actions, is extremely valuable in maintaining the security of a platform. The logging of audit events should be configured in all layers of the platform stack, including the auditing of Kubernetes cluster activity using audit policy. Audit logs should be collected and shipped to a central repository using a log shipper, such as Fluentd or Filebeat, where they can be stored for subsequent analysis using a tool such as Elasticsearch, or a public cloud equivalent.
  • Security vs. Productivity Trade Off – in some circumstances, security might be considered a hindrance to productivity; in particular, developer productivity. The risk associated with allowing the execution of privileged containers, for example, in a cluster dedicated to development activities, might be a palatable one, if it allows a development team to move at a faster pace. The trade off between security and productivity (and other factors) will be different for a development environment, a production environment, and even a playground environment used for learning and trying things out. What’s unacceptable in one environment, may be perfectly acceptable in another. The risk associated with relaxing security constraints should not simply be disregarded, however; it should be carefully calculated, and wherever possible, be mitigated. The use of privileged containers, for example, can be mitigated using Kubernetes-native security controls, such as RBAC and Pod Security Policies (see the sketch after this list).
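
As an illustration only (the policy name and the exact field set here are placeholders, not something prescribed by this series), a minimal PodSecurityPolicy that refuses privileged containers looks something like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false                # refuse privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim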

Series Outline

Configuring security on a Kubernetes platform is difficult, but not impossible! This is an introductory article in a series entitled Securing Kubernetes for Cloud Native Applications, which aims to lift the lid on aspects of security for a Kubernetes platform. We can’t cover every topic and every facet, but we’ll aim to provide a good overview of the security requirements in each layer, as well as some insights from our experience of running production-grade Kubernetes clusters for our customers.

The series of articles is comprised of the following:

  • Why Is Securing Kubernetes so Difficult? (this article)
  • Securing the Base Infrastructure of a Kubernetes Cluster
  • Securing the Configuration of Kubernetes Cluster Components
  • Applying Best Practice Security Controls to a Kubernetes Cluster
  • Managing the Security of Kubernetes Container Workloads

Source

Using Vagrant to Emulate Rancher Environments

I spend a large amount of my time helping clients implement Rancher
successfully. As Rancher is involved in just about every vertical, I
come across a large number of different infrastructure configurations,
including (but not limited to!) air-gapped, proxied, SSL, HA Rancher
Server, and non-HA Rancher Server.

Scenario & Criteria

What I wanted was a way to quickly emulate an environment to allow me to
more closely test or replicate an issue. Now this could be done in a
number of ways, but the solution I wanted had to meet the following
criteria:

  • Run locally — Yes, it can be done in the cloud, but running
    services in the cloud costs money, so I wanted to keep the cost
    down.
  • Be local host OS agnostic — So others can use it, I didn’t
    want to tie it to Windows, MacOS or Linux.
  • Emulate multiple infrastructure scenarios — As in the criteria
    above, but I must also be able to use it as a basis for doing local
    application development and testing of containers.
  • Minimise bandwidth
  • Be easy to configure — So that anyone can use it.

The Solution

To fulfill the first two criteria, I needed a hypervisor (preferably
free) that could run on all platforms locally. Having had a reasonable
amount of experience with
VirtualBox (and it meeting
my preferably free criterion), I decided to use it. The third criterion
was to allow me to emulate environments locally that had the following
configurations:

  • Air gap — Where there was no connection to the internet to
    install Rancher Server and nodes
  • Proxied — Where the Internet access was via a proxy server
  • SSL — Where the connection to the Rancher Server SSL
    terminated
  • HA — The ability to stand up Rancher Server in a HA
    configuration with an externalized database and a configurable
    number of nodes

To meet the fourth criterion to minimise bandwidth, I decided to run a
registry mirror. It allowed me to destroy the setup and start afresh
quickly as all the VMs had Docker engine set to pull via the mirror. Not
only does it save bandwidth, but it also significantly speeds up
rebuilds. Files for the mirror persist to the local disk to preserve
them between rebuilds. For the final criterion of easy configuration, I
decided that I was going to use Vagrant. For those reading who haven’t
used Vagrant:

  • It’s open-source software that helps you build and maintain
    portable, virtual software development environments.
  • It provides the same, easy workflow regardless of your role as a
    developer, operator, or designer. It leverages a declarative
    configuration file which describes all your software requirements,
    packages, operating system configuration, users, and more.
  • It works on Mac, Linux, Windows, and more. Remote development
    environments force users to give up their favorite editors and
    programs. Vagrant works with tools on your local system with which
    you’re already familiar.

After you download Vagrant, you can
specify that it consumes its config from a separate file. For this
setup, all configurable options are externalized into a config.yaml
file that is parsed at startup. This means that you can configure and
use it without having to have a deep, technical understanding of
Vagrant. I also added an NFS server to the solution, running on the
master node so that services could be tested with persistent storage.
So, what does the final solution look like?

The master node runs a bunch of supporting services like MySQL and
HAProxy to help all of this hang together. In true Docker style, all of
these supporting services are containerised!

Minimum Setup

To run this solution, the setup will create a minimum of three VMs.
There is a master, a minimum of one Rancher Server, and one node. Below
is an example of the config.yaml file with the main parts that you
will change highlighted:
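
The snippet below is purely illustrative: the key names are invented for this sketch, and the authoritative list of options lives in the repository linked underneath.

# config.yaml (illustrative sketch only)
rancher_version: v1.6.14      # Rancher Server release to run
server_count: 1               # 1 for non-HA, 3 for an HA setup
node_count: 3                 # number of worker nodes to create
node_memory: 2048             # RAM per node VM, in MB
network_mode: normal          # e.g. normal, airgap, proxied, ssl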

You can find more detail on the config options in our repository for
this setup.

Quick Start

For those of you wanting a Quick Start, you need the following software
installed:

  • VirtualBox
  • Vagrant
  • Git

Then, it is as simple as dropping to a command prompt and running the
following commands:

git clone https://github.com/rancher/vagrant
cd vagrant
vagrant up

Rancher Server will start running Cattle with three nodes. This Quick
Start also supports Rancher 2.0, so if
you want to check out the Tech Preview with the minimum of effort, run:

git clone https://github.com/rancher/vagrant
cd vagrant
git checkout 2.0
vagrant up

Thanks to my colleague James for helping me enhance this solution to
what it has become.

About the Author

Chris Urwin works
as a field engineer for Rancher Labs based out of the UK. He spends his
days helping our enterprise clients get the most out of Rancher, and his
nights wishing he had more hair on his head!

Source

Reducing Your AWS Spend with AutoSpotting and Rancher

Back in older times, B.C. as in Before Cloud, to put a service live you
had to:

  1. Spend months figuring out how much hardware you needed
  2. Wait at least eight weeks for your hardware to arrive
  3. Allow another four weeks for installation
  4. Then, configure firewall ports
  5. Finally, add servers to config management and provision them

All of this was in an organised company!

The Now

The new norm is to use hosted instances. You can scale these up and down
based on requirements and demand. Servers are available in a matter of
seconds. With containers, you no longer care about actual servers. You
only care about compute resource. Once you have an orchestrator like
Rancher, you don’t need to worry
about maintaining scale or setting where containers run, as Rancher
takes care of all of that. Rancher continuously monitors and assesses
the requirements that you set and does its best to ensure that
everything is running. Obviously, we need some compute resource, but it
can run for days or hours. The fact is, with containers, you pretty much
don’t need to worry.

Reducing Cost

So, how can we take advantage of the flexibility of containers to help
us reduce costs? There are a couple of things that you can do. Firstly
(and this goes for VMs as well as containers), do you need all your
environments running all the time? In a world where you own the kit and
there is no cost advantage to shutting down environments versus keeping
them running, this practice was accepted. But in the on-demand world,
there is a cost associated with keeping things running. If you only
utilise a development or testing environment for eight hours a day, then
you are paying four times as much by keeping it running 24 hours a day!
So, shutting down environments when you’re not using them is one way to
reduce costs. The second thing you can do (and the main reason behind
this post) is using Spot Instances.
Not heard of them? In a nutshell, they’re a way of getting cheap
compute resource in AWS. Interested in saving up to 80% of your AWS EC2
bill? Then keep reading. The challenge with Spot Instances is that they
can terminate after only two minutes’ notice. That causes problems for
traditional applications, but containers handle this more fluid nature
of applications with ease. Within AWS, you can directly request Spot
Instances, individually or in a fleet, and you set a maximum price for
the instance. Once you breach this price, AWS gives you two minutes and
then terminates the instance.

AutoSpotting

What if you could have an Auto Scaling Group (ASG) with an On-Demand
Instance type to which you could revert if you breached the Spot
Instance price? Step forward an awesome piece of open-source software
called AutoSpotting. You can find the source and more information on
GitHub. AutoSpotting works by
replacing On-Demand Instances from within an ASG with individual Spot
Instances. AutoSpotting takes a copy of the launch config of the ASG and
starts a Spot Instance (of equivalent spec or more powerful) with the
exact same launch config. Once this new instance is up and running,
AutoSpotting swaps out one of the On-Demand Instances in the ASG with
this new Spot Instance, in the process terminating the more expensive
On-Demand Instance. It will continue this process until it replaces all
instances. (There is a configuration option that allows you to specify
the percentage of the ASG that you want to replace. By default, it’s
100%.) AutoSpotting isn’t application aware. It will only start a
machine with an identical configuration. It doesn’t perform any
graceful halting of applications. It purely replaces a virtual instance
with a cheaper virtual instance. For these reasons, it works great for
Docker containers that are managed by an orchestrator like Rancher. When
a compute instance disappears, then Rancher takes care of maintaining
the scale. To facilitate a smoother termination, I’ve created a helper
service, AWS-Spot-Instance-Helper, that monitors to see if a host is
terminating. If it is, then the helper uses the Rancher evacuate
function to more gracefully transition running containers from the
terminating host. This helper isn’t tied to AutoSpotting, and anyone
who is using Spot Instances or fleets with Rancher can use it to allow
for more graceful terminations. Want an example of what it does to the
cost of running an environment?

Can you guess which day I implemented it? OK, so I said up to 80%
savings but, in this environment, we didn’t replace all instances at the
point when I took this measurement. So, why are we blogging about it
now? Simple: We’ve taken it and turned it into a Rancher Catalog
application so that all our Rancher AWS users can easily consume it.

3 Simple Steps to Saving Money

Step 1

Go to the Catalog > Community and select AutoSpotting.

Step 2

Fill in the AWS Access Key and Secret Key. (These are the only
mandatory fields.) The user must have the following AWS permissions:

  • autoscaling:DescribeAutoScalingGroups
  • autoscaling:DescribeLaunchConfigurations
  • autoscaling:AttachInstances
  • autoscaling:DetachInstances
  • autoscaling:DescribeTags
  • autoscaling:UpdateAutoScalingGroup
  • ec2:CreateTags
  • ec2:DescribeInstances
  • ec2:DescribeRegions
  • ec2:DescribeSpotInstanceRequests
  • ec2:DescribeSpotPriceHistory
  • ec2:RequestSpotInstances
  • ec2:TerminateInstances
Optionally, set the Tag Name. By default, it will look for
spot-enabled. I’ve slightly modified the original code to allow the
flexibility of running multiple AutoSpotting containers in an
environment. This modification allows you to use multiple policies in
the same AWS account. Then, click Launch.

Step 3

Add the tag (user-specified or spot-enabled, with a value of
true) to any AWS ASGs on which you want to save money. Cheap (and
often more powerful) Spot Instances will gradually replace your
instances. To deploy the AWS-Spot-Instance-Helper service, simply
browse to the Catalog > Community and launch the application.
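
If you prefer the AWS CLI to the console for tagging, the equivalent command looks like this (my-asg is a placeholder for your Auto Scaling Group name):

aws autoscaling create-or-update-tags --tags "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=spot-enabled,Value=true,PropagateAtLaunch=true"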

Thanks goes out to Cristian Măgherușan-Stanciu and other contributors
for writing such a great piece of open-source software.

About the Author

Chris Urwin works
as a field engineer for Rancher Labs based out of the UK, helping our
enterprise clients get the most out of Rancher.

Source

Running Highly Available WordPress with MySQL on Kubernetes

WordPress is a popular platform for editing and publishing content for
the web. In this tutorial, I’m going to walk you through how to build
out a highly available (HA) WordPress deployment using Kubernetes.
WordPress consists of two major components: the WordPress PHP server,
and a database to store user information, posts, and site data. We need
to make both of these HA for the entire application to be fault
tolerant. Running HA services can be difficult when hardware and
addresses are changing; keeping up is tough. With Kubernetes and its
powerful networking components, we can deploy an HA WordPress site and
MySQL database without typing a single IP address (almost). In this
tutorial, I’ll be showing you how to create storage classes, services,
configuration maps, and sets in Kubernetes; run HA MySQL; and hook up an
HA WordPress cluster to the database service. If you don’t already have
a Kubernetes cluster, you can spin one up easily on Amazon, Google, or
Azure, or by using Rancher Kubernetes Engine
(RKE)
on any servers.

Architecture Overview

I’ll now present an overview of the technologies we’ll use and their
functions:

  • Storage for WordPress Application Files: NFS with a GCE Persistent
    Disk Backing
  • Database Cluster: MySQL with xtrabackup for parity
  • Application Level: A WordPress DockerHub image mounted to NFS
    Storage
  • Load Balancing and Networking: Kubernetes-based load balancers and
    service networking

The architecture is organized as shown below:

Diagram

Creating Storage Classes, Services, and Configuration Maps in Kubernetes

In Kubernetes, stateful sets offer a way to define the order of pod
initialization. We’ll use a stateful set for MySQL, because it ensures
our data nodes have enough time to replicate records from previous pods
when spinning up. The way we configure this stateful set will allow the
MySQL master to spin up before any of the slaves, so cloning can happen
directly from master to slave when we scale up. To start, we’ll need to
create a persistent volume storage class and a configuration map to
apply master and slave configurations as needed. We’re using persistent
volumes so that the data in our databases aren’t tied to any specific
pods in the cluster. This method protects the database from data loss in
the event of a loss of the MySQL master pod. When a master pod is lost,
it can reconnect to the xtrabackup slaves on the slave nodes and
replicate data from slave to master. MySQL’s replication handles
master-to-slave replication but xtrabackup handles slave-to-master
backward replication. To dynamically allocate persistent volumes, we
create the following storage class utilizing GCE Persistent Disks.
However, Kubernetes offers a variety of persistent volume storage
providers:

# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

Create the class and deploy with this
command: $ kubectl create -f storage-class.yaml. Next, we’ll create
the configmap, which specifies a few variables to set in the MySQL
configuration files. These different configurations are selected by the
pods themselves, but they give us a handy way to manage potential
configuration variables. Create a YAML file named mysql-configmap.yaml
to handle this configuration as follows:

# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    skip-host-cache
    skip-name-resolve
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    skip-host-cache
    skip-name-resolve

Create the configmap and deploy with this
command: $ kubectl create -f mysql-configmap.yaml. Next, we want to
set up the service such that MySQL pods can talk to one another and our
WordPress pods can talk to MySQL, using mysql-services.yaml. This also
enables a service load balancer for the MySQL service.

# mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql

With this service declaration, we lay the groundwork to have a multiple
write, multiple read cluster of MySQL instances. This configuration is
necessary because each WordPress instance can potentially write to the
database, so each node must be ready to read and write. To create the
services above, execute the following command:
$ kubectl create -f mysql-services.yaml At this point, we’ve created
the volume claim storage class which will hand persistent disks to all
pods that request them, we’ve configured the configmap that sets a few
variables in the MySQL configuration files, and we’ve configured a
network-level service that will load balance requests to the MySQL
servers. This is all just framework for the stateful sets, where the
MySQL servers actually operate, which we’ll explore next.

Configuring MySQL with Stateful Sets

In this section, we’ll be writing the YAML configuration for a MySQL
instance using a stateful set. Let’s define our stateful set:

  • Create three pods and register them to the MySQL service.
  • Define the following template for each pod:
  • Create an initialization container for the master MySQL server
    named init-mysql.

    • Use the mysql:5.7 image for this container.
    • Run a bash script to set up xtrabackup.
    • Mount two new volumes for the configuration and configmap.
  • Create an initialization container for the master MySQL server
    named clone-mysql.

    • Use the Google Cloud Registry’s xtrabackup:1.0 image for this
      container.
    • Run a bash script to clone existing xtrabackups from the
      previous peer.
    • Mount two new volumes for data and configuration.
    • This container effectively hosts the cloned data so the new
      slave containers can pick it up.
  • Create the primary containers for the slave MySQL servers.
    • Create a MySQL slave container and configure it to connect to
      the MySQL master.
    • Create a xtrabackup slave container and configure it to
      connect to the xtrabackup master.
  • Create a volume claim template to describe each volume to be created
    as a 10GB persistent disk.

The following configuration defines behavior for masters and slaves of
our MySQL cluster, offering a bash configuration that runs the slave
client and ensures proper operation of a master before cloning. Slaves
and masters each get their own 10GB volume which they request from the
persistent volume storage class we defined earlier.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi

Save this file as mysql-statefulset.yaml.
Type kubectl create -f mysql-statefulset.yaml and let Kubernetes
deploy your database. Now, when you call $ kubectl get pods, you
should see three pods spinning up or ready that each have two containers
on them. The master pod is denoted as mysql-0 and the slaves follow
as mysql-1 and mysql-2. Give the pods a few minutes to make sure
the xtrabackup service is synced properly between pods, then move on
to the WordPress deployment. You can check the logs of the individual
containers to confirm that there are no error messages being thrown. To
do this, run $ kubectl logs -f <pod_name> -c <container_name>. The master
xtrabackup container should show the two connections from the slaves,
and no errors should be visible in the logs.
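
As an optional sanity check (not part of the original walkthrough), you can run a throwaway MySQL client pod and ask the master which slaves have registered with it; the empty root password relies on the MYSQL_ALLOW_EMPTY_PASSWORD setting above:

kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- mysql -h mysql-0.mysql -e "SHOW SLAVE HOSTS;"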

Deploying Highly Available WordPress

The final step in this procedure is to deploy our WordPress pods onto
the cluster. To do this, we want to define a service for WordPress and a
deployment. For WordPress to be HA, we want every container running the
server to be fully replaceable, meaning we can terminate one and spin up
another with no change to data or service availability. We also want to
tolerate at least one failed container, having a redundant container
there to pick up the slack. WordPress stores important site-relevant
data in the application directory /var/www/html. For two instances of
WordPress to serve the same site, that folder has to contain identical
data. When running WordPress in HA, we need to share
the /var/www/html folders between instances, so we’ll define an NFS
service that will be the mount point for these volumes. The following
configuration sets up the NFS services. I’ve provided the plain English
version below:

  • Define a persistent volume claim to create our shared NFS disk as a
    GCE persistent disk at size 200GB.
  • Define a replication controller for the NFS server which will ensure
    at least one instance of the NFS server is running at all times.
  • Open ports 2049, 20048, and 111 in the container to make the NFS
    share accessible.
  • Use the Google Cloud Registry’s volume-nfs:0.8 image for the NFS
    server.
  • Define a service for the NFS server to handle IP address routing.
  • Allow necessary ports through that service firewall.

# nfs.yaml
# Define the persistent volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
  labels:
    demo: nfs
  annotations:
    volume.alpha.kubernetes.io/storage-class: any
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
---
# Define the Replication Controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: nfs-pvc
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nfs
---
# Define the Service
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server

Deploy the NFS server using $ kubectl create -f nfs.yaml. Now, we need
to run $ kubectl describe services nfs-server to gain the IP address
to use below. Note: In the future, we’ll be able to tie these
together using the service names, but for now, you have to hardcode the
IP address.

# wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20G
  accessModes:
  - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: <IP of the NFS Service>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20G
---
apiVersion: apps/v1beta1 # for versions 1.8.0 and later you can use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.9-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: nfs

We’ve now created a persistent volume claim that maps to the NFS
service we created earlier. It then attaches the volume to the WordPress
pod at the /var/www/html root, where WordPress is installed. This
preserves all installation and environments across WordPress pods in the
cluster. With this configuration, we can spin up and tear down any
WordPress node and the data will remain. Because the NFS service is
constantly using the physical volume, it will retain the volume and
won’t recycle or misallocate it. Deploy the WordPress instances
using $ kubectl create -f wordpress.yaml. The default deployment only
runs a single instance of WordPress, so feel free to scale up the number
of WordPress instances
using $ kubectl scale --replicas=<number of replicas> deployment/wordpress.
To obtain the address of the WordPress service load balancer,
type $ kubectl get services wordpress and grab the EXTERNAL-IP field
from the result to navigate to WordPress.

Resilience Testing

OK, now that we’ve deployed our services, let’s start tearing them
down to see how well our HA architecture handles some chaos. In this
approach, the only single point of failure left is the NFS service (for
reasons explained in the Conclusion). You should be able to demonstrate
testing the failure of any other services to see how the application
responds. I’ve started with three replicas of the WordPress service and
the one master and two slaves on the MySQL service. First, let’s kill
all but one WordPress node and see how the application reacts:
$ kubectl scale --replicas=1 deployment/wordpress Now, we should see a
drop in pod count for the WordPress deployment. $ kubectl get pods We
should see that the WordPress pods are running only 1/1 now. When
hitting the WordPress service IP, we’ll see the same site and same
database as before. To scale back up, we can
use $ kubectl scale --replicas=3 deployment/wordpress. We’ll again
see that data is preserved across all three instances. To test the MySQL
StatefulSet, we can scale down the number of replicas using the
following: $ kubectl scale statefulsets mysql --replicas=1 We’ll see
a loss of both slaves in this instance and, in the event of a loss of
the master in this moment, the data it has will be preserved on the GCE
Persistent Disk. However, we’ll have to manually recover the data from
the disk. If all three MySQL nodes go down, you’ll not be able to
replicate when new nodes come up. However, if a master node goes down, a
new master will be spun up and via xtrabackup, it will repopulate with
the data from a slave. Therefore, I don’t recommend ever running with a
replication factor of less than three when running production databases.
To conclude, let’s talk about some better solutions for your stateful
data, as Kubernetes isn’t really designed for state.

Conclusions and Caveats

You’ve now built and deployed an HA WordPress and MySQL installation on
Kubernetes! Despite this great achievement, your journey may be far from
over. If you haven’t noticed, our installation still has a single point
of failure: the NFS server sharing the /var/www/html directory between
WordPress pods. This service represents a single point of failure
because without it running, the html folder disappears on the pods
using it. The image I’ve selected for the server is incredibly stable
and production ready, but for a true production deployment, you may
consider
using GlusterFS to
enable multi-read multi-write to the directory shared by WordPress
instances. This process involves running a distributed storage cluster
on Kubernetes, which isn’t really what Kubernetes is built for, so
despite it working, it isn’t a great option for long-term
deployments. For the database, I’d personally recommend using a managed
Relational Database service to host the MySQL instance, be it Google’s
CloudSQL or AWS’s RDS, as they provide HA and redundancy at a more
sensible price and keep you from worrying about data integrity.
Kubernetes isn’t really designed around stateful applications and any
state built into it is more of an afterthought. Plenty of solutions
exist that offer much more of the assurances one would look for when
picking a database service. That being said, the configuration presented
above is a labor of love, a hodgepodge of Kubernetes tutorials and
examples found across the web to create a cohesive, realistic use case
for Kubernetes and all the new features in Kubernetes 1.8.x. I hope your
experiences deploying WordPress and MySQL using the guide I’ve prepared
for you are a bit less exciting than the ones I had ironing out bugs in
the configurations, and of course, I wish you eternal uptime. That’s
all for now. Tune in next time when I teach you to drive a boat using
only a Myo gesture band and a cluster of Linode instances running Tails
Linux.

About the Author

Eric Volpert is a
student at the University of Chicago and works as an evangelist, growth
hacker, and writer for Rancher Labs. He enjoys any engineering
challenge. He’s spent the last three summers as an internal tools
engineer at Bloomberg and a year building DNS services for the Secure
Domain Foundation with CrowdStrike. Eric enjoys many forms of music
ranging from EDM to High Baroque, playing MOBAs and other action-packed
games on his PC, and late-night hacking sessions, duct taping APIs
together so he can make coffee with a voice command.

Source

Welcome to the Era of Immutable Infrastructure

With the recent “container revolution,” a seemingly new idea became
popular: immutable infrastructure. In fact, it wasn’t particularly new,
nor did it specifically require containers. However, it was through
containers that it became more practical, understandable, and got the
attention of many in the industry. So, what is immutable
infrastructure? I’ll attempt to define it as the practice of making
infrastructure changes only in production by replacing components
instead of modifying them. More specifically, it means once we deploy a
component, we don’t modify (mutate) it. This doesn’t mean the component
(once deployed) is without any change in state; otherwise, it wouldn’t
be a very functional software component. But, it does mean that as the
operator we don’t introduce any change outside of the program’s
original API/design. Take for example this not too uncommon scenario.
Say our application uses a configuration file that we want to change. In
the dynamic infrastructure world, we might have used some scripting or a
configuration management tool to make this change. It would make a
network call to the server in question (or more likely many of them),
and execute some code to modify the file. It might also have some way of
knowing about the dependencies of that file that might need to be
altered as a result of this change (say a program needing a restart).
These relationships could become complex over time, which is why many CM
tools came up with a resource dependency model that helps to manage
them. The trade-offs between the two approaches are pretty simple.
Dynamic infrastructure is a lot more efficient with resources such as
network and disk IO. Because of this efficiency, it’s traditionally
faster than immutable because it doesn’t require pushing as many bits
or storing as many versions of a component. Back to our example of
changing a file. You could traditionally change a single file much
faster than you could replace the entire server. Immutable
infrastructure, on the other hand, offers stronger guarantees about the
outcome. Immutable components can be prebuilt before deployment, built
once, and then reused, unlike dynamic infrastructure, which has logic
that needs to be evaluated in each instance. That evaluation leaves room
for surprises, as parts of your environment might be in a
different state than you expect, causing errors in your deployment.
It’s also possible that you simply make a mistake in your configuration
management code, but you aren’t able to sufficiently replicate
production locally to test that outcome and catch the mistake. After
all, these configuration management languages themselves are complex. In
an article from ACM Queue, an Association for
Computing Machinery (ACM) magazine, engineers at Google articulated this
challenge well:

“The result is the kind of inscrutable ‘configuration is code’ that
people were trying to avoid by eliminating hard-coded parameters in
the application’s source code. It doesn’t reduce operational
complexity or make the configurations easier to debug or change; it
just moves the computations from a real programming language to a
domain-specific one, which typically has weaker development tools
(e.g., debuggers, unit test frameworks, etc).”

Trade-offs of efficiency have long been central to computer engineering.
However, the economics (both technological and financial) of these
decisions change over time. In the early days of programming, for
instance, developers were taught to use short variable names to save a
few bytes of precious memory at the expense of readability. Dynamic
linking libraries were developed to solve the space limitation of early
hard disk drives so that programs could share common C libraries instead
of each requiring their own copies. Both these things changed in the
last decade due to changes in the power of computer systems where now a
developer’s time is far more expensive than the bytes we save from
shortening our variables. New languages like Golang and Rust have even
brought back the statically compiled binary, because dealing with
platform compatibility headaches caused by the wrong DLL is no longer
worth it. Infrastructure management is at a similar crossroads. Not only has
the public cloud and virtualization made replacing a server (virtual
machine) orders of magnitude faster, but tools like Docker have created
easy to use tooling to work with pre-built server runtimes and efficient
resource usage with layer caching and compression. These features have
made immutable infrastructure practical because they are so lightweight
and frictionless. Kubernetes arrived on the scene not long after Docker
and took the torch further towards this goal, creating an API of “cloud
native” primitives that assume and encourage an immutable philosophy.
For instance, the ReplicaSet assumes that at any time in the lifecycle
of our application we can (and might need to) redeploy our application.
And, to balance this out, Pod Disruption Budgets tell Kubernetes how the
application will tolerate being redeployed. This confluence of
advancement has brought us to the era of immutable infrastructure. And
it’s only going to increase as more companies participate. Today’s
tools
have made it easier than ever to
embrace these patterns. So, what are you waiting for?

About the Author

William Jimenez
is a curious solutions architect at Rancher Labs in Cupertino, CA, who
enjoys solving problems with computers, software, and just about any
complex system he can get his hands on. He enjoys helping others make
sense of difficult problems. In his free time, he likes to tinker with
amateur radio, cycle on the open road, and spend time with his family
(so they don’t think he forgot about them).

Source

Using Kubernetes API from Go

Last month I had the great pleasure of attending KubeCon 2017, which took place in Austin, TX. The conference was super informative, and deciding which sessions to join was really hard, as all of them were great. But what deserves special recognition is how well the organizers respected the attendees' diversity of Kubernetes experience. Support is especially important if you are new to the project and need advice (and sometimes encouragement) to get started. The Kubernetes 101 track sessions were a good way to get more familiar with the concepts, tools, and the community. I was very excited to be a speaker on the 101 track, and this blog post is a recap of my session, Using Kubernetes APIs from Go.

In this article we are going to learn what makes Kubernetes a great platform for developers, and cover the basics of writing a custom controller for Kubernetes in the Go language using the client-go library.

Kubernetes is a platform

There are many reasons to like Kubernetes. As a user, you appreciate its feature richness, stability, and performance. As a contributor, you benefit from an open source community that is not only large, but approachable and responsive. What really makes Kubernetes appealing to a third-party developer, though, is its extensibility. The project provides many ways to add new features and extend existing ones without disrupting the main code base, and that is what makes Kubernetes a platform. Nearly every Kubernetes cluster component can be extended in some way, whether it is the Kubelet or the API server. Today we are going to focus on the "custom controller" approach; I'll refer to it as a Kubernetes controller, or simply a controller, from now on.

What exactly is Kubernetes Controller?

The most common definition of a controller is "code that brings the current state of the system to the desired state." But what exactly does that mean? Let's look at the ingress controller as an example. Ingress is a Kubernetes resource that lets you define external access to the services in a cluster, typically over HTTP and usually with load balancing support. However, the Kubernetes core code has no ingress implementation; the implementation is covered by third-party controllers that:

  • Watch ingress/services/endpoints resource events (Create/Update/Remove)
  • Program internal or external Load Balancer
  • Update Ingress with the Load Balancer address

The "desired" state of the ingress is an IP address pointing to a functioning load balancer programmed with the rules the user defined in the Ingress specification, and an external ingress controller is responsible for bringing the ingress resource to this state. The implementation of a controller for the same resource, as well as the way it is deployed, can vary. You can pick the nginx controller and deploy it on every node in your cluster as a DaemonSet, or you can choose to run your ingress controller outside of the Kubernetes cluster and program F5 as the load balancer. There are no strict rules; Kubernetes is flexible in that way.

Client-go

There are several ways to get information about a Kubernetes cluster and its resources: the Dashboard, kubectl, or programmatic access to the Kubernetes APIs. Client-go is the most popular library used by tools written in Go, and there are clients for many other languages as well (Java, Python, etc.). If you want to write your very first controller, though, I encourage you to try Go and client-go: Kubernetes is written in Go, and I find it easier to develop a plugin in the same language the main project is written in.

Let's build…

The best way to get familiar with a platform and the tools around it is to write something. Let's start simple and implement a controller that:

  • Monitors Kubernetes nodes
  • Alerts when the storage occupied by images on a node changes

The source code can be found here.

Ground work

Setup the project

As a developer, I like to sneak a peek at the tools my peers use to make their lives easier. Here I'm going to share three favorite tools of mine that are going to help us with our very first project.

  1. go-skel – a skeleton generator for Go microservices. Just run ./skel.sh test123, and it will create the skeleton for a new Go project named test123.
  2. trash – a Go vendoring tool. There are many Go dependency management tools out there, but trash has proven to be simple to use and great when it comes to managing transitive dependencies.
  3. dapper – a tool to wrap any existing build tool in a consistent environment.

Add client-go as a dependency

In order to use client-go, we have to pull it into our project as a dependency. Add it to vendor.conf and run trash; it will automatically pull all the dependencies defined in vendor.conf into the project's vendor folder. Make sure the client-go version is compatible with the Kubernetes version of your cluster.

Create a client

Before creating a client that is going to talk to the Kubernetes API, we have to decide how we want to run our tool: inside or outside the Kubernetes cluster. When run inside the cluster, your application is containerized and gets deployed as a Kubernetes Pod. That gives you certain perks: you can choose how to deploy it (a DaemonSet to run on every node, or a Deployment with n replicas), configure health checks for it, and so on. When your application runs outside of the cluster, you have to manage it yourself. Let's make our tool flexible and support both ways of creating the client, selected by a config flag. We'll use out-of-cluster mode while debugging the app, since this way you do not have to rebuild the image and redeploy it as a Kubernetes Pod after every change. Once the app is tested, we can build an image and deploy it in the cluster. In both cases, a config is built and passed to kubernetes.NewForConfig to generate the client, as sketched below.
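
The original post shows this step as a screenshot rather than inline code, so here is a minimal, hedged sketch of what the client construction could look like. The -incluster and -kubeconfig flag names are assumptions for illustration, and the snippet targets a recent client-go release; the real project may differ.

go
package main

import (
	"flag"
	"log"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// newClientset builds a clientset either from the in-cluster service account
// (when the tool runs as a Pod) or from a local kubeconfig (when it runs
// outside the cluster), selected by a flag.
func newClientset(inCluster bool, kubeconfig string) (*kubernetes.Clientset, error) {
	var (
		config *rest.Config
		err    error
	)
	if inCluster {
		// Uses the service account token mounted into the Pod.
		config, err = rest.InClusterConfig()
	} else {
		// Uses a local kubeconfig, e.g. ~/.kube/config.
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
	}
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}

func main() {
	inCluster := flag.Bool("incluster", false, "run with the in-cluster config")
	kubeconfig := flag.String("kubeconfig",
		filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	clientset, err := newClientset(*inCluster, *kubeconfig)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = clientset // used by the node-monitoring code in the next sections
}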

Play with basic CRUDs

For our tool, we need to monitor Nodes. Before implementing the logic, it is a good idea to get familiar with how to do basic CRUD operations using client-go. The examples below show how to:

  • List nodes named "minikube", which can be achieved by passing a FieldSelector filter to the call
  • Update the node with a new annotation
  • Delete the node with a grace period of 10 seconds, meaning the removal will happen only 10 seconds after the command is issued

All of that is done using the clientset we created in the previous step. We also need information about the images on the node, which can be retrieved by accessing the corresponding field of the node status. A sketch of these operations follows:
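
The post shows these calls as a screenshot; the following is a rough sketch of the same operations, meant to sit in the same package as the client sketch above. It assumes a recent client-go where the list, update, and delete calls take a context (the release current when this was written omitted it), and the annotation key is purely illustrative.

go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeCRUDDemo lists the "minikube" node, updates it with an annotation,
// sums the storage occupied by its cached images, and finally deletes it
// with a 10 second grace period.
func nodeCRUDDemo(ctx context.Context, clientset *kubernetes.Clientset) error {
	// List only the node named "minikube" via a field selector.
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=minikube",
	})
	if err != nil || len(nodes.Items) == 0 {
		return err
	}
	node := nodes.Items[0]

	// Update the node with a new annotation (the key is just an example).
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	node.Annotations["example.com/image-monitor"] = "true"
	if _, err := clientset.CoreV1().Nodes().Update(ctx, &node, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Image data lives in the node status; sum the size of all cached images.
	var total int64
	for _, img := range node.Status.Images {
		total += img.SizeBytes
	}
	fmt.Printf("images on %s occupy %d bytes\n", node.Name, total)

	// Delete the node with a 10 second grace period.
	grace := int64(10)
	return clientset.CoreV1().Nodes().Delete(ctx, node.Name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
}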

Watch/Notify using Informer

Now we know how to fetch nodes from the Kubernetes API and get image information from them. How do we monitor changes to the images' size? The simplest way would be to periodically poll the nodes, calculate the current image storage usage, and compare it with the result from the previous poll. The downside is that we execute a list call to fetch all the nodes whether they changed or not, and that can be expensive, especially if your poll interval is small. What we really want is to be notified when a node changes, and only then run our logic. That's where the client-go Informer comes to the rescue.

In this example, we create an Informer for the Node object by passing a watchList instruction on how to monitor Nodes, the object type api.Node, and a resync period of 30 seconds, instructing it to periodically re-deliver the node even when there were no changes, a nice fallback in case an update event gets dropped for some reason. As the last argument, we pass two callback functions, handleNodeAdd and handleNodeUpdate. Those callbacks hold the actual logic that has to be triggered on a node's changes: finding out whether the storage occupied by images on the node has changed. NewInformer gives back two objects, a controller and a store. Once the controller is started, the watch on node.update and node.add begins, and the callback functions get called. The store is an in-memory cache that the informer keeps updated, and you can fetch node objects from the cache instead of calling the Kubernetes API directly. As we have a single controller in our project, using the regular Informer is good enough. But if your future project ends up having several controllers for the same object, using a SharedInformer is recommended: instead of creating multiple regular informers, one per controller, you register one shared informer, let each controller register its own set of callbacks, and get back a shared cache, which reduces the memory footprint. A sketch of the informer wiring is below:
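
The informer wiring is also shown only as a screenshot in the original, so here is a minimal sketch built around cache.NewInformer, again as part of the same package as the earlier sketches. It assumes a recent client-go, where the article's api.Node corresponds to k8s.io/api/core/v1.Node; handleNodeAdd and handleNodeUpdate match the callback names used in the text, while imagesSize is a helper added here for illustration.

go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startNodeInformer watches Node add/update events and reports when the
// storage occupied by images on a node changes. It returns the informer's
// in-memory cache, which can be read instead of calling the API directly.
func startNodeInformer(clientset *kubernetes.Clientset, stopCh <-chan struct{}) cache.Store {
	watchList := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), "nodes", "", fields.Everything())

	store, controller := cache.NewInformer(
		watchList,
		&v1.Node{},
		30*time.Second, // resync period: re-deliver nodes even without changes
		cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				handleNodeAdd(obj.(*v1.Node))
			},
			UpdateFunc: func(oldObj, newObj interface{}) {
				handleNodeUpdate(oldObj.(*v1.Node), newObj.(*v1.Node))
			},
		},
	)
	go controller.Run(stopCh)
	return store
}

// imagesSize sums the bytes occupied by images cached on the node.
func imagesSize(node *v1.Node) int64 {
	var total int64
	for _, img := range node.Status.Images {
		total += img.SizeBytes
	}
	return total
}

func handleNodeAdd(node *v1.Node) {
	fmt.Printf("node %s added, images occupy %d bytes\n", node.Name, imagesSize(node))
}

func handleNodeUpdate(old, current *v1.Node) {
	if before, after := imagesSize(old), imagesSize(current); before != after {
		fmt.Printf("node %s image storage changed: %d -> %d bytes\n", current.Name, before, after)
	}
}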

Deployment time

Now it is time to deploy and test the code! For the first run, we simply build a Go binary and run it in out-of-cluster mode. To change the message output, deploy a pod using an image that is not yet present on the node. Once the basic functionality is tested, it is time to try running it in in-cluster mode. For that, we have to create the image first: define a Dockerfile and build an image with docker build. That gives you an image you can use to deploy the pod in Kubernetes, after which your application runs as a Pod in the Kubernetes cluster using an ordinary deployment definition. So we have:

  • Created a Go project
  • Added the client-go package dependencies to it
  • Created a client to talk to the Kubernetes API
  • Defined an Informer that watches node object changes and executes callback functions when they happen
  • Implemented the actual logic in the callback definitions
  • Tested the code by running the binary outside of the cluster, and then deployed it inside the cluster

If you have any comments or questions on the topic, please feel free to share them with me! Alena Prokharchyk, twitter: @lemonjet, github: https://github.com/alena1108

Alena Prokharchyk

Software Engineer

Source

CICD Debates: Drone vs Jenkins

Introduction

Jenkins has been the industry-standard CI tool for years. It contains a multitude of functionalities and almost 1,000 plugins in its ecosystem, which can be daunting to those who appreciate simplicity. Jenkins also came up in a world before containers, though it does fit nicely into that environment. This means there is no particular focus on the things that make containers great, though with the inclusion of Blue Ocean and pipelines, that is rapidly changing.

Drone is an open source CI tool that wears simple like a badge of honor. It is truly Docker native; meaning that all actions take place within containers. This makes it a perfect fit for a platform like Kubernetes, where launching containers is an easy task.

Both of these tools walk hand in hand with Rancher, which makes standing up a robust Kubernetes cluster an automatic process. I've used Rancher 1.6 to deploy a K8s 1.8 cluster on GCE, and it was as simple as can be.

This article will take Drone deployed on Kubernetes (on Rancher), and compare it to Jenkins across three categories:

  • Platform installation and management
  • Plugin ecosystem
  • Pipeline details

In the end, I'll stack them up side by side and try to give a recommendation. As is usually the case, however, there may not be a clear winner. Each tool has its core focus, though by nature there will be overlap.

Prereqs

Before getting started, we need to do a bit of setup. This involves setting up Drone as an authorized OAuth2 app with a Github account. You can see the settings I've used here. All of this is covered in the Drone documentation.

There is one gotcha which I encountered setting up Drone. Drone maintains a passive relationship with the source control repository. In this case, this means that it sets up a webhook with Github for notification of events. The default behavior is to build on push and PR merge events. In order for Github to properly notify Drone, the server must be accessible to the world. With other, on-prem SCMs, this would not be the case, but for the example described here it is. I’ve set up my Rancher server on GCE, so that it is reachable from Github.com.

Drone installs from a container through a set of deployment files, just like any other Kubernetes app. I’ve adapted the deployment files found in this repo. Within the config map spec file, there are several values we need to change. Namely, we need to set the Github-related values to ones specific to our account. We’ll take the client secret and client key from the setup steps and place them into this file, as well as the username of the authorized user. Within the drone-secret file, we can place our Github password in the appropriate slot.

This is a major departure from the way Jenkins interacts with source code. In Jenkins, each job can define its relationship with source control independent of another job. This allows you to pull source from a variety of different repositories, including Github, Gitlab, svn, and others. As of now, Drone only supports git-based repos. A full list is available in the documentation, but all of the most popular choices for git-based development are supported.

We also can't forget our Kubernetes cluster! Rancher makes it incredibly easy to launch and manage a cluster. I've chosen to use the latest stable version of Rancher, 1.6. We could have used the new Rancher 2.0 tech preview, but constructing this guide worked best with the stable version. However, the information and installation steps should be the same, so if you'd like to try it out with a newer Rancher, go ahead!

Task 1 – Installation and Management

Launching Drone on Kubernetes and Rancher is as simple as copy and paste. I used the default K8s dashboard to launch the files; uploading them one by one, starting with the namespace and config files, will get the ball rolling. Here are some of the deployment files I used. I pulled from this repository and made my own local edits. The repo is owned by a frequent Drone contributor and includes instructions on how to launch on GCE as well as AWS. The Kubernetes yaml files are the only things we need here. To replicate, just edit the ConfigMap file with your specific values. Check out one of my files below.

yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drone-server
  namespace: drone
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drone-server
    spec:
      containers:
      - image: drone/drone:0.8
        imagePullPolicy: Always
        name: drone-server
        ports:
        - containerPort: 8000
          protocol: TCP
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        # Persist our configs in an SQLite DB in here
        - name: drone-server-sqlite-db
          mountPath: /var/lib/drone
        resources:
          requests:
            cpu: 40m
            memory: 32Mi
        env:
        - name: DRONE_HOST
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.host
        - name: DRONE_OPEN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.open
        - name: DRONE_DATABASE_DRIVER
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.driver
        - name: DRONE_DATABASE_DATASOURCE
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.database.datasource
        - name: DRONE_SECRET
          valueFrom:
            secretKeyRef:
              name: drone-secrets
              key: server.secret
        - name: DRONE_ADMIN
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.admin
        - name: DRONE_GITHUB
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github
        - name: DRONE_GITHUB_CLIENT
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.client
        - name: DRONE_GITHUB_SECRET
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.remote.github.secret
        - name: DRONE_DEBUG
          valueFrom:
            configMapKeyRef:
              name: drone-config
              key: server.debug
      volumes:
      - name: drone-server-sqlite-db
        hostPath:
          path: /var/lib/k8s/drone
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock

Jenkins can be launched in much the same way. Because it is deployable in a Docker container, you can construct a similar deployment file and launch on Kubernetes. Here’s an example below. This file was taken from the GCE examples repo for the Jenkins CI server.

yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: master
    spec:
      containers:
      - name: master
        image: jenkins/jenkins:2.67
        ports:
        - containerPort: 8080
        - containerPort: 50000
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 2
          failureThreshold: 5
        env:
        - name: JENKINS_OPTS
          valueFrom:
            secretKeyRef:
              name: jenkins
              key: options
        - name: JAVA_OPTS
          value: '-Xmx1400m'
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins-home
        resources:
          limits:
            cpu: 500m
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 1500Mi
      volumes:
      - name: jenkins-home
        gcePersistentDisk:
          pdName: jenkins-home
          fsType: ext4
          partition: 1

Launching Jenkins is similarly easy. Because of the simplicity of Docker and Rancher, all you need to do is take the set of deployment files and paste them into the dashboard. My preferred way is using the Kubernetes dashboard for all management purposes. From here, I can upload the Jenkins files one by one to get the server up and running.

Managing the Drone server comes down to the configuration passed at launch. Hooking up to Github involved adding OAuth2 tokens, as well as (in my case) a username and password to access a repository. Changing this would involve either granting organization access through Github, or relaunching the server with new credentials. This could hamper development, as it means that Drone cannot handle more than one source provider. As mentioned above, Jenkins allows for any number of source repos, with the caveat that each job only uses one.

Task 2 – Plugins

Plugins in Drone are very simple to configure and manage. In fact, there isn’t much you need to do to get one up and running. The ecosystem is considerably smaller than that for Jenkins, but there are still plugins for almost every major tool available. There are plugins for most major cloud providers, as well as integrations with popular source control repos. As mentioned before, containers in Drone are first class citizens. This means that each plugin and executed task is also a container.

Jenkins is the undisputed king of plugins. If you can think of a task, there is probably a plugin to accomplish it. There are, at last glance, almost 1,000 plugins available for use. The downside is that it can sometimes be difficult to determine, out of a selection of similar-looking plugins, which one is the best choice for what you're trying to accomplish.

There are Docker plugins for building and pushing images, AWS and K8s plugins for deploying to clusters, and various others. Because of the comparative youth of the Drone platform, there are a great deal fewer plugins available than for Jenkins. That does not, however, take away from their effectiveness and ease of use. A simple stanza in a drone.yml file will automatically download, configure, and run a selected plugin, with no other input needed. And remember, because of Drone's relationship with containers, each plugin is maintained within an image. There are no extra dependencies to manage; if the plugin creator has done their job correctly, everything will be contained within that container.

When I built the drone.yml file for the simple node app, adding a Docker plugin was a breeze. There were only a few lines needed, and the image was built and pushed to a Dockerhub repo of my choosing. In the next section, you can see the section labeled docker. This stanza is all that’s needed to configure and run the plugin to build and push the Docker image.

Task 3 – Pipeline Details

The last task is the bread and butter of any CI system: Drone and Jenkins are both designed to build apps. Originally, Jenkins was targeted towards Java apps, but over the years its scope has expanded to include anything you can compile and execute as code. Jenkins even excels at newer pipelines and cron-job-like scheduled tasks. However, it is not container native, though it does fit very well into the container ecosystem. Below is the drone.yml for our simple Node app.

yaml
pipeline:
  build:
    image: node:alpine
    commands:
      - npm install
      - npm run test
      - npm run build
  docker:
    image: plugins/docker
    dockerfile: Dockerfile
    repo: badamsbb/node-example
    tags: v1

For comparison, here’s a Jenkinsfile for the same app.

groovy
#!/usr/bin/env groovy
pipeline {
  agent {
    node {
      label 'docker'
    }
  }
  tools {
    nodejs 'node8.4.0'
  }
  stages {
    stage ('Checkout Code') {
      steps {
        checkout scm
      }
    }
    stage ('Verify Tools') {
      steps {
        parallel (
          node: {
            sh "npm -v"
          },
          docker: {
            sh "docker -v"
          }
        )
      }
    }
    stage ('Build app') {
      steps {
        sh "npm prune"
        sh "npm install"
      }
    }
    stage ('Test') {
      steps {
        sh "npm test"
      }
    }
    stage ('Build container') {
      steps {
        sh "docker build -t badamsbb/node-example:latest ."
        sh "docker tag badamsbb/node-example:latest badamsbb/node-example:v$"
      }
    }
    stage ('Verify') {
      steps {
        input "Everything good?"
      }
    }
    stage ('Clean') {
      steps {
        sh "npm prune"
        sh "rm -rf node_modules"
      }
    }
  }
}

While this example is verbose for the sake of explanation, you can see that accomplishing the same goal, a built Docker image, can be more involved than with Drone. In addition, what's not pictured is the setup of the interactions between Jenkins and Docker. Because Jenkins is not Docker native, agents must be configured ahead of time to properly interact with the Docker daemon. This can be confusing to some, which is where Drone comes out ahead: it is already running on top of Docker, and that same Docker is used to run its tasks.

Conclusion

Drone is a wonderful piece of CI software. It has quickly become a very popular choice for teams that want to get up and running quickly and are looking for a simple, container-native CI solution. Its simplicity is elegant, though as it is still pre-release, there is much more to come. Adventurous engineers may be willing to give it a shot in production, and indeed many have. In my opinion, it is best suited to smaller teams looking to get up and running quickly; its small footprint and ease of use lend themselves readily to this kind of development.

However, Jenkins is the tried and true powerhouse of the CI community. It takes a lot to topple the king, especially one so entrenched in his position. Jenkins has been very successful at adapting to the market, with Blue Ocean and container-based pipelines making strong cases for its staying power. Jenkins can be used by teams of all sizes, but excels at scale. Larger organizations love Jenkins for its history and numerous integrations. It also has distinct support options: either active community support for open source, or enterprise-level support through CloudBees. But as with all tools, both Drone and Jenkins have their place within the CI ecosystem.

Bio

Brandon Adams
Certified Jenkins Engineer and Docker enthusiast. I've been using Docker since the early days, and I love hearing about new applications for the technology. Currently working for a Docker consulting partner in Bethesda, MD.

Source

Expanding User Support with Office Hours

Today’s developer has an almost overwhelming amount of resources available for learning. Kubernetes development teams use StackOverflow, user documentation, Slack, and the mailing lists. Additionally, the community itself continues to amass an awesome list of resources.

One of the challenges of large projects is keeping user resources relevant and useful. While documentation can be useful, great learning also happens in Q&A sessions at conferences, or by learning with someone whose explanation matches your learning style. Consider that learning Kung Fu from Morpheus would be a lot more fun than reading a book about Kung Fu!

We as Kubernetes developers want to create an interactive experience: where Kubernetes users can get their questions answered by experts in real time, or at least referred to the best known documentation or code example.

Having discussed a few broad ideas, we eventually decided to make Kubernetes Office Hours a live stream where we take user questions from the audience and present them to our panel of contributors and expert users. We run two sessions: one for European time zones, and one for the Americas. These streaming setup guidelines make office hours extensible—for example, if someone wants to run office hours for Asia/Pacific timezones, or for another CNCF project.

To give you an idea of what Kubernetes office hours are like, here's Josh Berkus answering a question on running databases on Kubernetes. Despite the popularity of this topic, it's still difficult for a new user to get a constructive answer, and Josh's response is an excellent one.

It’s often easier to field this kind of question in office hours than it is to ask a developer to write a full-length blog post. [Editor’s note: That’s legit!] Because we don’t have infinite developers with infinite time, this kind of focused communication creates high-bandwidth help while limiting developer commitments to 1 hour per month. This allows a rotating set of experts to share the load without overwhelming any one person.

We hold office hours the third Wednesday of every month on the Kubernetes YouTube Channel. You can post questions on the #office-hours channel on Slack, or you can submit your question to Stack Overflow and post a link on Slack. If you post a question in advance, you might get better answers, as volunteers have more time to research and prepare. If a question can’t be fully solved during the call, the team will try their best to point you in the right direction and/or ping other people in the community to take a look. Check out this page for more details on what’s off- and on topic as well as meeting information for your time zone. We hope to hear your questions soon!

Special thanks to Amazon, Bitnami, Giant Swarm, Heptio, Liquidweb, Northwestern Mutual, Packet.net, Pivotal, Red Hat, Weaveworks, and VMWare for donating engineering time to office hours.

And thanks to Alan Pope, Joe Beda, and Charles Butler for technical support in making our livestream better.

Source