New Jetstackers in 2018 // Jetstack Blog

8/Mar 2018

By Hannah Morris

Introduction

As ever, the Jetstack team are incredibly busy. Recent months have seen back-to-back Kubernetes consulting, training and open source development, as more and more companies adopt Kubernetes in order to meet the demands of their business.

It has to be said that at Jetstack we are scaling to meet the demands of our business: Just 3 months into 2018, and we have already grown by 3 members! We are delighted to welcome to our team Matt (yes, another!), Charlie and Simon, who all recently joined our London office.

Matt Turner

Solutions Engineer, London

Matt

Matt has been a software engineer for 10 years, following a degree in Computer Science. He started with embedded systems, moved to development tools, then orchestrated VMs for Cisco. Most recently at Skyscanner, he introduced Kubernetes and cloud-native practices. His idea of “full-stack” is POSIX, Kubernetes, and now Istio.

It would be true to say that Matt enjoys travelling: He has been on holiday to Chernobyl, caving in Cuba, and he went to Comicon San Diego last year (outfitless, apparently). He once helped a friend move to Sweden by driving there in an £800 van. When asked about his music taste, Matt expresses his regret at not having seen S Club 7 live, but he has seen Steps and the Vengaboys. And Slayer. Oh, and he once tuned an engine so much it melted.

Charlie Egan

Solutions Engineer, London

charlie

Charlie comes from an application development background, but he’s been tinkering with containers on the side for a few years. He’s interested in how container tech can improve the developer experience and reduce anxiety around deploying software, and is looking forward to working with the team at Jetstack.

Away from the keyboard, Charlie enjoys ‘Parkrunning’ and posting pretty pictures on Instagram. He also makes a good homemade soup, which has helped him through his first weeks at Jetstack HQ during the snowy spells. Charlie is from Scotland and makes fairly regular trips home on the Caledonian Sleeper, which he promotes at almost every opportunity.

Simon Parker

Head of Growth, London

Simon

Simon has over 11 years' experience in the industry, working for companies including IBM, Accenture and Cloud Sherpas. He has been helping new startups disrupt, and large enterprise organisations adapt to remain relevant, through digital transformation. Although primarily focused on building strategic partnerships with customers, marketing and business growth, Simon is a technologist at heart and can often be found spinning up containers to test open source solutions to his problems!

In previous roles Simon travelled extensively around Europe, but he also managed to escape from working life long enough to spend some time with an indigenous tribe in Peru. He is a self-proclaimed adrenaline junkie and is currently the proud owner of a Suzuki motorbike. He also sits on the board of two charities, where he shares his passion for technology by helping them build tools with AI/machine learning and release a mobile app.

Join us!

hello@jetstack.io

The Jetstack team is growing as we take on a variety of challenging customer projects, training programmes and open source development. We have an exciting and busy road ahead, and if you’d like to be a part of it please contact us.

We are currently looking for Solutions and Operations Engineers in UK/EU. Read about what it’s like to be a Solutions Engineer at Jetstack.

Source

Seattle Business Names Heptio a Top Place to Work – Heptio

Congrats team! Seattle Business’ Washington’s 100 Best Companies to Work For has recognized Heptio as #5 in its 2018 list of great workplaces. This vote came directly from our employees and I am so incredibly proud to be a part of this amazing team.

At Heptio, we’re working hard to build a culture with intention. Hand in hand with our technical expertise, our team and values have been a growth accelerant. Even as we quadrupled headcount to support our customers’ journeys, we’ve been careful to cultivate a culture that we believe is helping us grow the right way — starting from our values: (1) Carry the fire, (2) Honest technology and (3) We before me.

These values permeate everything we do, including a couple of areas where we believe Heptio stands apart:

Attract diverse perspectives [We before me]. We place a huge emphasis on fostering diversity inside and outside our company. Our founders built diversity into the foundation of this company and it has been a focus from the beginning. We take equal pay seriously and built our compensation bands early in the company lifecycle to ensure we enforce equal pay for equal work. We think fostering a diverse community is also important so have sponsored diversity panels at conferences and had Heptonians talk about what it means to them. For KubeCon Austin 2017, our own Kris Nova fundraised trips for 100 diversity attendees. And for each download of our Gartner Cool Vendor report, we make a donation to Black Girls Code.

Obsess over communication [Honest technology]. Our CEO, Craig McLuckie, sends an email to all Heptionians every Sunday. We hold an all-hands every second Monday where we talk about everything going well and things we need to work on. We also make sure to talk about how we are spending our money; everyone is an owner, so it is important we all understand how we are doing. Each Wednesday, the team, including remote folks, gathers in person and via conferencing for an hour-long water cooler break where we shoot the breeze. When we do a breakfast celebration at HQ, we send coffee gift cards to remote Heptionians so they can toast with us. We’re not perfect, but we obsess over communication to create a strong connection with all employees.

Have a purpose [Carry the fire]. Heptionians have a purpose and are motivated to drive change. We donate ideas and projects to keep Kubernetes open source because we believe this model best serves the user community. We draw inspiration from our rapid growth ‒ facilitated by partners and customers who value our approach. Our collaborations with Amazon and Microsoft underscore the appeal of open source Kubernetes even to tech titans who have deep resources to build proprietary alternatives. And our customers share incredible feedback on the tools, training, services and support we bring to unlock the full potential of Kubernetes.

A key to shaping our culture is working with others who can be themselves. Given our values we have attracted some amazing, outspoken people, and so we work together to create an environment where we can engage others and advocate for the causes that matter in our lives. We’re growing fast and we need more people who care deeply about their work and life. If you are interested in joining us on our mission, please take a look at our openings.

Ok, in true Heptio fashion, that’s enough time celebrating … we’ve got more work to do.

Source

Compare Docker for Windows options

As part of DockerCon 2017, it was announced that Linux containers can run as hyper-v containers on Windows Server. This announcement made me take a deeper look into Windows containers, since I have mostly worked with Linux containers so far and, on Windows, have mostly used Docker Machine or Toolbox. I recently tried out other methods of deploying containers on Windows. In this blog, I will cover the different methods of running containers on Windows, the technical internals behind them and a comparison between them. I have also covered Windows Docker base images and my experiences trying the different methods of running Docker containers on Windows. The three methods I am covering are Docker Toolbox/Docker Machine, Windows native containers and hyper-v containers.

Docker Toolbox

Docker Toolbox runs the Docker engine on top of the boot2docker VM image running in the VirtualBox hypervisor. We can run Linux containers on top of this Docker engine. I have written a few blogs (1, 2) about Docker Toolbox before. Docker Toolbox runs on any Windows variant.

Windows Native containers

Folks familiar with Linux containers know that they use Linux kernel features like namespaces and cgroups. To containerize Windows applications, the Docker engine for Windows needs to use the corresponding Windows kernel features, and Microsoft worked with Docker to make this happen. As part of this effort, changes were made on both the Docker and the Windows side. This mode allows Windows containers to run directly on Windows Server 2016, which has the necessary container primitives built in. Going forward, Microsoft will port this functionality to other flavors of Windows.

hyper-v containers

A Windows hyper-v container is a Windows Server container that runs in a VM. Every hyper-v container creates its own VM, which means there is no kernel sharing between different hyper-v containers. This is useful where an additional level of isolation is needed by customers who don’t like the traditional kernel sharing done by containers. The same Docker images and CLI can be used to manage hyper-v containers; creating one is just a runtime option, and there is no difference in how containers are built or managed between Windows Server containers and hyper-v containers. Startup time for a hyper-v container is higher than for a Windows native container, since a new lightweight VM gets created each time. One common question is how a hyper-v container differs from running a general VM (under VirtualBox or the hyper-v hypervisor) with a container on top of it. Following are some differences as I see them:

  • A hyper-v container is very lightweight. This is because of the lightweight OS and other optimizations.
  • hyper-v containers do not appear as VMs inside hyper-v and cannot be managed by regular hyper-v tools.
  • The same Docker CLI can be used to manage hyper-v containers. To some extent, this is also true with Docker Toolbox and Docker Machine, but with hyper-v containers it’s more integrated and becomes a single-step process.

There are two modes of hyper-v containers.

  1. Windows hyper-v container – Here, the hyper-v container runs on top of a Windows kernel. Only Windows containers can be run in this mode.
  2. Linux hyper-v container – Here, the hyper-v container runs on top of a Linux kernel. This mode was not available earlier; it was introduced at DockerCon 2017. Any Linux flavor can be used as the base kernel, and Docker’s LinuxKit project can be used to build the Linux kernel needed for the hyper-v container. Only Linux containers can be run in this mode.

We cannot use Docker Toolbox and hyper-v containers at the same time. Virtualbox cannot run when “Docker for Windows” is installed.

The following picture illustrates the different Windows container modes.

windows_container_types

The following table captures the differences between the Windows container modes:

Windows mode / Feature | Toolbox | Windows native container | hyper-v container
OS Type | Any Windows flavor | Windows Server 2016 | Windows 10 Pro, Windows Server 2016
Hypervisor/VM | VirtualBox hypervisor | No separate VM for the container | VM runs inside hyper-v
Windows containers | Not possible | Yes | Possible (Windows hyper-v container)
Linux containers | Yes | Not possible | Possible (Linux hyper-v container)
Startup time | Higher than Windows native and hyper-v containers | Least among the three options | Between Toolbox and Windows native containers

Hands-on

If you are using Windows 10 Pro or Windows Server 2016, you can install Docker for Windows from here. This installs the Docker CE version and runs Docker for Windows in hyper-v mode. We can install from either the stable or the edge channel. Docker for Windows was previously available only for Windows 10; the edge channel added support for Windows Server 2016 just recently. Once “Docker for Windows” is installed, we can switch between Linux and Windows mode with the click of a button. As of now, Linux mode uses mobyLinuxVM; this will change later to the hyper-v Linux container mode. In order to run hyper-v containers, the Hyper-V role has to be enabled in Windows. If the Windows host is itself a Hyper-V virtual machine, nested virtualization will need to be enabled before installing the Hyper-V role. For more details, please refer to these two references (1, 2). As shown in the example from the reference, we can start hyper-v containers by just specifying a runtime option to Docker:

docker run -it --isolation=hyperv microsoft/nanoserver cmd

If you are using Windows Server 2016, Docker EE can be installed using the procedure here. I would advise using Docker EE on Windows Server 2016 rather than using hyper-v containers.

I have tried Docker Toolbox on the Windows 7 Enterprise version. Docker Toolbox can be run on any version of Windows, and its installation also installs VirtualBox if it’s not already installed. Docker Toolbox can be installed from here. For a Docker Toolbox hands-on example, please refer to my earlier blog here.

I tried out Windows native containers and hyper-v containers in the Azure cloud. After creating a Windows Server 2016 VM, I used the following commands to install the Docker engine. These commands have to be executed from PowerShell in administrator mode.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force

Following are some example Windows containers I tried:

docker run microsoft/dotnet-samples:dotnetapp-nanoserver
docker run -d --name myIIS -p 80:80 microsoft/iis

Since Azure uses a hypervisor to host compute VMs, and nested virtualization is not supported in Azure, Docker for Windows cannot be used with Windows Server 2016 in Azure.
I got the following error when I started “Docker for Windows” in Linux mode:

Unable to write to the database. Exit code: 1
at Docker.Backend.ContainerEngine.Linux.DoStart(Settings settings) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Backend\ContainerEngine\Linux.cs:line 243
at Docker.Backend.ContainerEngine.Linux.Start(Settings settings) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Backend\ContainerEngine\Linux.cs:line 120
at Docker.Core.Pipe.NamedPipeServer.<>c__DisplayClass8_0.b__0(Object[] parameters) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Core\pipe\NamedPipeServer.cs:line 44
at Docker.Core.Pipe.NamedPipeServer.RunAction(String action, Object[] parameters) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Core\pipe\NamedPipeServer.cs:line 140

I was still able to use hyper-v containers in Azure in Windows mode on Windows Server 2016. I am still not fully clear on how this mode overcame the nested virtualization problem.

From an Azure perspective, I would like to see these changes from Microsoft:

  • Azure supporting nested virtualization.
  • Allowing Windows 10 in Azure without MSDN subscription.

There was an announcement earlier this week at Microsoft Build conference that Azure will support nested virtualization in selected VM sizes. This is very good.

Windows base image

Every container has a base image that contains the needed packages and libraries. Windows containers support two base images:

  1. microsoft/windowsservercore – a full-blown Windows server with full .NET Framework support. The size is around 9 GB.
  2. microsoft/nanoserver – a minimal Windows server and .NET Core framework. The size is around 600 MB.

The following picture from here shows the compatibility between Windows Server OS, container type and container base image.

baseimage

As we can see from the picture, with hyper-v containers we can only use the nanoserver container base image.

FAQ

Can I run Linux containers in Windows?

  • The answer depends on which Docker windows mode you are using. With Toolbox and hyper-v Linux containers, Linux containers can be run in Windows. With Windows native container mode, Linux containers cannot be run in Windows.

Which Docker for Windows mode should I use?

  • For development purposes, if there is a need to use both Windows and Linux containers, hyper-v containers can be used. For production purposes, we should use Windows native containers; if better kernel isolation is needed for additional security, hyper-v containers can be used. If you have a version of Windows that is neither Windows 10 nor Windows Server 2016, Docker Toolbox is the only option available.

Can we run Swarm mode and Overlay network with Windows containers?

  • Swarm mode support was added recently to Windows containers. Multiple containers across Windows hosts can talk over the overlay network. This needs a Windows Server update, as mentioned in the link here. The same link also describes a mixed-mode Swarm cluster with Windows and Linux nodes: we can have a mix of Windows and Linux containers talking to each other over the Swarm cluster. Using Swarm’s constraint-based scheduling feature, we can place Windows containers on Windows nodes and Linux containers on Linux nodes, as sketched below.
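
For a mixed Swarm cluster, a hedged sketch of such placement constraints could look like this (service names and images are just examples; node.platform.os is the built-in node attribute used in the constraint):

# run the Windows image only on Windows nodes
docker service create --name win-iis --constraint 'node.platform.os == windows' microsoft/iis
# run the Linux image only on Linux nodes
docker service create --name linux-web --constraint 'node.platform.os == linux' nginx:alpine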

Is there an additional Docker EE license needed for Windows server 2016?

  • According to the article here, it is not needed. It is better to check, as this might change. Obviously, the Windows license has to be taken care of separately.

References

Source

NextCloudPi docker for Raspberry Pi – Own your bits

Note: some of this information is outdated, check a newer release here

I would like to introduce my NextCloud ARM container for the Raspberry Pi.

It weighs only 475 MB and shares its codebase with NextCloudPi, so it has the same features:

  • Raspbian 9 Jessie
  • Nextcloud 13.0.1
  • Apache 2.4.25, with HTTP2 enabled
  • PHP 7.0
  • MariaDB 10
  • Automatic redirection to HTTPS
  • APCu PHP cache
  • PHP Zend OPcache enabled with file cache
  • HSTS
  • Cron jobs for Nextcloud
  • Sane configuration defaults
  • Secure
  • Small: only 475 MB on disk, 162 MB compressed download

With this containerization, users no longer need to start from scratch to run NextCloud on their RPi, as opposed to flashing the NextCloudPi image. It also opens new possibilities for easy upgrades and sandboxing for extra security.

It can be run on any system other than Raspbian, as long as it supports Docker.

Some of the extras will be added soon, where it makes sense.

Installation

If you haven’t yet, install Docker on your Raspberry Pi.

curl -sSL get.docker.com | sh

Adjust permissions, assuming you want to manage Docker with the user pi:

sudo usermod -aG docker pi

newgrp docker

 

Optionally, store containers on an external USB drive. Change the following line in the Docker systemd service file (adjust the path accordingly):

ExecStart=/usr/bin/dockerd -g /media/USBdrive/docker -H fd://

Reload changes

systemctl daemon-reload

systemctl restart docker

You can check that it worked with

$ docker info | grep Root

Docker Root Dir: /media/USBdrive/docker

Usage

The only parameter that we need is the trusted domain that we want to allow.

DOMAIN=192.168.1.130 # example for allowing an IP

DOMAIN=myclouddomain.net # example for allowing a domain

docker run -d -p 443:443 -p 80:80 -v ncdata:/data --name nextcloudpi ownyourbits/nextcloudpi $DOMAIN

After a few seconds, you can access it from your browser by just typing the IP or URL in the navigation bar. It will redirect you to the HTTPS site.

The admin user is ncp, and the default password is ownyourbits. Log in to create users, change the default password and adjust other configuration.

Other than that, we could map different ports if we wanted to. Note that a volume named ncdata will be created, where configuration and data will persist.
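
A hedged variant with alternative host ports (8080 and 8443 are arbitrary choices; the container still listens on 80/443 and the same named volume keeps the data):

docker run -d -p 8443:443 -p 8080:80 -v ncdata:/data --name nextcloudpi ownyourbits/nextcloudpi $DOMAIN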

For example, you could wrap a script like this to allow your current local IP

#!/bin/bash

# Initial Trusted Domain
IFACE=$( ip r | grep "default via" | awk '{ print $5 }' )
IP=$( ip a | grep "global $IFACE" | grep -oP '\d{1,3}(\.\d{1,3}){3}' | head -1 )

docker run -d -p 443:443 -p 80:80 -v ncdata:/data --name nextcloudpi ownyourbits/nextcloudpi $IP

If you ever need direct access to your storage, you can find out where your files are located.

 


 

$ docker inspect nextcloudpi
"Mounts": [
    {
        "Type": "volume",
        "Name": "ncdata",
        "Source": "/media/USBdrive/docker/volumes/ncdata/_data",
        "Destination": "/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
],

 

In this way you can alter your config.php.

Details

The container consists of 3 main layers, totalling 476 MB.

A benefit of docker layers is that we can sometimes just update the upper layers, or provide updates on top of the current layout.

Code

The build code is now part of the NextCloudPi repository.

You can build it yourself in a Raspbian ARM environment with:

git clone https://github.com/nextcloud/nextcloudpi.git

make -C nextcloudpi

dockerhub

Source

Using Helm without Tiller – Giant Swarm

What You Yaml is What You Get

When starting with Kubernetes, learning how to write manifests and bringing them to the apiserver is usually the first step. Most probably kubectl apply is the command for this.

The nice thing here is that all the things you want to run on the cluster are described precisely and you can easily inspect what will be sent to the apiserver.

After the joy of understanding how this works, it quickly becomes cumbersome to copy your manifests around and edit the same fields over different files to get a slightly adjusted deployment out.

The obvious solution to this is templating, and Helm is the most well-known solution in the Kubernetes ecosystem to help out with this. Most how-tos directly advise you to install the cluster-side Tiller component. Unfortunately this comes with a bit of operational overhead and, even more importantly, you also need to take care to secure access to Tiller, since it is a component running in your cluster with full admin rights.

If you want to see what will actually be sent to the cluster, you can leave out Tiller and use Helm locally just for the templating, applying the result with kubectl in the end.

There is no need for Tiller and there are roughly three steps to follow:

  1. Fetching the chart templates
  2. Rendering the template with configurable values
  3. Applying the result to the cluster

This way you benefit from the large number of maintained Charts the community is building, but have all the building blocks of an application in front of you. When keeping them in a git repo it is easy to compare changes from new releases with the current manifests you used to deploy to your cluster. This approach might nowadays be called GitOps.

A possible directory structure could look like this:

kubernetes-deployment/
charts/
values/
manifests/

For the following steps the helm client needs to be installed locally.

Fetching the chart templates

To fetch the source code of a chart, you need the URL of the repository, the chart name and the desired version:

helm fetch \
  --repo https://kubernetes-charts.storage.googleapis.com \
  --untar \
  --untardir ./charts \
  --version 5.5.3 \
  prometheus

After this the template files can be inspected under ./charts/prometheus.

Rendering the template with configurable values

The default values.yaml should be copied to a different location for editing so it is not overwritten when updating the chart source.

cp ./charts/prometheus/values.yaml ./values/prometheus.yaml

The copied prometheus.yaml can now be adjusted as needed. To render the manifests from the template source with the potentially edited values file:

helm template \
  --values ./values/prometheus.yaml \
  --output-dir ./manifests \
  ./charts/prometheus

Applying the result to the cluster

Now the resulting manifests can be thoroughly inspected and finally be applied to the cluster:

kubectl apply --recursive --filename ./manifests/prometheus

Conclusion

With just the standard helm command we can closely check the whole chain, from the chart's contents to the app coming up on our cluster. To make these steps even easier I have put them in a simple plugin for helm and named it nomagic.

Caveats

There might be dragons. It might be that an application needs different kinds of resources that depend on each other. For example, applying a Deployment that references a ServiceAccount won’t work until that ServiceAccount is present. As a workaround, the filename of the ServiceAccount’s manifest under manifests/ could be prefixed with 1-, since kubectl apply progresses over files in alphabetical order. This is not needed in setups with Tiller, so it is usually not considered in the upstream charts. Alternatively, run kubectl apply twice to create all independent objects in the first run; the dependent ones will show up after the second run.
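
As a rough sketch of both workarounds (the serviceaccount.yaml filename is an assumption about how the chart names its templates):

# make the ServiceAccount sort first so kubectl creates it before the Deployment
mv ./manifests/prometheus/templates/serviceaccount.yaml \
   ./manifests/prometheus/templates/1-serviceaccount.yaml

# or simply apply twice; objects that failed due to missing dependencies succeed on the second pass
kubectl apply --recursive --filename ./manifests/prometheus
kubectl apply --recursive --filename ./manifests/prometheus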

And obviously you lose features that Tiller itself provides. According to the Helm 3 Design Proposal these will be provided in the long run by the Helm client itself and an optional Helm controller. With the release of Helm 3 the nomagic plugin won’t be needed, but it also might not function any more since plugins need to be implemented in Lua. So grab it while it’s useful!

Please share your thoughts about this, other caveats or ideas to improve.

And as always: If you’re interested in how Giant Swarm can run your Kubernetes on-prem or in the cloud contact us today to get started.

Further Reading

Source

Docker for Mac with Kubernetes support

Jan 29, 2018

by Catalin Jora

During DockerCon Copenhagen, Docker announced support and integration for Kubernetes, alongside Swarm. The first integration is in Docker for Mac, where you can now run a one-node Kubernetes cluster. This allows you to deploy apps with docker-compose files to that local Kubernetes cluster via the docker CLI. In this blogpost, I’ll cover what you need to know about this integration and how to make the most out of it.

While a lot of computing workload moves to the cloud, the local environment is still relevant. This is the first place where software is built and executed, and where (unit) tests run. Docker helped us get rid of the famous “it works on my machine” by automating the repetitive and error-prone tasks. But unless you’re into building “hello world” apps, you’ll have to manage the lifecycle of a bunch of containers that need to work together. Thus, you’ll need management for your running containers, commonly referred to nowadays as orchestration.

All major software orchestration platforms have their own “mini” distribution that can run on a developer machine. If you work with Mesos you have minimesos (container based); for Kubernetes there is minikube (virtual machine). RedHat offers both a virtual machine (minishift) and a container-based tool (oc cli) for their K8s distribution (OpenShift). Docker has Compose, swarm-mode orchestration and, since recently, also supports Kubernetes (for now only in Docker for Mac).

If you’re new to Kubernetes, you’ll want to familiarize yourself with the basic concepts using this official Kubernetes tutorial we built together with Google, Remembertoplay and Katacoda.

Enabling Kubernetes in Docker for Mac will install a containerized distribution of Kubernetes and its CLI (kubectl), which will allow you to interact with the cluster. In terms of resources, the new cluster will use whatever Docker for Mac has available.

The release is in beta (at the time of writing the article) and available via the Docker Edge channel. Once you’re logged in with your Docker account, you can enable Kubernetes via the dedicated menu from the UI:

kubernetes docker mac

At this point, if you have never connected to a Kubernetes cluster on your Mac, you’re good to go: kubectl will point to the new (and only) configured cluster. If this is not the case, you’ll need to point kubectl to the right cluster, as Docker for Mac will not change your default Kubernetes context. You’ll need to manually switch the context to ‘docker-for-desktop’:

kubectl config get-contexts
CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop
*         minikube             minikube                     minikube

kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Switching to the kubectl utility now, you should be able to run commands against the new cluster:

kubectl cluster-info

Kubernetes master is running at https://localhost:6443

KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

Note: You may already have another kubectl installed on your machine (e.g. installed via the gcloud utility if you used GKE before, or as a stand-alone program if you used minikube). Docker will automatically install a new kubectl binary in /usr/local/bin/. You’ll need to decide which one to keep.

OK, let’s try to install our first apps using a docker-compose file. Yes, the previous sentence is correct: if you want to deploy apps to your new local Kubernetes cluster using the docker CLI, a docker-compose file is the only way. If you already have some Kubernetes manifests you plan to deploy, you can do it the known way, with kubectl.

Here we’re using the demo app from the official Docker page about Kubernetes:

wget https://raw.githubusercontent.com/jocatalin/k8s-docker-mac/master/docker-compose.yaml

docker stack deploy -c docker-compose.yaml hello-k8s-from-docker

Stack hello-k8s-from-docker was updated

Waiting for the stack to be stable and running…

– Service db has one container running

– Service words has one container running

– Service web has one container running

Stack hello-k8s-from-docker is stable and running

If you’re familiar with Docker swarm, there is nothing new about the command. What’s different is that the stack was deployed to our new Kubernetes cluster. The command generated deployments, replica sets, pods and services for the 3 applications defined in the compose file:

 


 

kubectl get all
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/db      1         1         1            1           39s
deploy/web     1         1         1            1           39s
deploy/words   1         1         1            1           39s

NAME                  DESIRED   CURRENT   READY   AGE
rs/db-794c8bc8d9      1         1         1       39s
rs/web-54cbf7d7fb     1         1         1       39s
rs/words-575cd67dff   1         1         1       39s

NAME                        READY   STATUS    RESTARTS   AGE
po/db-794c8bc8d9-mrw79      1/1     Running   0          39s
po/web-54cbf7d7fb-mx4c7     1/1     Running   0          39s
po/words-575cd67dff-ddgw2   1/1     Running   0          39s

NAME        TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
svc/db      ClusterIP      None                        55555/TCP      39s
svc/web     LoadBalancer   10.96.17.42                 80:31420/TCP   39s
svc/words   ClusterIP      None                        55555/TCP      39s

Making any change in the docker-compose file and re-deploying it (e.g. changing the number of replicas or the image version) will update the Kubernetes app accordingly. The concept of namespaces is supported as well, via the --namespace parameter. Deleting the application stacks can also be done via the docker CLI.
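
A hedged sketch of those two operations (the exact flag spelling may vary slightly between Docker releases, and the demo namespace is just an example):

# deploy the same stack into a dedicated Kubernetes namespace
docker stack deploy --namespace demo -c docker-compose.yaml hello-k8s-from-docker
# tear the stack down again, removing the generated Kubernetes objects
docker stack rm hello-k8s-from-docker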

Will Docker for Mac allow you to deploy compose files with the docker CLI on other Kubernetes clusters? No, it won’t. Trying to deploy the same file on another cluster will return this error:

could not find compose.docker.com api.

Install it on your cluster first

The integration is implemented at the API level. Open a proxy to the Docker for Mac Kubernetes cluster:

kubectl proxy

Starting to serve on 127.0.0.1:8001

Going to http://localhost:8001 in the browser will reveal some new APIs that are (most probably) responsible for this compose-to-Kubernetes-manifest translation:

"/apis/compose.docker.com"
"/apis/compose.docker.com/v1beta1"

So at this point, if you want to deploy the same application stack on other clusters, you need to use something like Kompose to convert docker-compose files to Kubernetes manifests (it didn’t work for my example), or write the manifests by hand.
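
For completeness, a hedged sketch of the Kompose route (results vary, as noted above; the k8s/ output directory is an arbitrary choice):

# convert the compose file into plain Kubernetes manifests, then apply them with kubectl
kompose convert -f docker-compose.yaml -o k8s/
kubectl apply -f k8s/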

There are a few advantages here if we’re comparing this implementation with minikube:

  • If you’re new to Kubernetes, you can deploy and run a local cluster without any other tools or Kubernetes knowledge
  • You can reuse the docker-compose files and deploy apps both on Swarm and Kubernetes (think PoCs or migration use cases)
  • You’ll have one registry for local docker images and the Kubernetes cluster (not the case with minikube for example)

There are also some disadvantages:

  • The Kubernetes version is hardcoded
  • It’s more or less a “read-only” Kubernetes that you can’t really change
  • Mixing the terminologies (using the docker CLI to deploy to k8s) can become somewhat confusing

If you’re completely new to Kubernetes but you’re familiar with Docker, this approach allows you to get a peek at what K8s can do from a safe zone (the docker CLI). But for application debugging and writing manifests, you’ll need to learn some Kubernetes.

In typical Docker style, the implementation is clean, simple and user-friendly. Will this make minikube obsolete? For casual users, probably yes. But if you want to run a specific version of Kubernetes, specific add-ons, or cover more advanced use cases, minikube is still the way to go. Further integration will come into the Docker stack, so for enterprise and Windows, keep an eye here.

Source

Introduction to Kubernetes | Rancher Labs

A Detailed Overview of Rancher’s Architecture

This newly-updated, in-depth guidebook provides a detailed overview of the features and functionality of the new Rancher: an open-source enterprise Kubernetes platform.

Get the eBook

Knowing the benefits of containers – a consistent runtime environment, small size on disk, low overhead, and isolation, to name just a few – you pack your application into a container and are ready to run it. Meanwhile, your colleagues do the same and are ready to run their containerized applications too. Suddenly you need a way to manage all the running containers and their lifecycles: how they interconnect, what hardware they run on, how they get their data storage, how they deal with errors when containers stop running for whatever reason, and more.

Here’s where Kubernetes comes into play.

In this article we’re going to look at what Kubernetes is, how it solves the container orchestration problem, take the theory behind it and bind that theory directly with hands-on tasks to help you understand every part of it.

Kubernetes: A history

Kubernetes, also known as k8s (k … 8 letters … and s) or kube, is a Greek word meaning governor, helmsman or captain. The play on nautical terminology is apt, since ships and large vessels carry vast amounts of real-life containers, and the captain or helmsman is the one in charge of the ship. Hence the analogy of Kubernetes as the captain, or orchestrator, of containers in the information technology space.

Kubernetes started as an open source project by Google in 2014, based on 15 years of Google’s experience running containers. It has seen enormous growth and widespread adoption, and has become the default go-to system for managing containers. Several years later, we have production-ready Kubernetes releases that are already used by small and big companies alike, from development to production.

Kubernetes momentum

With over 40,000 stars on GitHub, over 60,000 commits in 2018, and more pull requests and issue comments than any other project on GitHub, Kubernetes has grown very rapidly. Some of the reasons behind its growth are its scalability and robust design patterns – more on that later. Large software companies have published their use of Kubernetes in these case studies.

What Kubernetes has to offer

Let’s see what the features are that attract so much interest in Kubernetes.

At its core, Kubernetes is a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform-as-a-Service (PaaS) with the flexibility of Infrastructure-as-a-Service (IaaS), and enables portability across infrastructure providers. Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn’t matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

Kubernetes Concepts

To work with Kubernetes, you use Kubernetes API objects to describe your cluster’s desired state: what applications or other workloads you want to run, what container images they use, the number of replicas, what network and disk resources you want to make available, and more. You set your desired state by creating objects using the Kubernetes API, typically via the command-line interface, kubectl. You can also use the Kubernetes API directly to interact with the cluster and set or modify your desired state.

Once you’ve set your desired state, the Kubernetes Control Plane works to make the cluster’s current state match the desired state. To do so, Kubernetes performs a variety of tasks automatically –- such as starting or restarting containers, scaling the number of replicas of a given application, and more.

The basic Kubernetes objects include Pods, Services, Volumes and Namespaces.

In addition, Kubernetes contains a number of higher-level abstractions called Controllers. Controllers build upon the basic objects, and provide additional functionality and convenience features. They include ReplicaSets, Deployments, StatefulSets, DaemonSets and Jobs.

Let’s describe these one by one, and afterward we’ll try them with some hands-on exercises.

Node

A Node is a worker machine in Kubernetes, previously known as a minion. A node may be a virtual machine (VM) or a physical machine, depending on the cluster. Each node contains the services necessary to run Pods and is managed by the master components. You can think of it this way: a Node is to Pods what a hypervisor is to VMs.

Pod

A Pod is the basic building block of Kubernetes – the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly grouped and that share resources.

Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.

Pods in a Kubernetes cluster can be used in two main ways:

  • Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages Pods rather than the containers directly.
  • Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service – one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.

Pods provide two kinds of shared resources for their constituent containers: networking and storage.

  • Networking: Each Pod is assigned a unique IP address. Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost. When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use the shared network resources (such as ports).
  • Storage: A Pod can specify a set of shared storage volumes. All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted.
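
To make both kinds of sharing concrete, here is a minimal, hedged sketch of a Pod manifest (all names and images are illustrative): two containers share an emptyDir volume, and because they also share the Pod's network namespace, they could reach each other via localhost.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # scratch space shared by both containers
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: content-writer
      image: busybox
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
EOF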

Service

Kubernetes Pods are mortal: they are born and they die, and they are not resurrected. Even though each Pod gets its own IP address, you cannot rely on it being stable over time. This creates a problem: if a set of Pods (let’s say backends) provides functionality to another set of Pods (let’s say frontends) inside a Kubernetes cluster, how can those frontends keep reliable communication with the backend Pods?

Here’s where Services come into play.

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.

For example, if you have a backend application with 3 Pods, those Pods are fungible: frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling.

For applications in the same Kubernetes cluster, Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For apps outside the cluster, Kubernetes offers a Virtual-IP-based bridge to Services which redirects to the backend Pods.
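
As a hedged illustration of that abstraction (the labels and ports are made up), a Service selects backend Pods by label and gives clients one stable name and virtual IP:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # targets all Pods labeled app=backend
  ports:
    - protocol: TCP
      port: 80          # port clients connect to
      targetPort: 8080  # port the backend containers listen on
EOF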

Volume

On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, it will be restarted by Kubernetes, but the files will be lost – the Container starts with a clean state. Second, when running multiple Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.

At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.

A Kubernetes volume has an explicit lifetime – the same as the Pod that encloses it. Consequently, a volume outlives any containers that run inside the Pod, and data is preserved across container restarts. Normally, when a Pod ceases to exist, the volume will cease to exist, too. Kubernetes supports multiple types of volumes, and a Pod can use any number of them simultaneously.

Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.

It is not necessary to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.
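
A quick sketch of that scoping (the namespace name and image are arbitrary; kubectl run created a Deployment in the Kubernetes versions current at the time of writing):

kubectl create namespace staging
kubectl run nginx --image=nginx:1.13                       # "nginx" in the default namespace
kubectl run nginx --image=nginx:1.13 --namespace=staging   # the same name is fine in another namespace
kubectl get deployments --all-namespaces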

ReplicaSet

A ReplicaSet ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicaSet makes sure that a pod or a homogeneous set of pods is always up and available. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, it is recommended to use Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.

This actually means that you may never need to manipulate ReplicaSet objects directly: use a Deployment instead.

Deployment

A Deployment controller provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
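
A minimal sketch of that declarative flow (names, image and replica count are illustrative): declare the desired state, let the Deployment controller reconcile towards it, then change the desired state and watch the cluster converge again.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                  # desired state: three Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.13
EOF

# change the desired state later and the controller converges to it
kubectl scale deployment nginx-deployment --replicas=5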

StatefulSets

A StatefulSet is used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.

A StatefulSet operates under the same pattern as any other Controller. You define your desired state in a StatefulSet object, and the StatefulSet controller makes any necessary updates to get there from the current state. Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:

  • running a cluster storage daemon, such as glusterd or ceph, on each node.
  • running a logs collection daemon on every node, such as fluentd or logstash.
  • running a node monitoring daemon on every node, such as Prometheus Node Exporter or collectd.
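
As a hedged sketch of the log-collection case above (the image and labels are illustrative), a DaemonSet that runs one fluentd Pod per node:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd
EOF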

Job

A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As Pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the Job itself is complete. Deleting a Job will clean up the Pods it created.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot).
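
For instance, a minimal sketch of that simple case (this is the classic compute-pi example; the image and command are illustrative): the Job runs one Pod to completion and retries, up to a limit, if it fails.

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4          # give up after four failed attempts
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
EOF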

Operational Challenges

Now that you’ve seen the objects used in Kubernetes, it’s obvious that there is a ton of information to understand in order to use Kubernetes properly. A few of the challenges that come to mind when trying to use Kubernetes include:

  • How to deploy consistently across different infrastructures?
  • How to implement and manage access control across multiple clusters (and namespaces)?
  • How to integrate with a central authentication system?
  • How to partition clusters to more efficiently use resources?
  • How to manage multi-tenancy, multiple dedicated and shared clusters?
  • How to create highly available clusters?
  • How to ensure that security policies are enforced across clusters/namespaces?
  • How to monitor so there’s sufficient visibility to detect and troubleshoot issues?
  • How to keep up with Kubernetes development, which is moving at a very fast pace?

Here’s where Rancher can help you. Rancher is an open source container manager used to run Kubernetes in production. Below are some features that Rancher provides:

  • easy-to-use interface for kubernetes configuration and deployment;
  • infrastructure management across multiple clusters and clouds;
  • automated deployment of the latest kubernetes release;
  • workload, RBAC, policy and project management;
  • 24×7 enterprise-grade support.

Rancher becomes your single point of control for multiple clusters running on pretty much any infrastructure that can run Kubernetes:

Rancher overview

Hands-on with Rancher and Kubernetes

Now let’s see how you can use the Kubernetes objects described previously with Rancher’s help. To start, you will need a Rancher instance. Please follow this guide to start one in a few minutes and create a Kubernetes cluster with it.

After starting your cluster, you should see your cluster’s resources in Rancher:

Rancher Kubernetes resources

To start with the first Kubernetes Object – the Node, on the top menu, click on Nodes. You should see a nice overview of the Nodes that form your Kubernetes cluster:

Rancher Kubernetes nodes

There you can also see the number of pods already deployed to each node from your Kubernetes Cluster. Those pods are used by Kubernetes and Rancher internal systems. Normally you shouldn’t have to deal with those.

Let’s proceed with an example of a Pod. To do that, go to the Default project of your Kubernetes cluster and you should land on the Workloads tab. Let’s deploy a workload: click on Deploy, set the Name and the Docker image to nginx, leave everything else at its default value and click Launch.

Once created, the Workloads tab should show the nginx Workload.

Rancher Kubernetes Workload

If you click on the nginx workload, you will see that, under the hood, Rancher actually created a Deployment – just as recommended by Kubernetes to manage ReplicaSets – and you will also see the Pod created by that ReplicaSet:

Rancher Workload View

Now you have a Deployment that makes sure our desired state is correctly represented in the cluster. Let’s play with it a little and scale this Workload to 3 by clicking the + near Scale. Once you do that, you should instantly see 2 more Pods created and 2 more ReplicaSet scaling events. Try to delete one of the Pods using its right-hand side menu, and notice how the ReplicaSet recreates it to match the desired state.

So you have your application up and running, and it is already scaled to 3 instances; the question that comes to mind now is: how can you access it? Here we will try the Service Kubernetes object. To expose our nginx workload, we need to edit it: select Edit from the right-hand side menu of the Workload. You will be presented with the Deploy Workload page, already filled with your nginx workload’s details:

Rancher edit workload

Notice that you have 3 pods, next to Scalable Deployment, but when you started, the default was 1. This is a result of the scaling you’ve done just a bit earlier.

Now click on Add Port and fill the values as follows:

  • set the Publish the container port value to 80;
  • leave the Protocol to be TCP;
  • set the As a value to Layer-4 Load Balancer;
  • set the On listening port value to 80.

And confidently click on Upgrade. This will create an external load balancer in your cloud provider and will direct traffic to the nginx Pods in your Kubernetes cluster. To test this, go again to the nginx workload overview page, and now you should see an 80/tcp link right next to Endpoints:

Rancher external load balancer

If you click on 80/tcp, it will take you to the external IP of the load balancer that you just created and should present you with the default nginx page, confirming that everything works as expected.

With this, you’ve covered most of the Kubernetes objects presented above. You can play around in Rancher with Volumes and Namespaces, and you’ll surely figure out how to use them properly. As for StatefulSets, DaemonSets and Jobs, those are very similar to Deployments, and in Rancher you’d create one of those from the Workloads tab as well, by selecting the Workload type.

Some final thoughts

Let’s recap what you’ve done in the above hands-on exercises. You’ve created most of the Kubernetes objects we described:

  • you started with a kubernetes cluster in Rancher;
  • you then browsed cluster Nodes;
  • then you created a Workload;
  • then you’ve seen that a Workload actually created 3 separate Kubernetes objects: a Deployment that manages a ReplicaSet, that in turn, keeps the desired number of Pods running;
  • after that you scaled your Deployment and observed how that in turn changed the ReplicaSet and consequently scaled the number of Pods;
  • and lastly you created a Service of type Load Balancer that balances clients’ requests across the Pods.

And all that was easily done via Rancher, with point-and-click actions, without the need to install any software locally, to copy authentication configurations or to run command lines in a terminal, all that was needed – a browser. And that’s just the surface of Rancher. Pretty convenient I’d say.

Roman Doroschevici


Source

Getting acquainted with Kubernetes 1.10

Kubernetes 1.10 is here

Kubernetes, a leading open source project for automating deployment, scaling, and management of containerized applications, announced version 1.10 today. Among the key features of this release are support for the Container Storage Interface (CSI), API aggregation, a new mechanism for supporting hardware devices, and more.

It’s also the first release since CoreOS joined Red Hat. CoreOS already had the opportunity to work closely with our new Red Hat colleagues through the Kubernetes community and we now have the opportunity to redouble our efforts to help forward Kubernetes as an open source and community-first project.

The Kubernetes project gave a sneak peek at the feature list of Kubernetes 1.10 when the beta was released, but here we’ll take a closer look at some of the more significant developments. First, however, it may be helpful to give a quick refresher on how Kubernetes is developed and new features are added to the system.

From alpha to stable

As you may know, Kubernetes is a system composed of a number of components and APIs. Not all of them can be developed simultaneously or reach maturity at the same time. Because of this, Kubernetes releases include features that are considered alpha, beta, and stable quality. The Kubernetes community defines these features as:

  • Alpha features should be considered tentative. They may change dramatically by the time they’re considered production-ready, or they may be dropped entirely. These features are not enabled by default.
  • Beta features are considered well-tested and will not be dropped from Kubernetes, but they may yet change. These features are available by default.
  • Stable features are considered suitable for production use. Their APIs won’t change the way beta and alpha APIs are likely to, and it is often safe to assume they will be supported for “many subsequent versions” of Kubernetes.
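
In practice, these maturity levels map to feature gates: alpha features are off by default and have to be switched on per component. A hedged illustration of the flag syntax (the gate names are from this release; the flags are shown in isolation, not as complete command lines):

kube-apiserver --feature-gates=TokenRequest=true
kubelet --feature-gates=DevicePlugins=true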

As is usual, Kubernetes 1.10 includes a mix of features at each of these levels of maturity, and several of them merit special attention in this release.

API aggregation is stable

One feature that has graduated to stable in Kubernetes 1.10 is API aggregation, which allows Kubernetes community members to develop their own, custom API servers without making any changes to the core Kubernetes code repository. With this feature now stable, Kubernetes cluster admins can more confidently add these third-party APIs to their clusters in a production configuration. Our collective experience running API aggregation has helped identify issues and support the upstream’s ability to graduate it to stable.

This is a powerful capability that allows developers to provide highly customized behaviors to Kubernetes clusters that return very different kinds of resources than the core Kubernetes APIs provide. This can be especially valuable for use cases where Custom Resource Definitions (CRDs), the primary Kubernetes extension mechanism, may not be fully featured enough.

Customization is something that has been requested by the community and the CoreOS team, now as the Red Hat team, has been focused on architecting Kubernetes in a way to make this possible. In November 2016, we introduced the Operator pattern, software that encodes domain knowledge and extends the Kubernetes API, enabling users to more easily deploy, configure, and manage applications. With API aggregation now considered stable, developers have more ways to use Kubernetes in unique, custom ways.

Standardized storage support

Support for the Container Storage Interface (CSI) specification has graduated to beta in Kubernetes 1.10, and it’s one of the more significant enhancements of this release. The goal of CSI is to create a standardized way for storage vendors to write plugins that will work across multiple container orchestration tools – including but not limited to Kubernetes. Among the capabilities it aims to provide are standardized ways to dynamically provision and deprovision storage volumes, attach or detach volumes from nodes, mount or unmount volumes from nodes, and so on.

Kubernetes was one of the first container orchestration tools to support CSI and the code can now be viewed as fairly mature. As a result, Kubernetes users can expect more storage options for their clusters, as the amount of development and integration work required of vendors is reduced.
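
To give a flavor of what this looks like from the user's side (a hedged sketch: the provisioner name is a placeholder for whatever CSI driver a vendor ships, and the sizes are arbitrary), dynamic provisioning is still expressed through the familiar StorageClass and PersistentVolumeClaim objects:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-csi
provisioner: csi.example.com   # placeholder for a vendor's CSI driver name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-csi
  resources:
    requests:
      storage: 10Gi
EOF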

A replacement for kube-dns

Most Kubernetes clusters use internal DNS for service discovery, and the default provider for this has been kube-dns, a Go wrapper around dnsmasq, which late last year suffered a slew of vulnerabilities. Work is being done to switch the default provider from kube-dns to CoreDNS, an independent project overseen by the CNCF that’s written in Go. As of Kubernetes 1.10, this work has moved into beta.

CoreDNS is built around plugins, and its goals include simplicity, speed, flexibility, and ease of service discovery, all of which are in keeping with the goals of the broader Kubernetes community, including the drive to move more functions out of the core Kubernetes code base and into their own projects.

Expanding support for performance-sensitive workloads

Much work has been done across the community to support more performance-sensitive workloads in Kubernetes. For the 1.10 release, the DevicePlugins API has gone to beta. This is designed to provide the community a stable integration point for GPUs, high-performance networking interfaces, FPGAs, Infiniband, and many other types of devices, without requiring the vendor to add any custom code to the Kubernetes core repository.

Other advanced features have graduated to beta to better support CPU and memory sensitive applications. The static CPU pinning policy has graduated to beta to support CPU latency sensitive applications by pinning applications to particular cores. In addition, the cluster is able to schedule and isolate hugepages for those applications that demand them.
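
As a rough sketch of how a workload opts in to these features: a pod in the Guaranteed QoS class with whole-number CPU requests is eligible for exclusive cores under the static CPU manager policy, and hugepages are requested like any other resource. The image name below is a placeholder, and the node must have the static policy enabled and hugepages pre-allocated:

apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    resources:
      # Requests must equal limits (Guaranteed QoS) and CPU must be a whole
      # number for the static policy to pin the container to dedicated cores.
      requests:
        cpu: "2"
        memory: 1Gi
        hugepages-2Mi: 512Mi
      limits:
        cpu: "2"
        memory: 1Gi
        hugepages-2Mi: 512Mi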

Pod security policy

Containers are just isolated processes on a host. Disable that isolation and you’re not running a container anymore.

Kubernetes offers several ways to enable privileged access to a host. These options are intended for workloads with special requirements, such as network plugins and host agents, but in the wrong hands they can also be extremely effective attack vectors against a node. Flip the privileged flag on a pod spec and a container can access all of the host’s devices. A workload with host networking enabled can get around network policy, while a workload that requests a host mount can gain access to the kubelet’s on-disk credentials.

Pod security policies are designed to reduce this attack surface by restricting the kinds of pods that can be run in a given namespace. Over the past couple of releases, the community has worked to get the feature to a usable state, and in 1.10 the API moves to its own API group from the deprecated extensions/v1beta1.
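
As an illustration of the sort of policy involved, the sketch below (with illustrative values, not a hardened baseline) forbids privileged containers, host namespaces and hostPath volumes, and requires containers to run as a non-root user:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  # Only allow a safe subset of volume types (no hostPath)
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny

Remember that a policy only takes effect once the PodSecurityPolicy admission controller is enabled and the pod's service account is authorised (via RBAC) to use it.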

There is no true multi-tenancy without pod security policies. Over the next few releases, the community expects to see a measured rollout of PSP (similar to the RBAC rollout about a year ago starting in 1.6) as it attempts to improve the default security posture of Kubernetes. In the meantime, we encourage admins to enable this feature on their test clusters and begin experimenting with it. Your feedback can help improve Kubernetes security for everyone.

Adding identity to containers

Finally, one alpha-quality feature that’s worth calling attention to is the TokenRequest API, a replacement for today's service account tokens, which gets us on the road to being able to assign identities to individual containers. Currently, multiple instances of the same container all share the same identity. Identifying them individually should facilitate the creation of policies that impact specific containers – for example, a policy could be created wherein containers not running on the user’s most locked-down, secured nodes could be denied TLS credentials.

TokenRequest also enables credentials targeted at services other than the API server. This can let applications safely attest their identity to external services without handing over their Kubernetes credentials, and can harden use cases such as the Vault Kubernetes plugin.

As with alpha features, this is definitely a work in progress. But it is another important step toward hardening the security of Kubernetes clusters.

Onward and upward

As always, we congratulate the entire Kubernetes community on the hard work that went into making Kubernetes 1.10 one of the most feature-rich releases yet. A fast-moving open source project, Kubernetes continues to mature and adapt to meet the needs of its user base, thanks to the many contributors from across the ecosystem.

Work on Kubernetes 1.11 is already underway, with the release expected to ship in roughly three months. To have a look at what’s under development, or to get involved, join any of the many special interest groups (SIGs) where community collaboration takes place. Red Hat and the CoreOS team are proud to work alongside the other members of the upstream community. Join the upstream community contributors, Cole Mickens and Stefan Schimanski, for a briefing about what’s new on March 28.

Source

Health checking gRPC servers on Kubernetes

Author: Ahmet Alp Balkan (Google)

gRPC is on its way to becoming the lingua franca for communication between cloud-native microservices. If you are deploying gRPC applications to Kubernetes today, you may be wondering about the best way to configure health checks. In this article, we will talk about grpc-health-probe, a Kubernetes-native way to health check gRPC apps.

If you’re unfamiliar, Kubernetes health checks (liveness and readiness probes) are what keep your applications available while you’re sleeping. They detect unresponsive pods, mark them unhealthy, and cause these pods to be restarted or rescheduled.

Kubernetes does not support gRPC health checks natively. This leaves gRPC developers with the following three approaches when they deploy to Kubernetes:

[Image: options for health checking gRPC on Kubernetes today]

  1. httpGet probe: Cannot be natively used with gRPC. You need to refactor your app to serve both gRPC and HTTP/1.1 protocols (on different port numbers).
  2. tcpSocket probe: Opening a socket to a gRPC server is not meaningful, since it cannot read the response body.
  3. exec probe: This invokes a program in a container’s ecosystem periodically. In the case of gRPC, this means you implement a health RPC yourself, then write and ship a client tool with your container.

Can we do better? Absolutely.

Introducing “grpc-health-probe”

To standardize the “exec probe” approach mentioned above, we need:

  • a standard health check “protocol” that can be implemented in any gRPC server easily.
  • a standard health check “tool” that can query the health protocol easily.

Thankfully, gRPC has a standard health checking protocol. It can be used easily from any language. Generated code and the utilities for setting the health status are shipped in nearly all language implementations of gRPC.

If you implement this health check protocol in your gRPC apps, you can then use a standard/common tool to invoke this Check() method to determine server status.

The next thing you need is the “standard tool”, and it’s the grpc-health-probe.

With this tool, you can use the same health check configuration in all your gRPC
applications. This approach requires you to:

  1. Find the gRPC “health” module in your favorite language and start using it (for example, the Go library).
  2. Ship the grpc_health_probe binary in your container.
  3. Configure the Kubernetes “exec” probe to invoke the “grpc_health_probe” tool in the container (see the sketch below).

In this case, executing “grpc_health_probe” will call your gRPC server over
localhost, since they are in the same pod.
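
For step 3, the probe configuration ends up looking something like the sketch below; the image name, port, and binary path are assumptions, and the binary simply needs to be present on the container's filesystem:

spec:
  containers:
  - name: server
    image: example.com/grpc-server:1.0   # placeholder image
    ports:
    - containerPort: 5000
    readinessProbe:
      exec:
        command: ["/bin/grpc_health_probe", "-addr=:5000"]
      initialDelaySeconds: 5
    livenessProbe:
      exec:
        command: ["/bin/grpc_health_probe", "-addr=:5000"]
      initialDelaySeconds: 10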

What’s next

The grpc-health-probe project is still in its early days and it needs your feedback. It supports a variety of features, such as communicating with TLS servers and configurable connection/RPC timeouts.

If you are running a gRPC server on Kubernetes today, try using the gRPC Health Protocol and the grpc-health-probe in your deployments, and give feedback.

Source

Hidden Gems – Jetstack Blog

27/Mar 2018

By Matthew Bates

Coming up to four years since its initial launch, Kubernetes is now at version 1.10. Congratulations to the many contributors and the release team on another excellent release!

At Jetstack, we push Kubernetes to its limits, whether engaging with customers on their own K8s projects, training K8s users of all levels, or contributing our open source developments to the K8s community. We follow the project day-to-day, and track its development closely.

You can read all about the headline features of 1.10 at the official blog post. But, in keeping with our series of release gem posts, we asked our team of engineers to share a feature of 1.10 that they find particularly exciting, and that they’ve been watching and waiting for (or have even been involved in!)

Device Plugins

Matt Turner

The Device Plugin system is now beta in Kubernetes 1.10. This essentially allows Nodes to be sized along extra, arbitrary dimensions. These represent any special hardware they might have over and above CPU and RAM capacity. For example, a Node might specify that it has 3 GPUs and a high-performance NIC. A Pod could then request one of those GPUs through the standard resources stanza, causing it to be scheduled on a node with a free one. A system of plugins and APIs handles advertising and initialising these resources before they are handed over to Pods.

NVIDIA has already made a plugin for managing its GPUs. A request for 2 GPUs would look like this:

resources:
  limits:
    nvidia.com/gpu: 2

CoreDNS

Charlie Egan

1.10 makes cluster DNS a pluggable component. This makes it easier to use other tools for service discovery. One such option is CoreDNS, a fellow CNCF project, which has a native ‘plugin’ that implements the Kubernetes service discovery spec. It also runs as a single process that supports caching and health checks (meaning there’s no need for the dnsmasq or healthz containers in the DNS pod).

The CoreDNS plugin was promoted to beta in 1.10 and will eventually become the Kubernetes default. Read more about using CoreDNS here.

Pids per Pod limit

Luke Addison

A new Alpha feature in 1.10 is the ability to control the total number of PIDs per Pod. The Linux kernel provides the process number controller which can be attached to a cgroup hierarchy in order to stop any new tasks from being created after a specified limit is reached. This kernel feature is now exposed to cluster operators. This is vital for avoiding malicious or accidental fork bombs which can devastate clusters.

In order to enable this feature, operators should set SupportPodPidsLimit=true in the kubelet's --feature-gates parameter. The feature currently only allows operators to define a single maximum limit per Node by specifying the --pod-max-pids flag on the kubelet. This may be a limitation for some operators, as a static limit cannot work for all workloads and there may be legitimate use cases for exceeding it. For this reason, we may see the addition of new flags and fields in the future to make this limit more dynamic; one possible addition is the ability for operators to specify a low and a high PID limit, allowing users to choose between them by setting a boolean field on the Pod spec.

It will be very exciting to see how this feature develops in subsequent releases as it provides another important isolation mechanism for workloads.

Shared Process Namespace

Louis Taylor

1.10 adds alpha support for shared process namespaces in a pod. To try it out, operators must enable it with the PodShareProcessNamespace feature flag set on both the apiserver and kubelet.

When enabled, users can set shareProcessNamespace on a pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pid
spec:
  shareProcessNamespace: true

Sharing the PID namespace inside a pod has a few effects. Most prominently, processes inside containers are visible to all other containers in the pod, and signals can be sent to processes across container boundaries. This makes sidecar containers more powerful (for example, sending a SIGHUP signal to reload configuration for an application running in a separate container is now possible).

CRD Sub-resources

Josh Van Leeuwen

With 1.10 comes a new alpha feature that adds subresources to Custom Resources, namely /status and /scale. Just like other resource types, these provide separate API endpoints for modifying their contents. This not only means that your resource can now interact with systems such as the HorizontalPodAutoscaler, but it also enables finer-grained access control over the user-facing spec and the controller-managed status data. This is a great feature for ensuring users cannot change or destroy resource state that your custom controllers rely on.

To enable both the /status and /scale subresources, include the following in your Custom Resource Definition:

subresources:
  status: {}
  scale:
    specReplicasPath: .spec.replicas
    statusReplicasPath: .status.replicas
    labelSelectorPath: .status.labelSelector

External Custom Metrics

Matt Bates

The first version of HPA (v1) was only able to scale based on observed CPU utilisation. Although useful for some cases, CPU is not always the most suitable or relevant metric to autoscale an application. HPA v2, introduced in 1.6, is able to scale based on custom metrics. Read more about Resource Metrics API, the Custom Metrics API and HPA v2 in this blog post in our Kubernetes 1.8 Hidden Gems series.

Custom metrics can describe metrics from the pods that are targeted by the HPA, resources (e.g. CPU or memory), or objects (say, a Service or Ingress). But these options are not suited to metrics that relate to infrastructure outside of a cluster. In a recent customer engagement, there was a desire to scale pods based on Google Cloud Pub/Sub queue length, for example.

In 1.10, there is now an extension (in alpha) to the HPA v2 API to support external metrics. So, for example, we may have an HPA to serve the aforementioned Pub/Sub autoscaling requirement that looks like the following:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
spec:
  scaleTargetRef:
    kind: ReplicationController
    name: Worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: queue_messages_ready
      metricSelector:
        matchLabels:
          queue: worker_tasks
      targetAverageValue: 30

This HPA would require an add-on API server, registered as an APIService, which implements the external metrics API and queries Pub/Sub for the metric.

Custom kubectl ‘get’ and ‘describe’ output

James Munnelly

Kubernetes 1.10 brings a small but important change to the way the output for kubectl get and kubectl describe is generated.

In the past, third party extensions to Kubernetes like Cert-Manager and Navigator would always display something like this:

$ kubectl get certificates
NAME       AGE
prod-tls   4h

With this change however, we can configure our extensions to display more helpful output when querying our custom resource types. For example:

$ kubectl get certificates
NAME       STATUS   EXPIRY       ISSUER
prod-tls   Valid    2018-05-03   letsencrypt-prod

$ kubectl get elasticsearchclusters
NAME      HEALTH   LEADERS   DATA   INGEST
logging   Green    3/3       4/4    2/2

This brings a native feel to API extensions, and provides users an easy way to quickly identify meaningful data points about their resources at a glance.

Volume Scheduling and Local Storage

Richard Wall

We’re excited to see that local storage is promoted to a beta API group and volume scheduling is enabled by default in 1.10.

There are a couple of related API changes:

  1. PV has a new PersistentVolume.Spec.NodeAffinity field, whose value should select the node that holds the local volume (typically via its kubernetes.io/hostname label).
  2. StorageClass has a new StorageClass.volumeBindingMode: WaitForFirstConsumer option, which makes Kubernetes delay the binding of the volume until it has considered and resolved all of the pod's scheduling constraints, including the constraints on the PVs that match the volume claim. Both are illustrated in the sketch below.
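
A minimal sketch of how the two fit together, assuming one local SSD mounted at /mnt/disks/ssd0 on a node named node-1 (all names, sizes and paths here are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
# No dynamic provisioning for local volumes; PVs are created ahead of time
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-node-1-0
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0
  # Ties the PV to the node that physically holds the disk, so the scheduler
  # can factor its location into pod placement
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1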

We’re already thinking about how we can use these features in a Navigator-managed Cassandra database on Kubernetes. With some tweaks to the Navigator code, it will now be much simpler to run C* nodes with their commit log and sstables on dedicated local SSDs. If you add a PV for each available SSD, and if the PV has the necessary NodeAffinity configuration, Kubernetes will factor the locations of those PVs into its scheduling decisions and ensure that C* pods are scheduled to nodes with an unused SSD. We’ll write more about this in an upcoming blog post!

PS: I’d recommend reading Provisioning Kubernetes Local Persistent Volumes, which describes a really elegant mechanism for automatically discovering and preparing local volumes using a DaemonSet and the experimental local PV provisioner.

Source