Deploying configurable frontend web application containers

Sep 19, 2018

by José Moreira

The approach for deploying a containerised application typically involves building a Docker container image from a Dockerfile once and deploying that same image across several deployment environments (development, staging, testing, production).

If following security best practices, each deployment environment will require a different configuration (data storage authentication credentials, external API URLs), and the configuration is injected into the application inside the container through environment variables or configuration files. Our Hamish Hutchings takes a deeper look at the 12-factor app in this blog post. The set of configuration profiles might also not be known in advance, for example when the web application should be ready to be deployed to either public or private cloud (client premises). It is also common for several configuration profiles to be committed to source code and for the required profile to be selected at build time.

The structure of a web application project typically contains a ‘src’ directory with source code and executing npm run-script build triggers the Webpack asset build pipeline. Final asset bundles (HTML, JS, CSS, graphics, and fonts) are written to a dist directory and contents are either uploaded to a CDN or served with a web server (NGINX, Apache, Caddy, etc).

For context in this article, let's assume the web application is a single-page frontend application which connects to a backend REST API to fetch data, and that the API endpoint will change across deployment environments. The backend API endpoint should therefore be fully configurable, the configuration approach should support both server deployment and local development, and the assets are served by NGINX.

Deploying client-side web application containers requires a different configuration strategy compared to server-side application containers. Given the nature of client-side web applications, there is no native executable that can read environment variables or configuration files at runtime: the runtime is the client-side web browser, so configuration has to be hard-coded in the JavaScript source, either by hard-coding values during the asset build phase or by hard-coding rules (a rule would be to deduce the current environment from the domain name, e.g. 'staging.app.com').
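To illustrate the "hard-coded rules" variant, a minimal sketch could pick the environment from the domain the app is served from. The hostnames and environment names below are invented for this example and are not taken from the project:

// Minimal sketch of the "hard-coded rules" approach: the environment is
// deduced from the hostname the browser loaded the app from.
const HOSTNAME_RULES = {
  'staging.app.com': 'staging',   // hypothetical staging domain
  'app.com': 'production'         // hypothetical production domain
}

export function detectEnvironment () {
  // window.location is only available in the browser runtime
  return HOSTNAME_RULES[window.location.hostname] || 'development'
}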

There is one OS process where reading values from environment variables is still relevant to configuration: the Node.js asset build process. This is helpful for configuring the app for local development with auto reload.
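As an illustration of that build-time path (this fragment is not from the project's actual configuration), webpack's DefinePlugin is a common way to bake environment variables into the bundle, so that process.env lookups in client code resolve to literal values:

// webpack.config.js (fragment) - a common way to expose build-time
// environment variables to client-side code; DefinePlugin replaces the
// process.env.* expressions with literal values during the build.
const webpack = require('webpack')

module.exports = {
  // ...rest of the webpack configuration
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV || 'development'),
      'process.env.DOOM_ENGINE_SERVICE_URL': JSON.stringify(process.env.DOOM_ENGINE_SERVICE_URL)
    })
  ]
}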

For the configuration of the webapp across several different environments, there are a few solutions:

  1. Rebuild the Webpack assets on container start during each deployment, with the proper configuration, on the destination server node(s):
    • Adds to deployment time. Depending on deployment rate and size of the project, the deployment time overhead might be considerable.
    • Is prone to build failures at the end of the deployment pipeline, even if the image build has been tested before.
    • The build phase can fail, for example due to network conditions, although this can probably be minimised by building on top of a Docker image that already has all the dependencies installed.
    • Might affect rollback speed.
  2. Build one image per environment (again with hardcoded configuration):
    • Similar downsides to solution #1, except that it also adds clutter to the Docker registry/daemon.
  3. Build the image once and rewrite only the configuration bits during each deployment to the target environment:
    • The image is built once and run everywhere, which aligns with the configuration pattern of other types of applications and is good for normalisation.
    • Scripts that rewrite configuration inside the container can be prone to failure too, but they are testable.

I believe solution #1 is viable and, in some cases, simpler. It is probably required if the root path where the web application is hosted needs to change dynamically, e.g. from '/' to '/app', as build pipelines can hardcode the base path of fonts and other graphics into CSS files, which is a lot harder to change after the build.

Solution #3 is the approach I have been implementing for the projects where I have been responsible for containerising web applications (both at my current and previous roles). It is also the solution implemented by my friend Israel, who helped me out the first time around, and it is the approach described in this article.

Application-level configuration

Although it has a few moving parts, the plan for solution #3 is rather straightforward:

For code samples, I will use my fork of Adam Sandor's micro-service Doom web client, a Vue.js application which I have been refactoring to follow this technique. The web client communicates with two micro-services through HTTP APIs, the state and the engine, whose endpoints I would like to be configurable without rebuilding the assets.

Single Page Applications (SPAs) have a single "index.html" as the entry point to the app. During deployment, meta tags with optional configuration defaults are added to the markup, from which the application can read configuration values. Script tags would also work, but I found meta tags simple enough for key-value pairs.










<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <meta property="DOOM_STATE_SERVICE_URL" content="http://localhost:8081/" />
    <meta property="DOOM_ENGINE_SERVICE_URL" content="http://localhost:8082/" />
    <link rel="icon" href="./favicon.ico">
    <title>frontend</title>
  </head>
  <body>
    <noscript>
      <strong>We're sorry but frontend doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>



For reading configuration values from meta tags (and other sources), I wrote a simple Javascript module (“/src/config.loader.js”):










/**
 * Get config value with precedence:
 * - check `process.env`
 * - check current web page meta tags
 * @param key Configuration key name
 */
function getConfigValue (key) {
  let value = null
  if (process.env && process.env[`${key}`] !== undefined) {
    // get env var value
    value = process.env[`${key}`]
  } else {
    // get value from meta tag
    return getMetaValue(key)
  }
  return value
}

/**
 * Get value from HTML meta tag
 */
function getMetaValue (key) {
  let value = null
  const node = document.querySelector(`meta[property=${key}]`)
  if (node !== null) {
    value = node.content
  }
  return value
}

export default { getConfigValue, getMetaValue }



This module reads configuration "keys" by looking them up first in the available environment variables ("process.env"), so that configuration can be overridden with environment variables when developing locally (webpack dev server), and then falls back to the current document's meta tags.

I also abstracted the configuration layer by adding a “src/config/index.js” that exports an object with the proper values:










import loader from './loader'

export default {
  DOOM_STATE_SERVICE_URL: loader.getConfigValue('DOOM_STATE_SERVICE_URL'),
  DOOM_ENGINE_SERVICE_URL: loader.getConfigValue('DOOM_ENGINE_SERVICE_URL')
}



which can then be utilised in the main application by importing the “src/config” module and accessing the configuration keys transparently:










import config from './config'

console.log(config.DOOM_ENGINE_SERVICE_URL)



There is some room for improvement in the current code, as it is not DRY (the list of required configuration variables is duplicated in several places in the project), and I've considered writing a simple JavaScript package to simplify this approach, as I'm not aware of one that already exists.

Writing the Docker & Docker Compose files

The Dockerfile for the SPA adds the source code to the container's '/app' directory, installs dependencies and runs a production webpack build ("NODE_ENV=production"). Asset bundles are written to the "/app/dist" directory of the image:










FROM node:8.11.4-jessie

RUN mkdir /app
WORKDIR /app

COPY package.json .
RUN npm install

COPY . .

ENV NODE_ENV production
RUN npm run build

CMD npm run dev



The docker image contains a Node.js script (“/app/bin/rewrite-config.js”) which copies “/app/dist” assets to another target directory before rewriting the configuration. Assets will be served by NGINX and therefore copied to a directory that NGINX can serve, in this case, a shared (persistent) volume. Source and destination directories can be defined through container environment variables:










#!/usr/bin/env node
const cheerio = require('cheerio')
const copy = require('recursive-copy')
const fs = require('fs')
const rimraf = require('rimraf')

const DIST_DIR = process.env.DIST_DIR
const WWW_DIR = process.env.WWW_DIR
const DOOM_STATE_SERVICE_URL = process.env.DOOM_STATE_SERVICE_URL
const DOOM_ENGINE_SERVICE_URL = process.env.DOOM_ENGINE_SERVICE_URL

// - Delete existing files from public directory
// - Copy `dist` assets to public directory
// - Rewrite config meta tags on public directory `index.html`
rimraf(WWW_DIR + '/*', {}, function () {
  copy(`${DIST_DIR}`, `${WWW_DIR}`, {}, function (error, results) {
    if (error) {
      console.error('Copy failed: ' + error)
    } else {
      console.info('Copied ' + results.length + ' files')
      rewriteIndexHTML(`${WWW_DIR}/index.html`, {
        DOOM_STATE_SERVICE_URL: DOOM_STATE_SERVICE_URL,
        DOOM_ENGINE_SERVICE_URL: DOOM_ENGINE_SERVICE_URL
      })
    }
  })
})

/**
 * Rewrite meta tag config values in "index.html".
 * @param file
 * @param values
 */
function rewriteIndexHTML (file, values) {
  console.info(`Reading '${file}'`)
  fs.readFile(file, 'utf8', function (error, data) {
    if (!error) {
      const $ = cheerio.load(data)
      console.info(`Rewriting values '${values}'`)
      for (let [key, value] of Object.entries(values)) {
        console.log(key, value)
        $(`[property=${key}]`).attr('content', value)
      }
      fs.writeFile(file, $.html(), function (error) {
        if (!error) {
          console.info(`Wrote '${file}'`)
        } else {
          console.error(error)
        }
      })
    } else {
      console.error(error)
    }
  })
}



The script utilises CheerioJS to read the "index.html" into memory, replaces the values of the meta tags according to environment variables, and overwrites "index.html". Although "sed" would have been sufficient for search and replace, I chose CheerioJS as a more reliable solution that also allows expanding into more complex solutions like script injection.
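As a hint of what such an extension could look like (this helper is hypothetical and not part of the project), the same Cheerio document could be used to inject the whole configuration as an inline script instead of individual meta tags:

// Hypothetical extension: inject a global config object as an inline script tag.
const cheerio = require('cheerio')

function injectConfigScript (html, values) {
  const $ = cheerio.load(html)
  $('head').append(`<script>window.__APP_CONFIG__ = ${JSON.stringify(values)}</script>`)
  return $.html()
}

module.exports = injectConfigScript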

Deployment with Kubernetes

Let’s jump into the Kubernetes Deployment manifest:










# Source: doom-client/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: doom-client
  labels:
    name: doom-client
spec:
  replicas: 1
  selector:
    matchLabels:
      name: doom-client
  template:
    metadata:
      labels:
        name: doom-client
    spec:
      initContainers:
      - name: doom-client
        image: "doom-client:latest"
        command: ["/app/bin/rewrite-config.js"]
        imagePullPolicy: IfNotPresent
        env:
        - name: DIST_DIR
          value: "/app/dist"
        - name: WWW_DIR
          value: "/tmp/www"
        - name: DOOM_ENGINE_SERVICE_URL
          value: "http://localhost:8081/"
        - name: DOOM_STATE_SERVICE_URL
          value: "http://localhost:8082/"
        volumeMounts:
        - name: www-data
          mountPath: /tmp/www
      containers:
      - name: nginx
        image: nginx:1.14
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: www-data
          mountPath: /usr/share/nginx/html
        - name: doom-client-nginx-vol
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: www-data
        emptyDir: {}
      - name: doom-client-nginx-vol
        configMap:
          name: doom-client-nginx
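The manifest above also mounts a ConfigMap named "doom-client-nginx" into "/etc/nginx/conf.d". Its contents are not shown in the article, but a minimal sketch (assumed, not the project's actual file) could be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: doom-client-nginx
data:
  default.conf: |
    server {
      listen 80;
      location / {
        root  /usr/share/nginx/html;
        index index.html;
        # serve index.html for client-side routes (assumed SPA routing)
        try_files $uri $uri/ /index.html;
      }
    }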



The Deployment manifest defines an "initContainer" which executes the "rewrite-config.js" Node.js script to prepare and update the shared storage volume with the asset bundles. It also defines an NGINX container for serving our static assets. Finally, it creates a shared volume which is mounted in both of the above containers. In the NGINX container the mount point is "/usr/share/nginx/html", while in the frontend container it is "/tmp/www", to avoid creating extra directories. "/tmp/www" is the directory where the Node.js script copies the asset bundles to and rewrites the "index.html".

Local development with Docker Compose

The final piece of our puzzle is the local Docker Compose development environment. I’ve included several services that allow both developing the web application with the development server and testing the application when serving production static assets through NGINX. It is perfectly possible to separate these services into several YAML files (“docker-compose.yaml”, “docker-compose.dev.yaml” and “docker-compose.prod.yaml”) and do some composition but I’ve added a single file for the sake of simplicity.

Apart from the “doom-state” and “doom-engine” services which are our backend APIs, the “ui” service starts the webpack development server with “npm run dev” and the “ui-deployment” service, which runs a container based on the same Dockerfile, runs the configuration deployment script. The “nginx” service serves static assets from a persistent volume (“www-data”) which is also mounted on the “ui-deployment” script.










# docker-compose.yaml
version: '3'
services:
  ui:
    build: .
    command: ["npm", "run", "dev"]
    ports:
      - "8080:8080"
    environment:
      - HOST=0.0.0.0
      - PORT=8080
      - NODE_ENV=development
      - DOOM_ENGINE_SERVICE_URL=http://localhost:8081/
      - DOOM_STATE_SERVICE_URL=http://localhost:8082/
    volumes:
      - .:/app
      # bind volumes inside the container so the source mount does not shadow image dirs
      - /app/node_modules
      - /app/dist
  doom-engine:
    image: microservice-doom/doom-engine:latest
    environment:
      - DOOM_STATE_SERVICE_URL=http://doom-state:8080/
      - DOOM_STATE_SERVICE_PASSWORD=enginepwd
    ports:
      - "8081:8080"
  doom-state:
    image: microservice-doom/doom-state:latest
    ports:
      - "8082:8080"
  # run production deployment script
  ui-deployment:
    build: .
    command: ["/app/bin/rewrite-config.js"]
    environment:
      - NODE_ENV=production
      - DIST_DIR=/app/dist
      - WWW_DIR=/tmp/www
      - DOOM_ENGINE_SERVICE_URL=http://localhost:8081/
      - DOOM_STATE_SERVICE_URL=http://localhost:8082/
    volumes:
      - .:/app
      # bind volumes inside the container so the source mount does not shadow image dirs
      - /app/node_modules
      - /app/dist
      # shared NGINX static files dir
      - www-data:/tmp/www
    depends_on:
      - nginx
  # serve docker image production build with nginx
  nginx:
    image: nginx:1.14
    ports:
      - "8090:80"
    volumes:
      - www-data:/usr/share/nginx/html
volumes:
  www-data:



Since the webpack dev server is a long-running process which also hot-reloads the app on source code changes, the Node.js config module will read configuration from environment variables, based on the precedence described above. Also, although source code changes trigger client-side updates without restarts (hot reload), they will not update the production build; that has to be done manually, but it is straightforward with docker-compose build && docker-compose up.

Summarizing, although there are a few improvement points, including in the source code I wrote for this implementation, this setup has been working pretty well for the last few projects and is flexible enough to also support deployments to CDNs, which is as simple as adding a step that pushes assets to the cloud instead of to a shared volume served by NGINX.

If you have any comments feel free to get in touch on Twitter or comment under the article.


Source

Kubernetes versus Docker: What’s the difference?

 


Docker vs Kubernetes: The Journey from Docker to Kubernetes

The need to deploy applications from one computing environment to another quickly, easily, and reliably has become a critical part of enterprises' business requirements and DevOps teams' daily workflows.

It's unsurprising, then, that container technologies, which make application deployment and management easier for teams of all sizes, have risen dramatically in recent years. At the same time, however, virtual machines (VMs) as computing resources have reached their peak use in virtualized data centers. Since VMs existed long before containers, you may wonder what the need is for containers and why they have become so popular.

The Benefits and Limitations of Virtual Machines

Virtual machines allow you to run a full copy of an operating system on top of virtualized hardware as if it were a separate machine. In cloud computing, the physical hardware of a bare metal server is virtualized and shared between virtual machines running on a host machine in a data center, with the help of a hypervisor (i.e. a virtual machine manager).

Even though virtual machines bring a great deal of advantages, such as running different operating systems or versions, VMs can consume a lot of system resources and take longer to boot. Containers, on the other hand, share the same operating system kernel with collocated containers, each one running as an isolated process. Containers are a lightweight alternative, taking up less space (MBs) and provisioning rapidly (milliseconds), as opposed to VMs' slow boot times (minutes) and larger storage requirements (GBs). This allows containers to operate at an unprecedented scale and maximize the number of applications running on a minimum number of servers. For all these reasons, containerization has seen dramatic growth in recent years across many enterprise software projects.

Since its initial release in 2013, Docker has become the most popular container technology worldwide, despite a host of other options, including RKT from CoreOS, LXC, LXD from Canonical, OpenVZ, and Windows Containers.

However, Docker technology alone is not enough to reduce the complexity of managing containerized applications, as software projects get more and more complex and require the use of tens of thousands of Docker containers. To address these larger container challenges, a substantial number of container orchestration systems, such as Kubernetes and Docker Swarm, exploded onto the scene shortly after the release of Docker.

There has been some confusion surrounding Docker and Kubernetes for a while: what are they, what are they not, where are they used, and why are both needed?

This post aims to explain the role of each technology and how each technology helps companies ease their software development tasks. Let’s use a made-up company, NetPly (sounds familiar?), as a case study to highlight the issues we are addressing.

NetPly is an online, on-demand entertainment movie streaming company with 30 million members in over 100 countries. NetPly delivers video streams to your favorite devices and provides personalized movie recommendations to its customers based on their previous activities, such as sharing or rating a movie. To run its application globally, at scale, and provide quality of service to its customers, NetPly runs 15,000 production servers worldwide and follows an agile methodology to deploy new features and bug fixes to the production environment at a fast clip.

However, NetPly has been struggling with two fundamental issues in their software development lifecycle:

Issue 1- Code that runs perfectly in a development box, sometimes fails on test and/or production environments. Therefore, NetPly would like to keep code and configuration consistent across their development, test, and production environments to reduce the issues arising from application hosting environments.

Issue 2- Viewers experience a lot of lags as well as poor quality and degraded performance for video streams during weekends, nights, and holidays, when incoming requests spike. To resolve this potentially-devastating issue, NetPly would like to use load-balancing and auto scaling techniques and automatically adjust the resource capacity (e.g. increase or decrease number of computing resources) to maintain application availability, provide stable application performance, and optimize operational costs as computing demand increases or decreases. These requests also require NetPly to manage the complexity of computing resources and the connections between the flood of these resources in production.

Docker can be used to resolve Issue 1 by following a container-based approach; in other words, packaging application code along with all of its dependencies, such as libraries, files, and necessary configurations, together in a Docker image.

Docker is an open-source, operating-system-level containerization platform with a lightweight application engine to build, run, and distribute applications in Docker containers that run nearly anywhere. Docker containers are a portable and lightweight alternative to virtual machines, eliminating the waste of resources and the longer boot times of the virtual-machine approach. Docker containers are created from Docker images, which consist of a prebuilt application stack required to launch the applications inside the container.

With that explanation of a Docker container in mind, let's go back to our successful company that is under duress: NetPly. As more users simultaneously request movies to watch on the site, NetPly needs to scale up more Docker containers at a reasonably fast rate and scale down when the traffic lowers. However, Docker alone is not capable of taking care of this job, and writing simple shell scripts to scale the number of Docker containers up or down by monitoring the network traffic or the number of requests that hit the server would not be a viable or practical solution.

As the number of containers increases from tens to hundreds to thousands, and the NetPly IT team starts managing fleets of containers across multiple heterogeneous host machines, it becomes a nightmare to execute Docker commands like "docker run", "docker kill", and "docker network" manually.

Right at the point where the team starts launching containers, wiring them together, ensuring high availability even when a host goes down, and distributing the incoming traffic to the appropriate containers, the team wishes they had something that handled all these manual tasks with no or minimal intervention. Exit human, enter program.

To sum up: Docker by itself is not enough to handle these resource demands at scale. Simple shell commands alone are not sufficient to handle tasks for a tremendous number of containers on a cluster of bare metal or virtual servers. Therefore, another solution is needed to handle all these hurdles for the NetPly team.

This is where the magic starts with Kubernetes. Kubernetes is a container orchestration engine (COE), originally developed by Google, and it can be used to resolve NetPly's Issue 2. Kubernetes allows you to handle fleets of containers. It automatically manages the deployment, scaling and networking of containers, as well as container failover, launching replacement containers with ease.

The following are some of the fundamental features of Kubernetes.

  • Load balancing
  • Configuration management
  • Automatic IP assignment
  • Container scheduling
  • Health checks and self healing
  • Storage management
  • Auto rollback and rollout
  • Auto scaling

Container Orchestration Alternatives

Although Kubernetes seems to solve the challenges our NetPly team faces, there are a good number of container management alternatives to Kubernetes out there.

Docker Swarm, Marathon on Apache Mesos, and Nomad are all container orchestration engines that can also be used for managing your fleet of containers.

Why choose anything other than Kubernetes? Although Kubernetes has a lot of great qualities, it has challenges too. The most arresting issues people face with Kubernetes are:

1) the steep learning curve to its commands;

2) setting Kubernetes up for different operating systems.

As opposed to Kubernetes, Docker Swarm uses the Docker CLI to manage all container services. Docker Swarm is easy to set up, has fewer commands to learn to get started rapidly, and is cheaper to train employees on. A drawback of Docker Swarm is that it binds you to the limitations of the Docker API.

Another option is the Marathon framework on Apache Mesos. It’s extremely fault-tolerant and scalable for thousands of servers. However, it may be too complicated to set up and manage small clusters with Marathon, making it impractical for many teams.

Each container management tool comes with its own set of advantages and disadvantages. However, Kubernetes, with its heritage in Google's Borg system, has been widely adopted and supported by a large community as well as by industry for years, and has become the most popular container management solution among the players. With the power of both Docker and Kubernetes, the popularity of these technologies looks set to keep rising, with adoption by ever larger communities.

In our next article in this series, we will compare in more depth Kubernetes and Docker Swarm.

Faruk Caglar, PhD

Cloud Computing Researcher and Solution Architect

Faruk Caglar received his PhD from the Electrical Engineering and Computer Science Department at Vanderbilt University. He is a researcher in the fields of Cloud Computing, Big Data, Internet of Things (IoT) as well as Machine Learning and solution architect for cloud-based applications. He has published several scientific papers and has been serving as reviewer at peer-reviewed journals and conferences. He also has been providing professional consultancy in his research field.

Source

The Unexpected Kubernetes: Part 2: Volume and Many Ways of Persisting Data

Recap

Last time we talked about PV, PVC, Storage Class and Provisioner.

To quickly recap:

  1. Originally, PV was designed to be a piece of storage pre-allocated by an administrator. After the introduction of Storage Class and Provisioner, though, users are now able to dynamically provision PVs.

  2. PVC is a request for a PV. When used with Storage Class, it will trigger the dynamic provisioning of a matching PV.

  3. PV and PVC always have a one-to-one mapping.

  4. Provisioner is a plugin used to provision PV for users. It helps to remove the administrator from the critical path of creating a workload that needs persistent storage.

  5. Storage Class is a classification of PVs. The PV in the same Storage Class can share some properties. In most cases, while being used with a Provisioner, it can be seen as the Provisioner with predefined properties. So when users request it, it can dynamically provision PVs with those predefined properties.

But those are not the only ways to use persistent storage in Kubernetes.


Volume

In the previous article, I mentioned that there is also a concept of Volume in Kubernetes. In order to differentiate Volume from Persistent Volume, people sometimes call it In-line Volume, or Ephemeral Volume.

Let me quote the definition of Volume here:

A Kubernetes volume … has an explicit lifetime – the same as the Pod that encloses it. Consequently, a volume outlives any Containers that run within the Pod, and data is preserved across Container restarts. Of course, when a Pod ceases to exist, the volume will cease to exist, too. Perhaps more importantly than this, Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.

At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.

One important property of Volume is that it has the same lifecycle as the Pod it belongs to. It will be gone if the Pod is gone. That’s different from Persistent Volume, which will continue to exist in the system until users delete it. Volume can also be used to share data between containers inside the same Pod, but this isn’t the primary use case, since users normally only have one container per Pod.

So it’s easier to treat Volume as a property of Pod, instead of as a standalone object. As the definition said, it represents a directory inside the pod, and Volume type defines what’s in the directory. For example, Config Map Volume type will create configuration files from the API server in the Volume directory; PVC Volume type will mount the filesystem from the corresponding PV in the directory, etc. In fact, Volume is almost the only way to use storage natively inside Pod.

It's easy to get confused between Volume, Persistent Volume and Persistent Volume Claim. So if you imagine a data flow, it looks like this: PV -> PVC -> Volume. The PV contains the real data and is bound to a PVC, which is finally used as a Volume in a Pod.
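A minimal sketch of the last hop in that flow (the names are illustrative, not from the original post): a Pod consuming an existing PVC as a Volume.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc   # an already bound PVC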

However, Volume is also confusing in the sense that besides PVC, it can be backed by pretty much any type of storage supported by Kubernetes directly.

Remember we already have Persistent Volume, which supports different kinds of storage solutions. We also have Provisioner, which supports the similar (but not exactly the same) set of solutions. And we have different types of Volume as well.

So, how are they different? And how to choose between them?

Many ways of persisting data

Take AWS EBS for example. Let’s start counting the ways of persisting data in Kubernetes.

Volume Way

awsElasticBlockStore is a Volume type.

You can create a Pod, specify a volume as awsElasticBlockStore, specify the volumeID, then use your existing EBS volume in the Pod.

The EBS volume must exist before you use it with Volume directly.
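For illustration (the Pod name and volume ID are placeholders), the Volume Way looks roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: ebs-volume-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: ebs-data
      mountPath: /data
  volumes:
  - name: ebs-data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0   # placeholder for an existing EBS volume
      fsType: ext4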

PV way

AWSElasticBlockStore is also a PV type.

So you can create a PV that represents an EBS volume (assuming you have the privilege to do that), then create a PVC bound to it. Finally, use it in your Pod by specifying the PVC as a volume.

Similar to Volume Way, EBS volume must exist before you create the PV.
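A rough sketch of the PV Way (the sizes and the volume ID are placeholders): a manually created PV backed by the existing EBS volume, and a PVC bound to it.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder for an existing EBS volume
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: ebs-pv          # bind explicitly to the PV above
  resources:
    requests:
      storage: 10Gi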

Provisioner way

kubernetes.io/aws-ebs is also a Kubernetes built-in Provisioner for EBS.

You can create a Storage Class with Provisioner kubernetes.io/aws-ebs, then create a PVC using the Storage Class. Kubernetes will automatically create the matching PV for you. Then you can use it in your Pod by specifying the PVC as a volume.

In this case, you don’t need to create EBS volume before you use it. The EBS Provisioner will create it for you.
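Sketched out (the class name and parameters are illustrative), the Provisioner Way only needs a StorageClass and a PVC; the PV and the underlying EBS volume are created on demand:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-gp2
  resources:
    requests:
      storage: 10Gi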

Third-Party Way

All the options listed above are the built-in options of Kubernetes. There are also some third-party implementations of EBS in the format of Flexvolume driver, to help you hook it up to Kubernetes if you’re not yet satisfied by any options above.

And there are CSI drivers for the same purpose if Flexvolume doesn’t work for you. (Why? More on this later.)

VolumeClaimTemplate Way

If you’re using StatefulSet, congratulations! You now have one more way to use EBS volume with your workload – VolumeClaimTemplate.

VolumeClaimTemplate is a StatefulSet spec property. It provides a way to create matching PVs and PVCs for the Pod that Statefulset created. Those PVCs will be created using Storage Class so they can be created automatically when StatefulSet is scaling up. When a StatefulSet has been scaled down, the extra PVs/PVCs will be kept in the system. So when the StatefulSet scales up again, they will be used again for the new Pods created by Kubernetes. We will talk more on StatefulSet later.

As an example, let's say you created a StatefulSet named www with replica count 3, and a VolumeClaimTemplate named data with it. Kubernetes will create 3 Pods, named www-0, www-1, www-2 accordingly. Kubernetes will also create PVC www-data-0 for Pod www-0, www-data-1 for www-1, and www-data-2 for www-2. If you scale the StatefulSet to 5, Kubernetes will create www-3, www-data-3, www-4 and www-data-4 accordingly. If you then scale the StatefulSet down to 1, www-1 to www-4 will all be deleted, but www-data-1 to www-data-4 will remain in the system. So when you decide to scale up to 5 again, Pods www-1 to www-4 will be created, and PVC www-data-1 will still serve Pod www-1, www-data-2 will serve www-2, and so on. That's because Pod identities are stable in a StatefulSet; the names and relationships are predictable when using StatefulSet.
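A minimal sketch of that www/data setup (the storage class, image and size are illustrative, not from the original post):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: www
spec:
  serviceName: www
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ebs-gp2   # assumes a dynamic-provisioning StorageClass
      resources:
        requests:
          storage: 10Gi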

VolumeClaimTemplate is important for block storage solutions like EBS and Longhorn. Because those solutions are inherently ReadWriteOnce, you cannot share them between Pods. Deployment won't work well with them if you have more than one Pod running with persistent data. So VolumeClaimTemplate provides a way for block storage solutions to scale horizontally for a Kubernetes workload.

How to choose between Volume, Persistent Volume and Provisioner

As you see, there are built-in Volume types, PV types, Provisioner types, plus external plugins using Flexvolume and/or CSI. The most confusing part is that they just provide largely the same but also slightly different functionality.

I thought, at least, there should be a guideline somewhere on how to choose between them.

But I cannot find it anywhere.

So I’ve plowed through codes and documents, to bring you the comparison matrix, and the guideline that makes the most sense to me.
Comparison of Volume, Persistent Volume and Provisioner

The original post includes a support matrix comparing Volume, Persistent Volume and Provisioner for the following plugins: AWS EBS, Azure Disk, Azure File, CephFS, Cinder, Fiber Channel, Flexvolume, Flocker, GCE Persistent Disk, Glusterfs, HostPath, iSCSI, NFS, Photon PersistentDisk, Portworx, Quobyte, Ceph RBD, ScaleIO, StorageOS, vsphereVolume, ConfigMap, DownwardAPI, EmptyDir, Projected, Secret, Container Storage Interface (CSI), and Local.

Here I only covered the in-tree support from Kubernetes. There are some official out-of-tree Provisioners you can use as well.

As you see here, Volume, Persistent Volume and Provisioner are different in some nuanced ways.

  1. Volume supports most of the volume plugins.
    1. It’s the only way to connect PVC to Pod.
    2. It’s also the only one that supports Config Map, Secret, Downward API, and Projected. All of those are closely related to the Kubernetes API server.
    3. And it’s the only one that supports EmptyDir, which will automatically allocate and clean up a temporary volume for Pod*.
  2. PV’s supported plugins are the superset of what Provisioner supports. Because Provisioner needs to create PV before workloads can use it. However, there are a few plugins supported by PV but not supported by Provisioner, e.g. Local Volume (which is a work-in-progress).
  3. There are two types that Volume doesn't support. These are the two most recent features, CSI and Local Volume. There is work in progress to bring them to Volume.

* A side note about EmptyDir with PV:

Back in 2015, there was an issue raised by Clayton Coleman to support EmptyDir with PV. It could be very helpful for workloads needing persistent storage that only have local volumes available, but it didn't get much traction. Without scheduler support, it was too hard to do at the time. Now, in 2018, scheduler and PV node affinity support have been added for Local Volume in Kubernetes v1.11, but there is still no EmptyDir PV. And the Local Volume feature is not exactly what I expected, since it doesn't have the ability to create new volumes with new directories on the node. So I've written Local Path Provisioner, which utilizes the scheduler and PV node affinity changes to dynamically provision Host Path type PVs for the workload.

Guideline for choosing between Volume, Persistent Volume and Provisioner

So which way should users choose?

In my opinion, users should stick to one principle:

Choose Provisioner over Persistent Volume, Persistent Volume over Volume when possible.

To elaborate:

  1. For Config Map, Downward API, Secret or Projected, use Volume since PV doesn’t support those.
  2. For EmptyDir, use Volume directly. Or use Host Path instead.
  3. For Host Path, use Volume directly in general, since it's bound to a specific node and is normally homogeneous across nodes.
    1. If you want heterogeneous Host Path volumes, that didn't work until Kubernetes v1.11 due to the lack of node affinity knowledge for PVs. With v1.11+, you can create Host Path PVs with node affinity using my Local Path Provisioner.
  4. For all other cases, unless you need to hook up with existing volumes (in which case you should use PV), use a Provisioner instead. Some Provisioners are not made into built-in options, but you should be able to find them here or at the vendor's official repositories.

The rationale behind this guideline is simple. While operating inside Kubernetes, an object (PV) is easier to manage than a property (Volume), and creating PV automatically (Provisioner) is much easier than creating it manually.

There is an exception: if you prefer to operate storage outside of Kubernetes, it's better to stick with Volume. Though in this way, you will need to handle creation/deletion using another set of APIs. Also, you will lose the ability to scale storage automatically with StatefulSet due to the lack of VolumeClaimTemplate. I don't think this will be the choice for most Kubernetes users.

Why are there so many options to do the same thing?

This question was one of the first things that came to my mind when I started working with Kubernetes storage. The lack of consistent and intuitive design makes Kubernetes storage look like an afterthought. I’ve tried to research the history behind those design decisions, but it’s hard to find anything before 2016.

In the end, I tend to believe these stem from a few initial design decisions made very early on, possibly combined with the urgent need for vendor support, resulting in Volume getting far more responsibility than it should have. In my opinion, all those built-in volume plugins duplicated by PV shouldn't be there.

While researching the history, I realized dynamic provisioning was already an alpha feature in Kubernetes v1.2 release in early 2016. It took two release cycles to become beta, another two to become stable, which is very reasonable.

There is also a huge ongoing effort by SIG Storage (which drives Kubernetes storage development) to move Volume plugins to out of tree using Provisioner and CSI. I think it will be a big step towards a more consistent and less complex system.

Unfortunately, I don’t think different Volume types will go away. It’s kinda like the flipside of Silicon Valley’s unofficial motto: move fast and break things. Sometimes, it’s just too hard to fix the legacy design left by a fast-moving project. We can only live with them, work around them cautiously, and don’t herald them in a wrong way.

What’s next

We will talk about the mechanism to extend Kubernetes storage system in the next part of the series, namely Flexvolume and CSI. A hint: as you may have noticed already, I am not a fan of Flexvolume. And it’s not storage subsystem’s fault.

[To be continued]

[You can join the discussion here]

Sheng Yang

Principal Engineer

Sheng Yang currently leads Project Longhorn in Rancher Labs, Rancher’s open source microservices-based, distributed block storage solution. He is also the author of Convoy, an open source persistent storage solution for Docker. Before Rancher Labs, he joined Citrix through the Cloud.com acquisition, where he worked on CloudStack project and CloudPlatform product. Before that, he was a kernel developer at Intel focused on KVM and Xen development. He has worked in the fields of virtualization and cloud computing for the last eleven years.

Source

Prometheus – transforming monitoring over the years

Today we extend our appreciation to the teams who created Prometheus, the cloud native monitoring project, and look ahead to reflect on the future of the project.

For a broad history, Prometheus is an open source project that has made significant traction in the cloud native industry and Kubernetes ecosystem. It was started at SoundCloud in 2012 by development teams that needed a tool designed to monitor and provide alerts in microservice infrastructures. Prometheus was inspired by the internal Borgmon monitoring tool at Google, similar to how Kubernetes was inspired by the internal orchestration tool Borg.

Fast forward to 2016, and the project was donated to the Cloud Native Computing Foundation (CNCF) for the benefit of the cloud native community. It reached version 1.0 in 2016, and version 2.0 in 2017.

The CoreOS team, now part of Red Hat, has invested in the project since 2016, and today, we have continued to work with Prometheus through a dedicated team of developers in order to make it consumable for the enterprise. You may recall the dedicated attention the CoreOS team has given the project over the years including upstream development, enabling it in Tectonic, and dedication to the latest v2 release. We have kept up our investment as a key part of the future of cloud native computing with Red Hat OpenShift.

Let’s walk through some of the ways we see Prometheus being very useful today in the Kubernetes ecosystem, and where we see it making an impact moving forward.

Stars are an imperfect metric, but they do give a good coarse grained measurement for the popularity of an open source project. Over the years Prometheus has grown in popularity and this metric reflects that popularity. Within the last two years it grew from 4,000 stars to 18,000 stars on GitHub; even though this is a popularity metric, it does show the rising interest in the project.

Prometheus is easy to set up as a single, statically linked binary that can be downloaded and started with a single command. In tandem with this simplicity, it scales to hundreds of thousands of samples per second ingested on modern commodity hardware. Prometheus’ architecture is well suited for dynamic environments in which containers start and stop frequently, instead of requiring manual re-configuration. We specifically re-implemented the time-series database to accommodate high churn use cases with short lived time-series, while retaining and improving query latency and resource usage.

Nearly as important as the software itself is Prometheus’ low barrier to entry into monitoring, helping to define a new era of monitoring culture. Multiple books have been written by both users as well as maintainers of Prometheus highlighting this shift towards usability, and even the new Google SRE workbook uses Prometheus in its example queries and alerts.

Moving forward, Prometheus is poised to continue widespread community development as well as at Red Hat as we seek to bring enhanced container monitoring capabilities to more users. Looking at the Kubernetes and OpenShift ecosystem, we believe Prometheus is already the de facto default solution to perform monitoring. Standardizing the efforts that have made Prometheus successful, such as the metrics format formalized through the OpenMetrics project, highlights the importance of this project in the industry.

Going forward, we believe that this standardization will be key for organizations as they seek to develop the next generation of operational tooling and culture – the bulk of which will be likely driven by Prometheus.

Learn more about Prometheus and join the community

We plan to deliver Prometheus in a future version of Red Hat OpenShift. Today you can join the community, kick the tires on the Prometheus Operator, or check out our getting started guides.

Source

Fully automated canary deployments in Kubernetes

In a previous article, we described how you can do blue/green deployments in Codefresh using a declarative step in your Codefresh Pipeline.

Blue/Green deployments are very powerful when it comes to easy rollbacks, but they are not the only approach for updating your Kubernetes application.

Another deployment strategy is using Canaries (a.k.a. incremental rollouts). With canaries, the new version of the application is gradually deployed to the Kubernetes cluster while getting a very small amount of live traffic (i.e. a subset of live users are connecting to the new version while the rest are still using the previous version).

The small subset of live traffic to the new version acts as an early warning for potential problems that might be present in the new code. As our confidence increases, more canaries are created and more users are now connecting to the updated version. In the end, all live traffic goes to canaries, and thus the canary version becomes the new “production version”.

The big advantage of using canaries is that deployment issues can be detected very early while they still affect only a small subset of all application users. If something goes wrong with a canary, the production version is still present and all traffic can simply be reverted to it.

While a canary is active, you can use it for additional verification (for example running smoke tests) to further increase your confidence on the stability of each new version.

Unlike Blue/green deployments, Canary releases are based on the following assumptions:

  1. Multiple versions of your application can exist together at the same time, getting live traffic.
  2. If you don’t use some kind of sticky session mechanism, some customers might hit a production server in one request and a canary server in another.

If you cannot guarantee these two points, then blue/green deployments are a much better approach for safe deployments.

Canaries with/without Istio

The gradual confidence offered by canary releases is a major selling point and lots of organizations are looking for ways to adopt canaries for the main deployment method. Codefresh recently released a comprehensive webinar that shows how you can perform canary updates in Kubernetes using Helm and Istio.

The webinar shows the recommended way to do canaries using Istio. Istio is a service mesh that can be used in your Kubernetes cluster to shape your traffic according to your own rules. Istio is a perfect solution for doing canaries as you can point any percentage of your traffic to the canary version regardless of the number of pods that serve it.

In a Kubernetes cluster without Istio, the number of canary pods directly affects the traffic they get at any given point in time.

Traffic switching without Istio

So if, for example, you need your canary to get 10% of the traffic, you need at least 9 production pods. With Istio there is no such restriction: the number of pods serving the canary version and the traffic they get are unrelated. All possible combinations that you might think of are valid. Here are some examples of what you can achieve with Istio:

Traffic switching with Istio

This is why we recommend using Istio. Istio has several other interesting capabilities such as rate limiting, circuit breakers, A/B testing etc.
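As an illustration of that weighted routing (this manifest is not from the webinar; the host and subset names are made up, and a DestinationRule defining the production and canary subsets is assumed), an Istio VirtualService can send a fixed share of traffic to the canary regardless of pod counts:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-demo-app
spec:
  hosts:
  - my-demo-app
  http:
  - route:
    - destination:
        host: my-demo-app
        subset: production   # defined in a DestinationRule (not shown)
      weight: 90
    - destination:
        host: my-demo-app
        subset: canary       # defined in a DestinationRule (not shown)
      weight: 10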

The webinar also uses Helm for deployments. Helm is a package manager for Kubernetes that allows you to group multiple manifests together, allowing you to deploy an application along with its dependencies.

At Codefresh we have several customers that wanted to use Canary deployments in their pipelines but chose to wait until Istio reached 1.0 version before actually using it in production.

Even though we fully recommend Istio for doing canary deployments, we also developed a Codefresh plugin (i.e. a Docker image) that allows you to take advantage of canary deployments even on plain Kubernetes clusters (without Istio installed).

We are open-sourcing this Docker image today for everybody to use and we will explain how you can integrate it in a Codefresh pipeline with only declarative syntax.

Canary deployments with a declarative syntax

In a similar manner as the blue/green deployment plugin, the Canary plugin is also taking care of all the kubectl invocations needed behind the scenes. To use it you can simply insert it in a Codefresh pipeline as below:

canaryDeploy:
  title: "Deploying new version ${}"
  image: codefresh/k8s-canary:master
  environment:
    - WORKING_VOLUME=.
    - SERVICE_NAME=my-demo-app
    - DEPLOYMENT_NAME=my-demo-app
    - TRAFFIC_INCREMENT=20
    - NEW_VERSION=${}
    - SLEEP_SECONDS=40
    - NAMESPACE=canary
    - KUBE_CONTEXT=myDemoAKSCluster
Notice the complete lack of kubectl commands. The Docker image k8s-canary contains a single executable that takes the following parameters as environment variables:

Environment Variable Description
KUBE_CONTEXT Name of your cluster in Codefresh dashboard
WORKING_VOLUME A folder for saving temp/debug files
SERVICE_NAME Existing K8s service
DEPLOYMENT_NAME Existing k8s deployment
TRAFFIC_INCREMENT Percentage of pods to convert to canaries at each stage
NEW_VERSION Docker tag for the next version of the app
SLEEP_SECONDS How many seconds each canary stage will wait. After that, the new pods will be checked for restarts
NAMESPACE K8s Namespace where deployments happen

Prerequisites

The canary deployments steps expect the following assumptions:

  • An initial service and the respective deployment should already exist in your cluster.
  • The name of each deployment should contain each version
  • The service should have a metadata label that shows what the “production” version is.

These requirements allow each canary deployment to finish into a state that allows the next one to run in a similar manner.

You can use anything you want as a “version”, but the recommended approach is to use GIT hashes and tag your Docker images with them. In Codefresh this is very easy because the built-in variable CF_SHORT_REVISION gives you the git hash of the commit that was pushed.

The build step of the main application that creates the Docker image that will be used in the canary step is a standard build step that tags the Docker image with the git hash.

BuildingDockerImage:
  title: Building Docker Image
  type: build
  image_name: trivial-web
  working_directory: ./example/
  tag: '${}'
  dockerfile: Dockerfile
For more details, you can look at the example application that also contains a service and deployment with the correct labels as well as the full codefresh.yml file.

How to perform Canary deployments

When you run a deployment in Codefresh, the pipeline step will print messages with its progress:

Canary Logs

First, the Canary plugin will read the Kubernetes services and extract the “version” metadata label to find out which version is running “in production”. Then it will read the respective deployment and find the Docker image currently getting live traffic. It will also read the number of current replicas for that deployment.

Then it will create a second deployment using the new Docker image tag. This second deployment uses the same labels as the first one, so the existing service will serve BOTH deployments at the same time. A single pod for the new version will be deployed. This pod will instantly get live traffic according to the total number of pods. For example, if you have in production 3 pods and the new version pod is created, it will instantly get 25% of the traffic (1 canary, 3 production version).

Once the first pod is created, the script is running in a loop where each iteration does the following:

  1. Increases the number of canaries according to the predefined percentage. For example, a percentage of 33% means that 3 phases of canaries will be performed. With 25%, you will see 4 canary iterations and so on. The algorithm used is pretty basic and for a very low number of pods, you will see a lot of rounding happening.
  2. Waits for some seconds until the pods have time to start (the time is configurable).
  3. Checks for pod restarts. If there are none, it assumes that everything is ok and the next iteration happens.

This goes on until only canaries get live traffic. The previous deployment is destroyed and the new one is marked as “production” in the service.

If at any point there are problems with canaries (or restarts), all canary instances are destroyed and all live traffic goes back to the production version.

You can see all this happening in real time, either using direct kubectl commands or looking at the Codefresh Kubernetes dashboard. While canaries are active, you will see two Docker image versions in the Images column.

We are working on more ways of health-checking in addition to looking at pod restarts. The Canary image is available in Dockerhub.


Source

Topology-Aware Volume Provisioning in Kubernetes

Author: Michelle Au (Google)

The multi-zone cluster experience with persistent volumes is improving in Kubernetes 1.12 with the topology-aware dynamic provisioning beta feature. This feature allows Kubernetes to make intelligent decisions when dynamically provisioning volumes by getting scheduler input on the best place to provision a volume for a pod. In multi-zone clusters, this means that volumes will get provisioned in an appropriate zone that can run your pod, allowing you to easily deploy and scale your stateful workloads across failure domains to provide high availability and fault tolerance.

Previous challenges

Before this feature, running stateful workloads with zonal persistent disks (such as AWS ElasticBlockStore, Azure Disk, GCE PersistentDisk) in multi-zone clusters had many challenges. Dynamic provisioning was handled independently from pod scheduling, which meant that as soon as you created a PersistentVolumeClaim (PVC), a volume would get provisioned. This meant that the provisioner had no knowledge of what pods were using the volume, and any pod constraints it had that could impact scheduling.

This resulted in unschedulable pods because volumes were provisioned in zones that:

  • did not have enough CPU or memory resources to run the pod
  • conflicted with node selectors, pod affinity or anti-affinity policies
  • could not run the pod due to taints

Another common issue was that a non-StatefulSet pod using multiple persistent volumes could have each volume provisioned in a different zone, again resulting in an unschedulable pod.

Suboptimal workarounds included overprovisioning of nodes, or manual creation of volumes in the correct zones, making it difficult to dynamically deploy and scale stateful workloads.

The topology-aware dynamic provisioning feature addresses all of the above issues.

Supported Volume Types

In 1.12, the following drivers support topology-aware dynamic provisioning:

  • AWS EBS
  • Azure Disk
  • GCE PD (including Regional PD)
  • CSI (alpha) – currently only the GCE PD CSI driver has implemented topology support

Design Principles

While the initial set of supported plugins are all zonal-based, we designed this feature to adhere to the Kubernetes principle of portability across environments. Topology specification is generalized and uses a similar label-based specification like in Pod nodeSelectors and nodeAffinity. This mechanism allows you to define your own topology boundaries, such as racks in on-premise clusters, without requiring modifications to the scheduler to understand these custom topologies.

In addition, the topology information is abstracted away from the pod specification, so a pod does not need knowledge of the underlying storage system’s topology characteristics. This means that you can use the same pod specification across multiple clusters, environments, and storage systems.

Getting Started

To enable this feature, all you need to do is to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: topology-aware-standard
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard

This new setting instructs the volume provisioner to not create a volume immediately, and instead, wait for a pod using an associated PVC to run through scheduling. Note that previous StorageClass zone and zones parameters do not need to be specified anymore, as pod policies now drive the decision of which zone to provision a volume in.

Next, create a pod and PVC with this StorageClass. This sequence is the same as before, but with a different StorageClass specified in the PVC. The following is a hypothetical example, demonstrating the capabilities of the new feature by specifying many pod constraints and scheduling policies:

  • multiple PVCs in a pod
  • nodeAffinity across a subset of zones
  • pod anti-affinity on zones

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - us-central1-a
                - us-central1-f
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
        - name: logs
          mountPath: /logs
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 10Gi
  - metadata:
      name: logs
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: topology-aware-standard
      resources:
        requests:
          storage: 1Gi

Afterwards, you can see that the volumes were provisioned in zones according to the policies set by the pod:

$ kubectl get pv -o=jsonpath='{.spec.claimRef.name}{"\t"}{.metadata.labels.failure-domain.beta.kubernetes.io/zone}{"\n"}'
www-web-0 us-central1-f
logs-web-0 us-central1-f
www-web-1 us-central1-a
logs-web-1 us-central1-a

How can I learn more?

Official documentation on the topology-aware dynamic provisioning feature is available here:
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode

Documentation for CSI drivers is available at https://kubernetes-csi.github.io/docs/

What’s next?

We are actively working on improving this feature to support:

  • more volume types, including dynamic provisioning for local volumes
  • dynamic volume attachable count and capacity limits per node

How do I get involved?

If you have feedback for this feature or are interested in getting involved with the design and development, join the Kubernetes Storage Special-Interest-Group (SIG). We’re rapidly growing and always welcome new contributors.

Special thanks to all the contributors that helped bring this feature to beta, including Cheng Xing (verult), Chuqiang Li (lichuqiang), David Zhu (davidz627), Deep Debroy (ddebroy), Jan Šafránek (jsafrane), Jordan Liggitt (liggitt), Michelle Au (msau42), Pengfei Ni (feiskyer), Saad Ali (saad-ali), Tim Hockin (thockin), and Yecheng Fu (cofyc).

Source

2018 Steering Committee Election Results

Authors: Jorge Castro (Heptio), Ihor Dvoretskyi (CNCF), Paris Pittman (Google)

Results

The Kubernetes Steering Committee election is now complete, and the following candidates came out ahead to secure two-year terms that start immediately:

Big Thanks!

  • Steering Committee Member Emeritus Quinton Hoole for his service to the community over the past year. We look forward to
  • The candidates that came forward to run for election. May we always have a strong set of people who want to push the community forward, like you, in every election.
  • All 307 voters who cast a ballot.
  • And last but not least…Cornell University for hosting CIVS!

Get Involved with the Steering Committee

You can follow along with Steering Committee backlog items and weigh in by filing an issue or creating a PR against their repo. They meet bi-weekly on Wednesdays at 8pm UTC and regularly attend Meet Our Contributors.

Steering Committee Meetings:

Meet Our Contributors Steering AMA’s:

Source