Better autoscaling with Prometheus and the Kubernetes Metrics APIs

The ability to autoscale workloads based on metrics such as CPU and memory usage is one of the most powerful features of Kubernetes. Of course, to enable this feature we first need a method of gathering and storing these metrics. Today this is most often accomplished using Heapster, but this method can be cumbersome and support from the various contributors to the project has been inconsistent – and in fact it may soon be phased out.

Fortunately, the new Kubernetes metrics APIs are paving the way for a more consistent and efficient way to supply metrics data for the purpose of autoscaling based on Prometheus. It’s no secret that we at CoreOS are big fans of Prometheus, so in this post we will explain the metrics APIs, what’s new, and our recommended method of scaling Kubernetes workloads, going forward.

This post assumes you have a basic understanding of Kubernetes and monitoring.

The Heapster problem

Heapster provides metric collection and basic monitoring capabilities and it supports multiple data sinks to store the collected metrics. The code for each sink resides within the Heapster repository. Heapster also enables the use of the Horizontal Pod Autoscaler (HPA) to automatically scale workloads based on metrics.

There are two problems with the architecture Heapster has chosen to implement. First, it assumes the data store is a bare time-series database for which there is a direct write path. This makes it fundamentally incompatible with Prometheus, as Prometheus uses a pull-based model. Because the rest of the Kubernetes ecosystem has first-class Prometheus support, however, it's not uncommon to run Prometheus, Heapster, and an additional non-Prometheus data store exclusively for Heapster (typically InfluxDB) – a less-than-ideal scenario.

Second, because the code for each sink is considered part of the core Heapster code base, the result is a “vendor dump,” where vendors implement support for their systems but often swiftly abandon the code. This is a common cause of frustration when maintaining Heapster. At the time of this writing, many of the 15 available sinks have been unsupported for a long time.

What’s more, even though Heapster doesn’t implement Prometheus as a data sink, it exposes metrics in Prometheus format. This often causes additional confusion.

A bit over a year ago, SIG-Instrumentation was founded and this problem was one of the first we tackled. Contributors and maintainers of the Heapster, Prometheus, and Kubernetes projects came together to design the Kubernetes resource and custom metrics APIs, which point the way forward to a better approach to autoscaling.

Resource and custom metrics APIs

To avoid repeating Heapster’s mistakes, the resource and custom metrics APIs were intentionally created as mere API definitions and not implementations. They are installed into a Kubernetes cluster as aggregated APIs, which allows implementations to be switched out while the APIs stay the same. Both APIs are defined to respond with the current value of the requested metric/query and are both available in beta starting with Kubernetes 1.8.0. Historical metrics APIs may be defined and implemented in the future.

The canonical implementation of the resource metrics API is the Metrics Server, which simply gathers what are referred to as the resource metrics: CPU and memory (and possibly more in the future). It gathers these from all the kubelets in a cluster through the kubelet's stats API and keeps all values for Pods and Nodes in memory.

The custom metrics API, as the name says, allows requesting arbitrary metrics. Custom metrics API implementations are specific to the respective backing monitoring system. Prometheus was the first monitoring system that an adapter was developed for, simply due to it being a very popular choice to monitor Kubernetes. This Kubernetes Custom Metrics Adapter for Prometheus can be found in the k8s-prometheus-adapter repository on GitHub. Requests to the adapter (aka the Prometheus implementation of the custom metrics API) are converted to a Prometheus query and executed against the respective Prometheus server. The result Prometheus returns is then returned by the custom metrics API adapter.
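As a rough illustration, once implementations are in place (the Metrics Server for metrics.k8s.io and the Prometheus adapter for custom.metrics.k8s.io), both APIs can be explored directly with kubectl's raw mode; the metric name below is only an example and depends on what your Prometheus server scrapes:

# resource metrics, served by the Metrics Server
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"

# custom metrics, served by the Prometheus adapter (example metric name)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests"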

This architecture solves all the problems we intended to solve:

  • Resource metrics can be used more reliably and consistently.
  • There is no “vendor dump” for data sinks. Whoever implements an adapter must maintain it.
  • Pull-based as well as push-based monitoring systems can be supported.
  • Running Heapster with a datastore like InfluxDB in addition to Prometheus will not be necessary anymore.
  • Prometheus can consistently be used to monitor, alert and autoscale.

Better yet, because the Kubernetes metrics APIs are standardized, we can now also consistently autoscale on custom metrics, such as worker queue size, in addition to plain CPU and memory.
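As a sketch of what that looks like, an HPA object using the autoscaling/v2beta1 API (beta in Kubernetes 1.8) could target a queue-size metric; the metric and workload names here are purely illustrative:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: worker_queue_size   # assumes this metric is collected by Prometheus
      targetAverageValue: 30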

What to do going forward

Using the Custom Metrics Adapter for Prometheus means we can autoscale on arbitrary metrics that we already collect with Prometheus, without the need to run Heapster at all. In fact, one of the areas SIG-Instrumentation is currently working on is phasing out Heapster – meaning it will eventually be unsupported. Thus, I recommend switching to using the resource and custom metrics APIs sooner rather than later. To enable using the resource and custom metrics APIs with the HPA one must pass the following flag to the kube-controller-manager:

--horizontal-pod-autoscaler-use-rest-clients

If you have any questions, feel free to follow up with me on Twitter (@FredBrancz) or Kubernetes Slack (@brancz). I also want to give Solly Ross (@directXMan12) a huge shout-out as he worked on all of this from the HPA to defining the resource and custom metrics APIs as well as implementing the Custom Metrics Adapter for Prometheus.

Finally, if you are interested in this area and would like to contribute, please join us on the SIG-Instrumentation biweekly call on Thursdays at 17:30 UTC. See you there!

Source

Hands On With Linkerd 2.0

Author: Thomas Rampelberg (Buoyant)

Linkerd 2.0 was recently announced as generally available (GA), signaling its readiness for production use. In this tutorial, we'll walk you through how to get Linkerd 2.0 up and running on your Kubernetes cluster in a matter of seconds.

But first, what is Linkerd and why should you care? Linkerd is a service sidecar that augments a Kubernetes service, providing zero-config dashboards and UNIX-style CLI tools for runtime debugging, diagnostics, and reliability. Linkerd is also a service mesh, applied to multiple (or all) services in a cluster to provide a uniform layer of telemetry, security, and control across them.

Linkerd works by installing ultralight proxies into each pod of a service. These proxies report telemetry data to, and receive signals from, a control plane. This means that using Linkerd doesn’t require any code changes, and can even be installed live on a running service. Linkerd is fully open source, Apache v2 licensed, and is hosted by the Cloud Native Computing Foundation (just like Kubernetes itself!)

Without further ado, let’s see just how quickly you can get Linkerd running on your Kubernetes cluster. In this tutorial, we’ll walk you through how to deploy Linkerd on any Kubernetes 1.9+ cluster and how to use it to debug failures in a sample gRPC application.

Step 1: Install the demo app 🚀

Before we install Linkerd, let’s start by installing a basic gRPC demo application called Emojivoto onto your Kubernetes cluster. To install Emojivoto, run:

curl https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

This command downloads the Kubernetes manifest for Emojivoto, and uses kubectl to apply it to your Kubernetes cluster. Emojivoto is comprised of several services that run in the “emojivoto” namespace. You can see the services by running:

kubectl get -n emojivoto deployments

You can also see the app live by running

minikube -n emojivoto service web-svc --url # if you're on minikube

… or, if you're somewhere else:

kubectl get svc web-svc -n emojivoto -o jsonpath="{.status.loadBalancer.ingress[0].*}"

Click around. You might notice that some parts of the application are broken! If you were to inspect your handy local Kubernetes dashboard, you wouldn't see very much of interest – as far as Kubernetes is concerned, the app is running just fine. This is a very common situation! Kubernetes understands whether your pods are running, but not whether they are responding properly.

In the next few steps, we’ll walk you through how to use Linkerd to diagnose the problem.

Step 2: Install Linkerd’s CLI

We’ll start by installing Linkerd’s command-line interface (CLI) onto your local machine. Visit the Linkerd releases page, or simply run:

curl -sL https://run.linkerd.io/install | sh

Once installed, add the linkerd command to your path with:

export PATH=$PATH:$HOME/.linkerd2/bin

You should now be able to run the command linkerd version, which should display:

Client version: v2.0
Server version: unavailable

“Server version: unavailable” means that we need to add Linkerd’s control plane to the cluster, which we’ll do next. But first, let’s validate that your cluster is prepared for Linkerd by running:

linkerd check --pre

This handy command will report any problems that will interfere with your ability to install Linkerd. Hopefully everything looks OK and you’re ready to move on to the next step.

Step 3: Install Linkerd’s control plane onto the cluster

In this step, we’ll install Linkerd’s lightweight control plane into its own namespace (“linkerd”) on your cluster. To do this, run:

linkerd install | kubectl apply -f -

This command generates a Kubernetes manifest and uses kubectl to apply it to your Kubernetes cluster. (Feel free to inspect the manifest before you apply it.)

(Note: if your Kubernetes cluster is on GKE with RBAC enabled, you'll need an extra step: you must grant a ClusterRole of cluster-admin to your Google Cloud account first, in order to install certain telemetry features in the control plane. To do that, run: kubectl create clusterrolebinding cluster-admin-binding-$USER --clusterrole=cluster-admin --user=$(gcloud config get-value account).)

Depending on the speed of your internet connection, it may take a minute or two for your Kubernetes cluster to pull the Linkerd images. While that’s happening, we can validate that everything’s happening correctly by running:

linkerd check

This command will patiently wait until Linkerd has been installed and is running.

Finally, we’re ready to view Linkerd’s dashboard! Just run:

linkerd dashboard

If you see something like below, Linkerd is now running on your cluster. 🎉

Step 4: Add Linkerd to the web service

At this point we have the Linkerd control plane installed in the “linkerd” namespace, and we have our emojivoto demo app installed in the “emojivoto” namespace. But we haven’t actually added Linkerd to our service yet. So let’s do that.

In this example, let’s pretend we are the owners of the “web” service. Other services, like “emoji” and “voting”, are owned by other teams–so we don’t want to touch them.

There are a couple ways to add Linkerd to our service. For demo purposes, the easiest is to do something like this:

kubectl get -n emojivoto deploy/web -o yaml | linkerd inject - | kubectl apply -f -

This command retrieves the manifest of the “web” service from Kubernetes, runs this manifest through linkerd inject, and finally reapplies it to the Kubernetes cluster. The linkerd inject command augments the manifest to include Linkerd’s data plane proxies. As with linkerd install, linkerd inject is a pure text operation, meaning that you can inspect the input and output before you use it. Since “web” is a Deployment, Kubernetes is kind enough to slowly roll the service one pod at a time–meaning that “web” can be serving traffic live while we add Linkerd to it!
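If you would rather review the injected manifest before applying it, the same pipeline can be split into two steps, for example:

kubectl get -n emojivoto deploy/web -o yaml | linkerd inject - > web-injected.yaml
# inspect web-injected.yaml, then:
kubectl apply -f web-injected.yaml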

We now have a service sidecar running on the “web” service!

Step 5: Debugging for Fun and for Profit

Congratulations! You now have a full gRPC application running on your Kubernetes cluster with Linkerd installed on the “web” service. Of course, that application is failing when you use it–so now let’s use Linkerd to track down those errors.

If you glance at the Linkerd dashboard (the linkerd dashboard command), you should see all services in the “emojivoto” namespace show up. Since “web” has the Linkerd service sidecar installed on it, you’ll also see success rate, requests per second, and latency percentiles show up.

That’s pretty neat, but the first thing you might notice is that success rate is well below 100%! Click on “web” and let’s dig in.

You should now be looking at the Deployment page for the web service. The first thing you’ll see here is that web is taking traffic from vote-bot (a service included in the Emojivoto manifest to continually generate a low level of live traffic), and has two outgoing dependencies, emoji and voting.

The emoji service is operating at 100%, but the voting service is failing! A failure in a dependent service may be exactly what’s causing the errors that web is returning.

If we scroll a little further down the page, we'll see a live list of all traffic endpoints that "web" is receiving. This is interesting:

There are two calls that are not at 100%: the first is vote-bot's call to the "/api/vote" endpoint. The second is the "VotePoop" call from the web service to the voting service. Very interesting! Since "/api/vote" is an incoming call and "/VotePoop" is an outgoing call, this is a good clue that the failure of the voting service's VotePoop endpoint is what's causing the problem!

Finally, if we click on the "tap" icon for that row in the far right column, we'll be taken to a live list of requests that match this endpoint. This allows us to confirm that the requests are failing (they all have gRPC status code 2, indicating an error).

At this point we have the ammunition we need to talk to the owners of the "voting" service. We've identified an endpoint on their service that consistently returns an error, and have found no other obvious sources of failures in the system.

We hope you’ve enjoyed this journey through Linkerd 2.0. There is much more for you to explore. For example, everything we did above using the web UI can also be accomplished via pure CLI commands, e.g. linkerd top, linkerd stat, and linkerd tap.
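For instance, using the resource names from this tutorial, that might look something like:

linkerd stat deployments -n emojivoto   # success rate, request rate and latency per deployment
linkerd top deploy/web -n emojivoto     # live, sorted view of traffic to the web deployment
linkerd tap deploy/web -n emojivoto     # stream individual requests to and from web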

Also, did you notice the little Grafana icon on the very first page we looked at? Linkerd ships with automatic Grafana dashboards for all those metrics, allowing you to view everything you’re seeing in the Linkerd dashboard in a time series format. Check it out!

Want more?

In this tutorial, we’ve shown you how to install Linkerd on a cluster, add it as a service sidecar to just one service–while the service is receiving live traffic!—and use it to debug a runtime issue. But this is just the tip of the iceberg. We haven’t even touched any of Linkerd’s reliability or security features!

Linkerd has a thriving community of adopters and contributors, and we’d love for YOU to be a part of it. For more, check out the docs and GitHub repo, join the Linkerd Slack and mailing lists (users, developers, announce), and, of course, follow @linkerd on Twitter! We can’t wait to have you aboard!

Source

Kubernetes Training with Jetstack // Jetstack Blog

12/Feb 2018

By Hannah Morris

This blog post provides an insight into how we run our Kubernetes workshops as we prepare for even more from Jetstack training in 2018.

In 2017, Jetstack ran more than 25 Kubernetes in Practice workshops: We trained engineers from over 80 different companies in London and across Europe, and had a great time doing so!

2018 promises to be an even busier year for Jetstack training, with several dates already in the diary for our first and second series of Beginner and Intermediate workshops. In addition, we will be running a number of workshops in our Kubernetes for Startups series. If you want to participate, you can find dates for Q1 at the end of this post.

In the run up to our 2018 workshops, we have been developing new material for our Advanced courses and working hard on the Jetstack Subscription, which will soon provide users with on-demand Kubernetes training modules and playbooks.

“Lots of lightbulb moments!” ~ Workshop Attendee

We run a number of our workshops in association with Google Cloud at the Google Academy in London Victoria. A typical day commences at 9.30am and finishes around 5pm, with a break for lunch. This gives us enough time to cover course content, answer questions, and pause to pick the brains of fellow Kubernetes enthusiasts over a cup of coffee!

Jetstack training courses are developed around the knowledge and experience gained by our engineers when deploying Kubernetes for clients. Our training modules are continuously updated and refined in order to ensure that they are consistent with the constantly changing Kubernetes ecosystem.

We focus on making our courses interactive, with a mixture of presentations, demos and hands-on labs. They are designed to prepare you to deploy, use and operate Kubernetes efficiently.

“The trainers were enthusiastic and knowledgeable.” ~ Workshop Attendee

We have a range of courses tailored for Kubernetes users of all levels. We run a two-day Kubernetes in Practice course (Beginner and Intermediate) and we now offer our Advanced workshop as part of a two-day Kubernetes Operator course.

Kubernetes in Practice

Day 1 – Beginner

  • Introduction to core concepts of Kubernetes.
  • Hands-on labs covering: persistent volume types, application logs, and using Helm to package and deploy applications.
  • Demo: continuous integration and deployment (CI/CD) on Kubernetes. An application is deployed into multiple environments before being rolled out to production.

“The workshop helped me understand what Kubernetes is made for, what the building blocks are and how we are supposed to use them.” ~ Beginner Workshop Attendee

Day 2 – Intermediate

  • More advanced Kubernetes features and how to use them to lessen operational burden.
  • Hands-on labs covering: Autoscaling, the Control Plane, Ingress and StatefulSet.
  • Set up a CI/CD pipeline running on Kubernetes, deploying to multiple environments with Helm.
  • Cluster Federation demo.

“I now know the concepts and building blocks and I have a sense of what’s possible.” ~ Intermediate Workshop Attendee

Kubernetes Operator

Our Advanced Wargaming workshop is now part of a two-day Advanced Operations course.

Advanced Operations – Day 1

  • Provision a Kubernetes cluster manually from the ground up.
  • Learn how to configure cluster components (apiserver, controller-manager, scheduler, etcd, kubelet, kube-proxy, container runtime) and how the configuration of these components affects cluster behaviour.
  • Comparison of Kubernetes deployment tools such as Kubeadm, Kops, Tarmak.

Advanced Operations – Day 2 (Wargaming)

  • Deep dive into Kubernetes Internals.
  • Team Wargaming: work together to overcome various production issues and common cluster failures.

“Had a really insightful day…just proves how important it is to have the operational tasks rehearsed!” ~ Advanced Workshop Attendee

“Yes yes please please more training at an advanced level!” ~ Workshop Attendee

We make workshop slides available to participants following the session, and warmly welcome feedback. We develop new material with our participants’ comments in mind; we need to know which approaches work best from an educational perspective, as well as any further topics and issues that could be tackled in future workshops.

We were grateful to receive some very positive feedback from our 2017 series, with an average score of 9/10 from participants. We aim to build on this by constantly refining our course content and adding new material, and we are looking forward to delivering in 2018.

“Boom!”

Kubernetes for Startups 2018

Dates for Q1 training:

  • Thursday 8th March – Beginner (London)
  • Friday 9th March – Intermediate (London)
  • Monday 26th March – Beginner (Paris)
  • Tuesday 27th March – Intermediate (Paris)

If you work in a Startup and would like to attend any of these workshops, please email hannah@jetstack.io

Source

Let’s actually talk about Diversity – Heptio

In May I had the honor of sitting on a diversity panel at KubeCon EU with an awesome group of folks from Heptio, Google, Red Hat and StorageOS.

When I first started telling people that I was going to be on a diversity panel, I got a few different reactions. The most surprising were people telling me I shouldn’t do it, that they themselves wouldn’t want to be considered the “token” diversity advocate, that it is too hard to be the person speaking publicly about diversity. People don’t even want to talk about diversity among their families or co-workers, much less speak publicly about the issue. This got me thinking about my own experiences.

My father is a retired military veteran so I moved about every three years my entire life. I have lived in several countries and several different states. Living on a military base was always a multi-cultural experience full of families of all shapes and sizes and made up of people from all over the world. At some point, everyone was the new kid, so being supportive and open to meeting new people was a natural part of surviving. To me, diversity and inclusion were part of my life and seemed like the way things worked everywhere. As I grew up and lived on my own I started to pull my head up, look around and realize how far from the truth my assumptions had been.

I started to notice the problem once I moved into more senior roles and experienced first-hand many examples of sexist, bigoted, and otherwise non-inclusive behavior in companies. I suddenly had a broader view of the organization and could see these issues more clearly. I was running up against real problems affecting not just me but others on my teams. I realized I really did need to be part of the change and part of the conversation. I realized I needed to be a person that advocated for diversity and for a workplace that allowed everyone to succeed.

The problem is, a lot of people are afraid to talk about diversity. Not just publicly but even privately. This point makes it really hard to make change and to keep educating people. It also makes it really difficult to feel like we can have truly open dialogue.

We are never going to learn from each other if we don’t start talking to each other; and when I say talking to each other, I really mean listening to each other. We have to start having more open dialogue about issues in a way that ensures we are educating. We have to do that in a way that is open to listening and void of our tendencies to lecture or shame. A true human connection can set the stage for real change.

I want to encourage everyone to get involved, educate, listen, and work hard not to make assumptions about others. We don’t always know how someone identifies in relation to race, ethnicity, gender, religion, sexual orientation, geographical representation, political beliefs and more, and we need to take the time to learn about others and understand who they are so we can really start to understand how to create change.

I want to see real companies making real change and I want to be involved in the conversation and I want to make real change. I want to continue to evolve my ideas and my thinking and I want to ensure I share those ideas and this mission with my children in hopes things are better when they start their careers. Taking the time to truly understand others will make this much more possible.

The goal of the KubeCon diversity panel was to encourage action. Here are some actions you can take to encourage more conversation.

The next time someone asks you to be in a panel or write a blog post about diversity, consider it an opportunity to share your personal journey. I can guarantee at least one other person will relate to your journey and feel inspired and supported by you sharing it.

The next time you see someone sharing their personal story about diversity in a panel or blog, let them know you appreciate it. It can be lonely to put your words out there so hearing that someone enjoyed it will encourage that person to speak out more.

Thanks to everyone who attended the panel and reached out after. If you missed it you can check out the panel here.

We would love to have more people joining the discussion so please join the diversity slack channel in the Kubernetes slack.

Source

Creating a minimal Debian container for Docker – Own your bits

In the last post, we introduced some basic techniques to free up unused space on a Debian system. Following those steps, I created a Debian 8 Docker image that takes only 56.7 MB!

Usage

You can get it by typing the following, but you really don't need to because docker run pulls the image for you if you do not already have it. It is still useful to get updates.

docker pull ownyourbits/minidebian

Bash into it with

docker run --rm -ti ownyourbits/minidebian

Run any command inside the container, for instance list root with

docker run --rm -ti ownyourbits/minidebian ls /

Is this small?

In order to see how small this really is, we can compare it to a minimal system built using debootstrap. More details on this below.

Docker images can be made very small, because there are parts of the system that are not needed in order to launch things in a container. If we take a look at the official Debian Docker repository, we can see that their images are smaller than the file system generated by debootstrap. They even have slim versions that get rid of locales and man pages, but they are still bigger than ownyourbits/minidebian.

$ docker images

REPOSITORY               TAG          IMAGE ID      CREATED       SIZE
ownyourbits/minidebian   latest       e3d72e6d0731  40 hours ago  56.7 MB
debootstrap-sid          latest       a702df4074d3  41 hours ago  316 MB
debian                   jessie-slim  7d86024f45a4  4 weeks ago   79.9 MB
debian                   jessie       e5599115b6a6  4 weeks ago   123 MB

Is this useful for anything?

It is! Docker containers are quite handy to use and allow us to play around with a Debian system easily. Sure, we could always do this with a virtual machine or with debootstrap, but there are benefits to using Docker.

One benefit lies in the fact that Docker uses overlayfs, so any changes made to your container will be lost when you exit, unless you issue docker commit. We can play around, we can experiment, break things without fear, and then throw it all away.
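A throwaway session might look like this (the container and image names are just examples):

docker run -ti --name scratch ownyourbits/minidebian   # experiment freely inside
docker commit scratch my-debian:experiment             # keep the changes as a new image
docker rm scratch                                      # or simply discard the container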

Another benefit is that we can use it to build more complex systems, overlaying a database, Java runtime, or a web server on top of it. That means that if an Apache server adds a 140 MB layer, you only have to get that compressed overlay, which is quite fast and space efficient.

It is also convenient for distributing software with its dependencies. Everything is packed for you and you do not have to deal with configuration. This makes trying things out easy. Want to get a feel for Gentoo? docker pull gentoo/stage3-amd64 will save you tons of compilation and configuration time.

Finally, we can share this easily on dockerhub.io or our private docker repo.

Details

In order to get a working Debian system that we can then trim down, we have different options.

One of them is working on a live ISO, another is starting from the official Debian Docker repo that we mentioned earlier.

Another one is using good old debootstrap. Debootstrap is a little tool that gets the base debs from the official repositories, then installs them in a directory, so you can chroot to it. It provides the basic directory structure for Debian.

We can see what packages Debian considers essential

$ debootstrap --print-debs sid .

I: Keyring file not available at /usr/share/keyrings/debian-archive-keyring.gpg; switching to https mirror https://deb.debian.org/debian

I: Retrieving InRelease

I: Retrieving Packages

I: Validating Packages

I: Resolving dependencies of required packages…

I: Resolving dependencies of base packages…

I: Found additional required dependencies: libaudit-common libaudit1 libbz2-1.0 libcap-ng0 libdb5.3 libdebconfclient0 libgcrypt20 libgpg-error0 liblz4-1 libncursesw5 libsemanage-common libsemanage1 libsystemd0 libudev1 libustr-1.0-1

I: Found additional base dependencies: dmsetup gnupg-agent libapparmor1 libassuan0 libbsd0 libcap2 libcryptsetup4 libcurl3-gnutls libdevmapper1.02.1 libdns-export162 libelf1 libfastjson4 libffi6 libgmp10 libgnutls30 libgssapi-krb5-2 libhogweed4 libidn11 libidn2-0 libip4tc0 libip6tc0 libiptc0 libisc-export160 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblognorm5 libmnl0 libnetfilter-conntrack3 libnettle6 libnfnetlink0 libnghttp2-14 libnpth0 libp11-kit0 libperl5.24 libpsl5 librtmp1 libsasl2-2 libsasl2-modules-db libseccomp2 libsqlite3-0 libssh2-1 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libunistring0 libxtables12 openssl perl perl-modules-5.24 pinentry-curses xxd

base-files base-passwd bash bsdutils coreutils dash debconf debianutils diffutils dpkg e2fslibs e2fsprogs findutils gcc-5-base gcc-6-base grep gzip hostname init-system-helpers libacl1 libattr1 libaudit-common libaudit1 libblkid1 libbz2-1.0 libc-bin libc6 libcap-ng0 libcomerr2 libdb5.3 libdebconfclient0 libfdisk1 libgcc1 libgcrypt20 libgpg-error0 liblz4-1 liblzma5 libmount1 libncurses5 libncursesw5 libpam-modules libpam-modules-bin libpam-runtime libpam0g libpcre3 libselinux1 libsemanage-common libsemanage1 libsepol1 libsmartcols1 libss2 libsystemd0 libtinfo5 libudev1 libustr-1.0-1 libuuid1 login lsb-base mawk mount multiarch-support ncurses-base ncurses-bin passwd perl-base sed sensible-utils sysvinit-utils tar tzdata util-linux zlib1g adduser apt apt-transport-https apt-utils blends-tasks bsdmainutils ca-certificates cpio cron debconf-i18n debian-archive-keyring dmidecode dmsetup gnupg gnupg-agent gpgv ifupdown init iproute2 iptables iputils-ping isc-dhcp-client isc-dhcp-common kmod libapparmor1 libapt-inst2.0 libapt-pkg5.0 libassuan0 libbsd0 libcap2 libcryptsetup4 libcurl3-gnutls libdevmapper1.02.1 libdns-export162 libelf1 libestr0 libfastjson4 libffi6 libgdbm3 libgmp10 libgnutls30 libgssapi-krb5-2 libhogweed4 libidn11 libidn2-0 libip4tc0 libip6tc0 libiptc0 libisc-export160 libk5crypto3 libkeyutils1 libkmod2 libkrb5-3 libkrb5support0 libksba8 libldap-2.4-2 libldap-common liblocale-gettext-perl liblogging-stdlog0 liblognorm5 libmnl0 libnetfilter-conntrack3 libnettle6 libnewt0.52 libnfnetlink0 libnghttp2-14 libnpth0 libp11-kit0 libperl5.24 libpipeline1 libpopt0 libprocps6 libpsl5 libreadline6 libreadline7 librtmp1 libsasl2-2 libsasl2-modules-db libseccomp2 libslang2 libsqlite3-0 libssh2-1 libssl1.0.2 libssl1.1 libstdc++6 libtasn1-6 libtext-charwidth-perl libtext-iconv-perl libtext-wrapi18n-perl libunistring0 libusb-0.1-4 libxapian30 libxtables12 logrotate nano netbase openssl perl perl-modules-5.24 pinentry-curses procps readline-common rsyslog systemd systemd-sysv tasksel tasksel-data udev vim-common vim-tiny wget whiptail xxd

This is what debootstrap considers a base filesystem. We already see things that will not be needed in a container. Let’s create the filesystem.

mkdir debian_root && cd debian_root

sudo debootstrap sid .

We can then chroot to it manually. Some preparations need to be done to interface the new userspace with the virtual filesystems it expects.

sudo mount -t devpts devpts debian_root/dev/pts

sudo mount -t proc proc debian_root/proc

sudo mount -t sysfs sysfs debian_root/sys

sudo chroot debian_root

That is already more cumbersome than using Docker. Docker also offers more advanced isolation using newer kernel features like namespaces and cgroups.

It is easier to import this filesystem as a Docker image.

tar -c * | docker import - minidebian:raw

Now we can start freeing up space. The problem with this is that, because Docker uses overlays, you will not get a smaller container even if you delete things. This happens because when you delete a file in an upper layer, it is only marked as deleted, so that you can go back to the original contents just by getting rid of the upper layer.

In order to get around this, we can repack everything into a single layer with

docker container create --name minidebian-container minidebian

docker export minidebian-container | docker import - minidebian:raw

When we are happy with the result, we end up with a Docker image with no metadata. All that is left is to create a basic Dockerfile in an empty directory

FROM minidebian:raw

LABEL description="Minimal Debian 8 image"

MAINTAINER Ignacio Núñez Hernanz <nacho@ownyourbits.com>

CMD ["/bin/bash"]

and then to build the final image

docker build . -t minidebian:latest

In this example, we have only instructed Docker to spawn bash if no other arguments are given.

In the next post we will create a LAMP installation on top of this small debian layer.

Source

Marketing in a Remote First Startup Part 3

First 100 Days as a Marketer in a Remote-First Cloud Services Startup III

The Learning Curve

The first round of dust settled pretty quick. The daily routine taking hold, legal to work in Germany, got the equipment, met the team. So, what’s this Kubernetes thing and how do we use it?

Positions and Titles of my Dev. Colleagues

Some of the titles of my colleagues are easy. We have the Co-Founders, CTO, UX/UI, Working Students, etc. But better understanding what my colleagues do, besides running Kubernetes infrastructure, has been a real learning curve. We have Solutions Engineers, Platform Reliability Engineers, Support Engineers, and a Developer Advocate, to mention only a few. Each developer does their thing and I do mine – and from what I've found, they know as much about what I do as I know about what they do, all the way from the daily routine to the terms and acronyms we use.

Terms and Acronyms

Admittedly, this is about the most fun part of the learning curve. Every acronym that you nail gets you closer to actually (feeling like) you know what you're talking about and, even more importantly, what (the Hell) they're talking about. Here are a few that come up when using #Slack, GitHub, VS Code, and Kubernetes.

K8s – It’s what we do day in and day out. Kubernetes.

Repository – A repository is usually used to organize a single project. Repositories can contain folders and files, images, videos, spreadsheets, and data sets – anything your project needs.

Bare Metal – A computer system running on physical hardware compared to using a virtual server provided by a cloud provider like AWS or Azure.

Commit – These are saved changes on Github. Each commit has an associated message, which is a description explaining why a particular change was made. These messages capture the history of your changes, so other contributors can understand what you’ve done and why.

AFK – Away from Keyboard. It’s to let your colleagues know that you’re not available for a period of time.

Retro – Look back at the month (timeframe dependent on the organization).

Branch – Branching is the way to work on different versions of a repository at one time.

SIG – Special Interest Group. These are groups that are used across teams to both help in a specific category (channel) or communicate to the entire team.

Platform Reliability Engineer – That’s Joe.

Pull Request (PR) – To implement a change using team-sourcing. Requesting approvals and comments from chosen team members. Once the PR is approved, it gets merged and goes live.

mühsam ernährt sich das Eichhörnchen (it’s a German startup) – Idiom: used in the English context of Slowly but Surely.

The Giant Swarm Infrastructure

Not going to go too deep into our tool. If you are interested in learning what it does, we have several tutorials online that explain how we launch and scale Kubernetes clusters. Just know that there is a learning curve with this one that is ever-changing alongside the product – a big one.

Competitors

As you may know, competitive research in online marketing is essential. I spent a long time researching competitors and running analysis based on standard practices: things like what marketing automation system they are using, how many pieces of content they post weekly/monthly, who picks up their stories in the media, how many followers they have, etc. Competition in the Open Source space, however, is very different. There are your standard competitors, which we all agree are not as good as us, but there are also so many available tools for Open Source that it is tough to say who adds value to your product compared to who is your actual competitor.

Even with publicly traded companies who are obvious competitors, our team may know a dozen people who work there and we may even invite them to speak at a meetup, as long as they offer good insight into Kubernetes as a whole. That makes it easy to be a part of the ecosystem, but very difficult to make sure you don't plug a product that is the one gunning after you.

Settled in as the Marketing Guy: Days 61 – 99 »

Source

From Cattle to K8s – Service Discovery in Rancher 2.0

Service discovery is one of the core functionalities of any container-based environment. Once you have packaged your application and launched it using containers, the next step is making it discoverable to other application containers in your environment or the external world.

In this article we will go over the service discovery support provided by Rancher 2.0 and see how the Rancher 1.6 feature set maps to the latest version.

Service Discovery in Rancher 1.6

Rancher 1.6 provided service discovery within Cattle environments. Rancher’s own DNS microservice provided the internal DNS functionality.

Rancher’s internal DNS provides the following key features:

  • Service discovery within stack and across stack

    All services in the stack are resolvable by <service_name> and by <service_name>.<stack_name> across stacks.

  • Container discovery

    All containers are resolvable globally by their name.

  • Creating a service alias name

    Adding an alias name to services and linking to other services using aliases.

  • Discovery of external services

    Pointing to services deployed outside of Rancher using the external IP(s) OR a domain name.

Service Discovery in Rancher 2.0

Rancher 2.0 uses the native Kubernetes DNS support to provide equivalent service discovery for Kubernetes workloads and pods. A Cattle user will be able to replicate all the service discovery features in Rancher 2.0 without loss of any functionality.

Similar to the Rancher 1.6 DNS microservice, Kubernetes schedules a DNS pod and service in the cluster and configures the kubelets to route all DNS lookups to this DNS service. Rancher 2.0’s Kubernetes cluster deploys skyDNS as the Kubernetes DNS service, which is a flavor of the default Kube-DNS implementation.

Service Resolution

As noted in this previous article, a Rancher 1.6 service maps to a Kubernetes workload of a certain type. A short summary on the popular types of workloads can be found here.

Kubernetes workloads are objects that specify the deployment rules for pods that are launched for the workload. Workload objects by themselves are not resolvable via DNS to other objects in the Kubernetes cluster. To lookup and access a workload, a Kubernetes Service needs to be created for the workload. Here are some details about a Kubernetes Service.

Any service created within Kubernetes gets a DNS name. The DNS A record created for the service is of the form <service_name>.<namespace_name>.svc.cluster.local. The DNS name for the service resolves to the cluster IP of the service. The cluster IP is an internal IP assigned to the service which is resolvable within the cluster.

Within the Kubernetes namespace, the service is resolvable directly by the <service_name> and outside of the namespace using <service_name>.<namespace_name>. This convention is similar to the service discovery within stack and across stack for Rancher 1.6.
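For instance, from a shell inside any pod in the cluster (the service and namespace names are illustrative), resolution works like this:

nslookup bar                          # same namespace: <service_name>
nslookup bar.other                    # across namespaces: <service_name>.<namespace_name>
nslookup bar.other.svc.cluster.local  # the fully qualified form also works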

Thus to lookup and access your application workload, a service needs to be created that gets a DNS record assigned.

Rancher simplifies this process by automatically creating a service along with the workload, using the service port and service type you select in the UI while deploying the workload, and a service name identical to the workload's name. If no port is exposed, port 42 is used. This practice makes the workload discoverable within and across namespaces by its name.

For example, as seen below, I deploy a few workloads of type Deployment in two namespaces using Rancher 2.0 UI.

[screenshot]

I can see the corresponding DNS records auto-created by Rancher for the workloads under Cluster > Project > Service Discovery tab.

[screenshot]

The workloads become accessible to any other workload within and across the namespaces as demonstrated below.

[screenshot]

Pod Resolution

Individual pods running in the Kubernetes cluster also get a DNS record assigned, which is in the form <pod_ip_address>.<namespace_name>.pod.cluster.local. For example, a pod with an IP of 10.42.2.7 in the default namespace of a cluster whose DNS domain is cluster.local would have an entry of 10-42-2-7.default.pod.cluster.local.

Pods can also be resolved using the hostname and subdomain fields if set in the pod spec. Details about this resolution are covered in the Kubernetes docs here.

Creating Alias Names for Workloads and External Services

Just as you can create an alias for Rancher 1.6 services, you can do the same for Kubernetes workloads using Rancher 2.0. Similarly you can also create DNS records pointing to externally running services using their hostname or IP address in Rancher 2.0. These DNS records are Kubernetes service objects.

Using the 2.0 UI, navigate to the Cluster > Project view and choose the Service Discovery tab. Here, all the existing DNS records created for your workloads will be listed under each namespace.

Click on Add Record to create new DNS records and view the various options supported to link to external services or to create aliases for another workload/DNS record/set of pods.

[screenshot]

One thing to note is that out of these options for creating DNS records, the following options are supported natively by Kubernetes:

  • Point to an external hostname
  • Point to a set of pods which match a selector

The remaining options are implemented by Rancher leveraging Kubernetes:

  • Point to external IP address
  • Create alias for another DNS record
  • Point to another workload

Docker Compose to Kubernetes YAML

Now let’s see what is needed if we want to migrate an application from 1.6 to 2.0 using Compose files instead of deploying it over the 2.0 UI.

As noted above, when we deploy workloads using the Rancher 2.0 UI, Rancher internally takes care of creating the necessary Kubernetes ClusterIP service for service discovery. However, if you deploy the workload via Rancher CLI or Kubectl client, what should you do to ensure that the same service discovery behavior is accomplished?

Service Discovery Within and Across Namespaces via Compose

Let's start with the following docker-compose.yml file, which shows two services (foo and bar) within a stack. Within a Cattle stack, these two services can reach each other by using their service names.

version: '2'
services:
  bar:
    image: user/testnewhostrouting
    stdin_open: true
    tty: true
    labels:
      io.rancher.container.pull_image: always
  foo:
    image: user/testnewhostrouting
    stdin_open: true
    tty: true
    labels:
      io.rancher.container.pull_image: always

What happens to service discovery if we migrate these two services to a namespace in Rancher 2.0?

We can convert this docker-compose.yml file from Rancher 1.6 to Kubernetes YAML using the Kompose tool, and then deploy the application in the Kubernetes cluster using Rancher CLI.
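The conversion itself is a single command; with the Compose file above it looks roughly like this:

kompose convert -f docker-compose.yml
# with no ports exposed, only the *-deployment.yaml files are generated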

[screenshot]

Now this conversion generates the *-deployment.yaml files, and deploying them using Rancher CLI creates the corresponding workloads within a namespace.

[screenshots]

Can these workloads reach each other within the namespace? We can exec into the shell of workload foo using Rancher 2.0 UI and see if pinging the other workload bar works.

[screenshot]

No! The reason is that we only created the workload objects of type Deployment. To make these workloads discoverable, they each need a service of type ClusterIP pointing to them that will be assigned a DNS record. The Kubernetes YAML for such a service should look like the sample below.

Note that ports is a required field. Therefore, we need to provide it using some port number, such as 42 as shown here.

apiVersion: v1
kind: Service
metadata:
  annotations:
    io.rancher.container.pull_image: always
  creationTimestamp: null
  labels:
    io.kompose.service: bar
  name: bar
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    io.kompose.service: bar

After deploying this service via CLI, service foo can successfully ping service bar!

[screenshots]

Thus if you take the Compose-to-Kubernetes-YAML route to migrate your 1.6 services to Rancher 2.0, make sure you also deploy corresponding ClusterIP services for the workloads. The same solution also applies to cross-namespace referencing of workloads.

Links/External_Links via Compose

If you are a Cattle user, you know that in Rancher 1.6 you can create a service-link/alias pointing to another service, and use that alias name in your application to discover that linked target service.

For example, consider the application below, where the web service links to the database service using the alias name mongo.

[screenshot]

Using Kompose, converting this Compose file to Kubernetes YAML generated the corresponding deployment and service YAML specs. If your services in docker-compose.yml expose ports, Kompose generates a Kubernetes ClusterIP service YAML spec by default.

[screenshot]

Deploying these using Rancher CLI generated the necessary workloads.

[screenshot]

However, the service link mongo is missing, as the Kompose conversion does not support links in the docker-compose.yml file. As a result, the workload web encounters an error, and its pods keep restarting, failing to resolve the mongo link to the database service.

[screenshots]

How do we fix the broken DNS link? The solution is to create another ClusterIP service spec and set its name to the alias name of the link in docker-compose.

[screenshot]
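A minimal sketch of such a service, assuming the database workload carries the label io.kompose.service: database, could look like this:

apiVersion: v1
kind: Service
metadata:
  name: mongo          # the alias name the web workload expects
spec:
  clusterIP: None
  ports:
  - name: default
    port: 42
    protocol: TCP
    targetPort: 42
  selector:
    io.kompose.service: database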

Deploying this service creates the necessary DNS record, and the link mongo is created, making the web workload available!

The following image shows that the pods launched for the web workload entered a Running state.

[screenshot]

Transitioning from SkyDNS to CoreDNS in the Future

As of v2.0.7, Rancher deploys skyDNS as supported by Kubernetes version 1.10.x. In Kubernetes version 1.11 and later, CoreDNS can be installed as a DNS provider. We are evaluating CoreDNS as well, and it may become available as an alternative to skyDNS in future versions of Rancher.

Conclusion

This article looked at how equivalent service discovery can be supported in Rancher 2.0 via Kubernetes DNS functionality. In the upcoming article, I plan to look at load balancing options supported by Rancher 2.0 and any limitations present in comparison to Rancher 1.6.

Prachi Damle

Prachi Damle

Principal Software Engineer

Source

Five tips to move your project to Kubernetes

Here are five tips to help you move your projects to Kubernetes, with learnings from the OpenFaaS community over the past 12 months. The following is compatible with Kubernetes 1.8.x and is being used with OpenFaaS – Serverless Functions Made Simple.

Disclaimer: the Kubernetes API is something which changes frequently and you should always refer to the official documentation for the latest information.

1. Put everything in Docker

It might sound obvious but the first step is to create a Dockerfile for every component that runs as a separate process. You may have already done this in which case you have a head-start.

If you haven't started this yet then make sure you use multi-stage builds for each component. A multi-stage build makes use of two separate Docker images for the build-time and run-time stages of your code. The base image may be the Go SDK, for example, which is used to build binaries, and the final stage will be a minimal Linux user-space like Alpine Linux. We copy the binary over into the final stage, install any packages like CA certificates and then set the entry-point. This means that your final image is smaller and won't contain unused packages.

Here’s an example of a multi-stage build in Go for the OpenFaaS API gateway component. You will also notice some other practices:

  • Uses a non-root user for runtime
  • Names the build stages such as build
  • Specifies the architecture of the build i.e. linux
  • Uses specific version tags i.e. 3.6 – if you use latest then it can lead to unexpected behaviour

FROM golang:1.9.4 as build
WORKDIR /go/src/github.com/openfaas/faas/gateway

COPY . .

RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o gateway .

FROM alpine:3.6

RUN addgroup -S app \
 && adduser -S -g app app

WORKDIR /home/app

EXPOSE 8080
ENV http_proxy ""
ENV https_proxy ""

COPY --from=build /go/src/github.com/openfaas/faas/gateway/gateway .
COPY assets assets

RUN chown -R app:app ./

USER app

CMD ["./gateway"]

Note: If you want to use OpenShift (a distribution of Kubernetes) then you need to ensure that all of your Docker images are running as a non-root user.

1.1 Get Kubernetes

You’ll need Kubernetes available on your laptop or development machine. Read my blog post on Docker for Mac which covers all the most popular options for working with Kubernetes locally.

https://blog.alexellis.io/docker-for-mac-with-kubernetes/

If you’ve worked with Docker before then you may be used to hearing about containers. In Kubernetes terminology you rarely work directly with a container, but with an abstraction called a Pod.

A Pod is a group of one-to-many containers which are scheduled and deployed together and get direct access to each other over the loopback address 127.0.0.1.

An example of where the Pod abstraction becomes useful is where you may have an existing legacy application without TLS/SSL which is deployed in a Pod along with Nginx or another web-server that is configured with TLS. The benefit is that multiple containers can be deployed together to extend functionality without having to make breaking changes.
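A hedged sketch of that pattern, with purely illustrative image names, might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-with-tls
spec:
  containers:
  - name: legacy-app              # existing app, plain HTTP on 127.0.0.1:8080
    image: example/legacy-app:1.0
  - name: tls-proxy               # nginx terminates TLS and proxies to the app over loopback
    image: nginx:1.13
    ports:
    - containerPort: 443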

2. Create YAML files

Once you have a set of Dockerfiles and images your next step is to write YAML files in the Kubernetes format which the cluster can read to deploy and maintain your project’s configuration.

These are different from Docker Compose files and can be difficult to get right at first. My advice is to find some examples in the documentation or other projects and try to follow the style and approach. The good news is that it does get easier with experience.

Every Docker image should be defined in a Deployment which specifies the containers to run and any additional resources it may need. A Deployment will create and maintain a Pod to run your code and if the Pod exits it will be restarted for you.

You will also need a Service for each component which you want to access over HTTP or TCP.

It is possible to have multiple Kubernetes definitions within a single file by separating them with --- on a new line, but prevailing opinion suggests we should spread our definitions over many YAML files – one for each API object in the cluster.

An example may be:

  • gateway-svc.yml – for the Service
  • gateway-dep.yml – for the Deployment

If all of your files are in the same directory then you can apply all the files in one step with kubectl apply -f ./yaml/ for instance.

When working with additional operating systems or architectures such as the Raspberry Pi – we find it useful to separate those definitions into a new folder such as yaml_arm.

  • Deployment example

Here is a simple example of a Deployment for NATS Streaming which is a lightweight streaming platform for distributing work:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nats
  namespace: openfaas
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nats
    spec:
      containers:
      - name: nats
        image: nats-streaming:0.6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 4222
          protocol: TCP
        - containerPort: 8222
          protocol: TCP
        command: ["/nats-streaming-server"]
        args:
        - --store
        - memory
        - --cluster_id
        - faas-cluster

A deployment can also state how many replicas or instances of the service to create at start-up time.

  • Service definition

apiVersion: v1
kind: Service
metadata:
  name: nats
  namespace: openfaas
  labels:
    app: nats
spec:
  type: ClusterIP
  ports:
  - port: 4222
    protocol: TCP
    targetPort: 4222
  selector:
    app: nats

Services provide a mechanism to balance requests between all the replicas of your Deployments. In the example above we have one replica of NATS Streaming but if we had more they would all have unique IP addresses and tracking those would be problematic. The advantage of using a Service is that it has a stable IP address and DNS entry which can be used to access one of the replicas at any time.

Services are not directly mapped to Deployments, but are mapped to labels. In the example above the Service is looking for a label of app=nats. Labels can be added or removed from Deployments (and other API objects) at runtime making it easy to redirect traffic in your cluster. This can help enable A/B testing or rolling deployments.

The best way to learn about the Kubernetes-specific YAML format is to look up an API object in the documentation where you will find examples that can be used with YAML or via kubectl.

Find out more about the various API objects here:

https://kubernetes.io/docs/concepts/

2.1 helm

Helm describes itself as a package manager for Kubernetes. From my perspective it has two primary functions:

  • To distribute your application (in a Chart)

Once you are ready to distribute your project’s YAML files you can bundle them up and submit them to the Helm repository so that other people can find your application and install it with a single command. Charts can also be versioned and can specify dependencies on other Charts.

Here are three example charts: OpenFaaS, Kafka or Minio.

  • To make editing easier

Helm supports in-line templates written in Go, which means you can move common configuration into a single file. So if you have released a new set of Docker images and need to perform some updates – you only have to do that in one place. You can also write conditional statements so that flags can be used with the helm command to turn on different features at deployment time.

This is how we define a Docker image version using regular YAML:

image: functions/gateway:0.7.5

With Helm’s templates we can do this:

image: {{ .Values.images.gateway }}

Then in a separate file we can define the value for “images.gateway”. The other thing helm allows us to do is to use conditional statements – this is useful when supporting multiple architectures or features.

This example shows how to apply either a ClusterIP or a NodePort which are two different options for exposing a service in a cluster. A NodePort exposes the service outside of the cluster so you may want to control when that happens with a flag.

If we were using regular YAML files then that would have meant maintaining two sets of configuration files.

spec:
  type: {{ .Values.serviceType }}
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    {{- if contains "NodePort" .Values.serviceType }}
    nodePort: 31112
    {{- end }}

In the example, "serviceType" refers to ClusterIP or NodePort, and then we have a conditional statement which applies a nodePort element to the YAML only when NodePort is selected.

3. Make use of ConfigMaps

In Kubernetes you can mount your configuration files into the cluster as a ConfigMap. ConfigMaps are better than “bind-mounting” because the configuration data is replicated across the cluster making it more robust. When data is bind-mounted from a host then it has to be deployed onto that host ahead of time and synchronised. Both options are much better than building config files directly into a Docker image since they are much easier to update.

A ConfigMap can be created ad-hoc via the kubectl tool or through a YAML file. Once the ConfigMap is created in the cluster it can then be attached or mounted into a container/Pod.

Here’s an example of how to define a ConfigMap for Prometheus:

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: openfaas
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
          - targets: ['localhost:9090']

You can then attach it to a Deployment or Pod:

volumeMounts:
  - mountPath: /etc/prometheus/prometheus.yml
    name: prometheus-config
    subPath: prometheus.yml
volumes:
  - name: prometheus-config
    configMap:
      name: prometheus-config
      items:
        - key: prometheus.yml
          path: prometheus.yml
          mode: 0644

See the full example here: ConfigMap and Prometheus config.

Read more in the docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

4. Use secure secrets

In order to keep your passwords, API keys and tokens safe you should make use of Kubernetes’ secrets management mechanisms.

If you're already making use of ConfigMaps then the good news is that secrets work in almost exactly the same way:

  • Define the secret in the cluster
  • Attach the secret to a Deployment/Pod via a mount (see the sketch below)
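Here is a minimal sketch of both steps, assuming a hypothetical secret called basic-auth in the openfaas namespace (the key name and mount path are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
  namespace: openfaas
type: Opaque
stringData:
  basic-auth-password: replace-me   # stored base64-encoded in the cluster

And in the Deployment/Pod spec:

volumeMounts:
  - name: basic-auth
    mountPath: /var/secrets
    readOnly: true
volumes:
  - name: basic-auth
    secret:
      secretName: basic-auth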

A different kind of secret is needed when you want to pull an image from a private Docker registry. This is called an ImagePullSecret, and you can find out more about it in the Kubernetes documentation.
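As a sketch, a Pod (or a Deployment's Pod template) references such a secret by name; regcred is a hypothetical secret that would typically be created with kubectl create secret docker-registry:

spec:
  imagePullSecrets:
    - name: regcred   # hypothetical secret holding the registry credentials
  containers:
    - name: app
      image: registry.example.com/team/app:1.0.0   # private image pulled with those credentials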

You can read more about how to create and manage secrets in the Kubernetes docs: https://kubernetes.io/docs/concepts/configuration/secret/

5. Implement health-checks

Kubernetes supports health-checks in the form of liveness and readiness checking. We need these mechanisms to make our cluster self-healing and resilient to failure. They work through a probe which either runs a command within the Pod or calls into a pre-defined HTTP endpoint.

  • Liveness checks

A liveness check shows whether the application is running. With OpenFaaS functions we create a lock file at /tmp/.lock when the function starts. If we detect an unhealthy state, we can remove this file and Kubernetes will re-schedule the function for us.

Another common pattern is to add a new HTTP route like /_/healthz. The route of /_/ is used by convention because it is unlikely to clash with existing routes for your project.
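Here is a minimal sketch of how the lock-file pattern could be declared on a container (the timings are illustrative, not OpenFaaS defaults):

livenessProbe:
  exec:
    command:
      - cat
      - /tmp/.lock    # the probe fails once the lock file has been removed
  initialDelaySeconds: 3
  periodSeconds: 10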

  • Readiness checks

If you enable a readiness check then Kubernetes will only send traffic to a container once that check has passed.

A readiness check runs on a periodic basis and is distinct from a liveness check. A container could be healthy but under too much load, in which case it can report itself as "not ready" and Kubernetes will stop sending it traffic until the condition is resolved.
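A minimal sketch of an HTTP readiness probe, assuming the /_/healthz route mentioned earlier and an illustrative port of 8080:

readinessProbe:
  httpGet:
    path: /_/healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 10
  failureThreshold: 3   # marked "not ready" after three consecutive failures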

You can read more in the docs: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

Wrapping up

In this article, we’ve listed some of the key things to do when bringing a project over to Kubernetes. These include:

  • Creating good Docker images
  • Writing good Kubernetes manifests (YAML files)
  • Using ConfigMaps to decouple tunable settings from your code
  • Using Secrets to protect sensitive data such as API keys
  • Using liveness and readiness probes to implement resiliency and self-healing

For further reading I’m including a comparison of Docker Swarm and Kubernetes and a guide for setting up a cluster fast.

Compare Kubernetes with Docker Swarm and get a good overview of the tooling, from the CLI to networking to the component parts.

If you want to try Kubernetes on a regular VM or cloud host, this is probably the quickest way to get a development cluster up and running.

Follow me on Twitter @alexellisuk for more.

Acknowledgements: Thanks to Nigel Poulton for proof-reading and reviewing the post.

Source

Kubernetes security bugs patched in Tectonic 1.7 and 1.8

Today we are issuing patches for two newly disclosed security vulnerabilities affecting all versions of Tectonic and Kubernetes versions 1.3 through 1.10. The vulnerabilities have been assigned CVE-2017-1002101 and CVE-2017-1002102, respectively.

Both bugs affect all versions of Tectonic and versions of Kubernetes from 1.3 to 1.10 that use Pod Security Policies (PSPs). The bugs can be used to bypass a PSP. If you aren’t using PSPs, you don’t have anything to worry about.

The first vulnerability involves the subPath parameter, which is commonly used to reference a volume multiple times within a Pod. A successful exploit can allow an attacker to access unauthorized files on a Pod with any kind of volume mount, including files on the host.

The second bug relates to mounting ConfigMaps and Secrets as volumes within a Pod. Maliciously crafted Pods can trigger deletion of any file or directory on the host.

To address these vulnerabilities, today we’re releasing two new versions of Tectonic:

  • Tectonic 1.7.14-tectonic.1 to our 1.7 production and preproduction channels
  • Tectonic 1.8.9-tectonic.1 to our 1.8 production and preproduction channels

Apply the update to your clusters by clicking “Check for Update” in your cluster settings.

In addition, this bug impacts the kubelet, which is managed at the infrastructure level. New clusters will install the patched kubelet version. If you have enabled PSPs on an existing cluster (which are not on by default), you will need to update your autoscaling group user-data or provisioning tool to install the 1.7.14 or 1.8.9 version of the kubelet, or update it manually.

All current Tectonic customers will receive an email alert about the bugs and the need to update. More information on the Tectonic update process and how to use the two channels can be found in our documentation.

Source

Kubernetes 1.12: Kubelet TLS Bootstrap and Azure Virtual Machine Scale Sets (VMSS) Move to General Availability

Author: The 1.12 Release Team

We’re pleased to announce the delivery of Kubernetes 1.12, our third release of 2018!

Today's release continues to focus on internal improvements and graduating features to stable in Kubernetes. This newest version graduates key features in security and Azure support. Notable additions in this release include two highly-anticipated features graduating to general availability: Kubelet TLS Bootstrap and support for Azure Virtual Machine Scale Sets (VMSS).

These new features mean increased security, availability, resiliency, and ease of use to get production applications to market faster. The release also signifies the increasing maturation and sophistication of Kubernetes on the developer side.

Let’s dive into the key features of this release:

Introducing General Availability of Kubelet TLS Bootstrap

We're excited to announce General Availability (GA) of Kubelet TLS Bootstrap. In Kubernetes 1.4, we introduced an API for requesting certificates from a cluster-level Certificate Authority (CA). The original intent of this API was to enable provisioning of TLS client certificates for kubelets. This feature allows a kubelet to bootstrap itself into a TLS-secured cluster. Most importantly, it automates the provisioning and distribution of signed certificates.

Before, when a kubelet ran for the first time, it had to be given client credentials in an out-of-band process during cluster startup. The burden was on the operator to provision these credentials. Because this task was so onerous to manually execute and complex to automate, many operators deployed clusters with a single credential and single identity for all kubelets. These setups prevented deployment of node lockdown features like the Node authorizer and the NodeRestriction admission controller.

To alleviate this, SIG Auth introduced a way for kubelet to generate a private key and a CSR for submission to a cluster-level certificate signing process. The v1 (GA) designation indicates production hardening and readiness, and comes with the guarantee of long-term backwards compatibility.

Alongside this, Kubelet server certificate bootstrap and rotation is moving to beta. Currently, when a kubelet first starts, it generates a self-signed certificate/key pair that is used for accepting incoming TLS connections. This feature introduces a process for generating a key locally and then issuing a Certificate Signing Request to the cluster API server to get an associated certificate signed by the cluster’s root certificate authority. Also, as certificates approach expiration, the same mechanism will be used to request an updated certificate.

Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler is Now Stable

Azure Virtual Machine Scale Sets (VMSS) allow you to create and manage a homogenous VM pool that can automatically increase or decrease based on demand or a set schedule. This enables you to easily manage, scale, and load balance multiple VMs to provide high availability and application resiliency, ideal for large-scale applications that can run as Kubernetes workloads.

With this new stable feature, Kubernetes supports the scaling of containerized applications with Azure VMSS, including the ability to integrate it with cluster-autoscaler to automatically adjust the size of the Kubernetes clusters based on the same conditions.

Additional Notable Feature Updates

RuntimeClass, a new cluster-scoped resource that surfaces container runtime properties to the control plane, is being released as an alpha feature.

Snapshot/restore functionality for Kubernetes and CSI is being introduced as an alpha feature. This provides a standardized API design (CRDs) and adds PV snapshot/restore support for CSI volume drivers.

Topology aware dynamic provisioning is now in beta, meaning storage resources can now understand where they live. This also includes beta support for AWS EBS and GCE PD.

Configurable pod process namespace sharing is moving to beta, meaning users can configure containers within a pod to share a common PID namespace by setting an option in the PodSpec.
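As a minimal sketch of that option (the Pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-example
spec:
  shareProcessNamespace: true   # all containers in this Pod share one PID namespace
  containers:
    - name: app
      image: nginx:1.15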

Taint node by condition is now in beta, meaning users have the ability to represent node conditions that block scheduling by using taints.

Arbitrary / Custom Metrics in the Horizontal Pod Autoscaler is moving to a second beta to test some additional feature enhancements. This reworked Horizontal Pod Autoscaler functionality includes support for custom metrics and status conditions.
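As a sketch of what the reworked API can look like, here is an HPA scaling a Deployment on a custom per-Pod metric (the metric name, target value, and workload name are assumptions, and such a metric has to be served by a custom metrics adapter):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: requests_per_second   # assumed custom metric exposed per Pod
        target:
          type: AverageValue
          averageValue: "100"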

Improvements that will allow the Horizontal Pod Autoscaler to reach proper size faster are moving to beta.

Vertical Scaling of Pods is now in beta, which makes it possible to vary the resource limits on a pod over its lifetime. In particular, this is valuable for pets (i.e., pods that are very costly to destroy and re-create).

Encryption at rest via KMS is now in beta. This adds multiple encryption providers, including Google Cloud KMS, Azure Key Vault, AWS KMS, and HashiCorp Vault, that will encrypt data as it is stored to etcd.

Availability

Kubernetes 1.12 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials. You can also install 1.12 using Kubeadm.

5 Day Features Blog Series

If you’re interested in exploring these features more in depth, check back next week for our 5 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Kubelet TLS Bootstrap
  • Day 2 – Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
  • Day 3 – Snapshots Functionality
  • Day 4 – RuntimeClass
  • Day 5 – Topology Resources

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Tim Pepper, Orchestration & Containers Lead at the VMware Open Source Technology Center. The 36 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem. Kubernetes has over 22,000 individual contributors to date and an active community of more than 45,000 people.

Project Velocity

The CNCF has continued refining DevStats, an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. On average, 259 different companies and over 1,400 individuals contribute to Kubernetes each month. Check out DevStats to learn more about the overall velocity of the Kubernetes project and community.

User Highlights

Established, global organizations are using Kubernetes in production at massive scale, as shown in recently published user stories from the community.

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • CNCF recently released the findings of their bi-annual CNCF survey, finding that the use of cloud native technologies in production has grown over 200% within the last six months.
  • CNCF expanded its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. More information can be found here.
  • CNCF added a new partner category, Kubernetes Training Partners (KTP). KTPs are a tier of vetted training providers who have deep experience in cloud native technology training. View partners and learn more here.
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.

KubeCon

The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Shanghai from November 13-15, 2018 and Seattle from December 10-13, 2018. This conference will feature technical sessions, case studies, developer deep dives, salons and more! Register today!

Webinar

Join members of the Kubernetes 1.12 release team on November 6th at 10am PDT to learn about the major features in this release. Register here.

Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

  • Post questions (or answer questions) on Stack Overflow
  • Join the community portal for advocates on K8sPort
  • Follow us on Twitter @Kubernetesio for latest updates
  • Chat with the community on Slack
  • Share your Kubernetes story

Source