Introducing Docker Desktop Enterprise – Docker Blog

Nearly 1.4 million developers use Docker Desktop every single day because it is the simplest and easiest way to do container-based development. Docker Desktop provides the Docker Engine with Swarm and Kubernetes orchestrators right on the desktop, all from a single install. While this is great for individual users, in enterprise environments administrators often want to automate the Docker Desktop installation, ensure that everyone on the development team has the same configuration, enforce enterprise requirements, and have applications built to architectural standards.

Docker Desktop Enterprise is a new desktop offering that is the easiest, fastest and most secure way to create and deliver production-ready containerized applications. Developers can work with frameworks and languages of their choice, while IT can securely configure, deploy and manage development environments that align to corporate standards and practices. This enables organizations to rapidly deliver containerized applications from development to production.

Enterprise Manageability That Helps Accelerate Time-to-Production

Docker Desktop Enterprise provides a secure way to configure, deploy and manage developer environments while enforcing safe development standards that align to corporate policies and practices. IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production.

Key new features for IT:

  • Packaged as standard MSI (Win) and PKG (Mac) distribution files that work with existing endpoint management tools, with lockable settings via policy files
  • Present developers with customized and approved application templates, ready for coding

Enterprise Deployment & Configuration Packaging

Docker Desktop Enterprise enables IT desktop admins to deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. No manual intervention or extra configuration from developers is required and desktop administrators can enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience.

Application Templates

Application architects can provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs. Together, application teams and IT can implement consistent security and development practices across the entire software supply chain, from the developers’ desktops all the way to production.

Increase Developer Productivity and Ship Production-ready Containerized Applications

For developers, Docker Desktop Enterprise is the easiest and fastest way to build production-ready containerized applications working with frameworks and languages of choice and targeting every platform. Developers can rapidly innovate by leveraging company-provided application templates that instantly replicate production-approved application configurations on the local desktop.

Key new features for developers:

  • Configurable version packs instantly replicate production environment configurations on the local desktop
  • Application Designer interface allows for template-based workflows for creating containerized applications – no Docker CLI commands are required to get started

Configurable Version Packs

Docker Desktop has Docker Engine and Kubernetes built in. With the addition of swappable version packs, you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Application Designer

The Application Designer is a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards. And even if you’ve never launched a container before, the Application Designer interface provides the foundational container artifacts and your organization’s skeleton code, getting you started with containers in minutes. Plus, Docker Desktop Enterprise integrates with your choice of development tools, whether you prefer an IDE or a text editor and command line interfaces.

The Docker Desktop Products

Docker Desktop Enterprise is a new addition to our desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise:

Desktop Comparison Table

To learn more about Docker Desktop Enterprise:

  • Sign up to learn more about Docker Desktop Enterprise as we approach general availability
  • Watch the livestreams of the DockerCon EU keynotes, Tuesday from 09:00–11:00 CET and Wednesday from 09:30–11:00 CET. (Replays will also be available)
  • Download Docker Desktop Community and build your first containerized application in minutes [ Windows | macOS ]

Source

The Kubernetes Cluster API – Heptio

I’ve been working with Kubernetes since filing my first commit in October 2016. I’ve had the chance to collaborate with the community on Kops, Kubicorn, and Kubeadm, but there’s one gap that has been nagging me for years: how to create the right abstraction for bringing up a Kubernetes cluster and managing it once it’s online. As it turned out, I wasn’t alone. So begins the story of Cluster API.

In 2017 I spent an afternoon enjoying lunch at the Google office in Seattle’s Fremont neighborhood, meeting with Robert Bailey and Weston Hutchins. We had connected via open source and shared a few similar ideas about declarative infrastructure built on new primitives in Kubernetes. Robert Bailey and Jacob Beacham began to spearhead the charge from Google, and we slowly began formalizing an effort to create a system for bootstrapping and managing a Kubernetes cluster in a declarative way. I remember the grassroots nature of the project: Google began work on evangelizing these ideas within the Kubernetes community.

Following an email to the Kubernetes SIG Cluster Lifecycle mailing list, the Cluster API working group was born. The group rapidly discovered other projects, such as archon and work from Loodse, that had similar ideas. It was clear we were all thinking of a brighter future with declarative infrastructure.

We started brainstorming what a declarative Kubernetes cluster might look like. We each consulted Kubernetes “elders” at our respective companies. The engineers at Google consulted Tim Hockin while I talked this over with Joe Beda, co-founder of Kubernetes and my long-time colleague. Tim suggested we start building tooling and playing with abstractions to get a feel for what would work or not. During this “sandbox” stage, we started prototyping in the kube-deploy repository. We needed a place to start hacking on the code, and because this feature had originally been called out as intentionally out of scope, finding a home was challenging. Later we were able to move out of the kube-deploy repository to cluster-api, which is where the code lives today.

Now “Cluster API,” which is short for “Cluster Management API,” is a great example of a bad name. As Tim St. Clair (Heptio) suggested, a better name for this layer of software is probably “cluster framework”. The community is still figuring out how we plan on dealing with this conundrum.

One of the first decisions the working group made was in regard to the scope of the API itself. In other words, what would our new abstraction be responsible for representing, and what would it intentionally ignore? We landed on two primary new resources: Cluster and Machine. The Cluster resource was intended to map cleanly to the official Kubernetes bootstrap tool kubeadm, and the Machine resource was intended to be a simple representation of some compute load in a cloud (EC2 instances, Google virtual machines, etc.) or a physical machine. We explicitly decided to keep the new Machine resource separate from the existing Node resource, as we could merge the two later if necessary. According to Jacob Beacham, “the biggest motivation being that this was the only way to build the functionality outside of core, since the Node API is already defined in core.” Each Cluster resource would be mapped to a set of Machine resources, and ultimately all of these combined would represent a single Kubernetes cluster.

We later looked at implementing higher level resources to manage the Machine resource, in the same way deployments and ReplicaSets manage pods in Kubernetes today. Following this logic we modeled MachineDeployment after Deployment and MachineSet after ReplicaSet. These would allow unique strategy implementations for how various controllers would manage scaling and mutating underlying Machines.

We also decided early on that “how” a controller reconciles one of these resources is unique to the controller. In other words, the API should, by design, never include bash commands or any logic that suggests “how” to bring a cluster up, only “what” the cluster should look like after it’s been stood up. Where and how the controller reasons about what it needs to do is in scope for the controller and out of scope for the API. For example, Cluster API would define what version of Kubernetes to install, but would never define how to install that version.
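To make this concrete, here is a rough sketch of what a Cluster and a Machine manifest might have looked like at the time. The API group, version, and exact field names were still in flux during the alpha period, so treat the details below as illustrative rather than authoritative:

$ cat <<EOF | kubectl apply -f -
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
    services:
      cidrBlocks: ["10.96.0.0/12"]
---
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-cluster-controlplane-0
spec:
  versions:
    kubelet: "1.12.3"   # the "what": a desired version, never instructions for installing it
  providerSpec: {}      # provider-specific details (instance type, image, region) live here
EOF

Note that nothing in these objects describes how to build the cluster; a provider-specific controller watches them and reconciles real infrastructure to match.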

With Cluster API, we hope to solve many of the technical concerns in managing Kubernetes clusters by drawing upon the lessons we’ve learned from kubeadm, Kops, Kubicorn, Kube-up, Gardener, and others. So we set out to build Cluster API with the following goals in mind:

Facilitate Atomic Transactions

While keeping the spirit of planning for failure and mitigating hazards in our software, we knew we wanted to build software that would make it possible to guarantee a successful infrastructure mutation or no mutation at all. We learned this from Kops, where a cluster upgrade or create would fail partway through and we would orphan costly infrastructure in a cloud account.

Enabling Cluster Automation

With Cluster API, cluster-level configuration is declared through a common API, making it easy to automate and build tooling that interfaces with the new API. Tools like the cluster autoscaler are now liberated from having to concern themselves with how a node is created or destroyed. This simplifies the tooling and enables new tooling to be crafted around updating the cluster definition based on arbitrary business needs. This will change how operators think about managing a cluster.

Keep infrastructure resilient

Kops, Kubicorn, and Kube-up all have a fatal flaw: they run only for a finite amount of time. They all have some concept of accomplishing a task and then terminating the program. This was a good starting point, but it didn’t offer the goal-seeking and resilient behavior users are used to with Kubernetes. We needed a controller to reconcile state over time. If a machine goes down, we don’t want to have to worry about bringing it back up.

Create a better user experience

Standing up a Kubernetes cluster is hard. Period. We hoped to build tooling that would go from 0 to Kubernetes in a friendly way, that made sense to operators. Furthermore, we hoped the API abstractions we created would also resonate with engineers so we could encourage them to build tooling around these new abstractions. For example, if the abstraction was user-friendly, we could port the upstream autoscaler over to using the new abstraction so it no longer had to concern itself with implementation — simply updating a record in Kubernetes.

Provide a solution for cluster upgrades

We wanted a turnkey solution to upgrades. Upgrading a Kubernetes cluster is tedious and risky, and having a controller that remains in place not only solved the implementation of how to upgrade a Kubernetes cluster, it also gave us visibility into the state of the current upgrade.

Bring the community together

As it stands every Kubernetes installer to date represents a cluster in a different way, and the user experience is fragmented. This diminishes Kubernetes adoption and frankly pisses users off. We hoped to reduce this fragmentation and provide a solution to defining “what” a cluster looks like, and provide tooling to jumpstart implementations that solve “how” to bring a cluster to life.

All of these lessons and more are starting to sing in the Cluster API repositories. We are on the verge of alpha and beta releases for clouds like AWS and GCP. We hope that the community-driven API becomes a standard teams can count on, and we hope that the community can start to offer an arsenal of controller implementations that bring these various flavors of clusters to life.

Going Further

Learn more about the Cluster API from Kris Nova (Heptio) and Loc Nguyen (VMware) live at KubeCon 2018 during their presentation on the topic. The talk will be recorded in case you can’t make it. We will upload the video to our advocacy site as soon as we can.

Source

Production-Ready Kubernetes Cluster Creation with kubeadm

Authors: Lucas Käldström (CNCF Ambassador) and Luc Perkins (CNCF Developer Advocate)

kubeadm is a tool that enables Kubernetes administrators to quickly and easily bootstrap minimum viable clusters that are fully compliant with Certified Kubernetes guidelines. It’s been under active development by SIG Cluster Lifecycle since 2016 and we’re excited to announce that it has now graduated from beta to stable and generally available (GA)!

This GA release of kubeadm is an important event in the progression of the Kubernetes ecosystem, bringing stability to an area where stability is paramount.

The goal of kubeadm is to provide a foundational implementation for Kubernetes cluster setup and administration. kubeadm ships with best-practice defaults but can also be customized to support other ecosystem requirements or vendor-specific approaches. kubeadm is designed to be easy to integrate into larger deployment systems and tools.

The scope of kubeadm

kubeadm is focused on bootstrapping Kubernetes clusters on existing infrastructure and performing an essential set of maintenance tasks. The core of the kubeadm interface is quite simple: new control plane nodes are created by running kubeadm init and worker nodes are joined to the control plane by running kubeadm join. Also included are utilities for managing already bootstrapped clusters, such as control plane upgrades and token and certificate renewal.
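As a minimal sketch of that flow (the endpoint, token, and CA hash below are placeholders for the values that kubeadm init prints on the control plane node):

# on the control plane node
$ kubeadm init

# on each worker node, using the values printed by kubeadm init
$ kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>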

To keep kubeadm lean, focused, and vendor/infrastructure agnostic, the following tasks are out of its scope:

  • Infrastructure provisioning
  • Third-party networking
  • Non-critical add-ons, e.g. for monitoring, logging, and visualization
  • Specific cloud provider integrations

Infrastructure provisioning, for example, is left to other SIG Cluster Lifecycle projects, such as the Cluster API. Instead, kubeadm covers only the common denominator in every Kubernetes cluster: the control plane. The user may install their preferred networking solution and other add-ons on top of Kubernetes after cluster creation.

What kubeadm’s GA release means

General Availability means different things for different projects. For kubeadm, going GA means not only that the process of creating a conformant Kubernetes cluster is now stable, but also that kubeadm is flexible enough to support a wide variety of deployment options.

We now consider kubeadm to have achieved GA-level maturity in each of these important domains:

  • Stable command-line UX — The kubeadm CLI conforms to GA rule #5a of the Kubernetes Deprecation Policy, which states that a command or flag that exists in a GA version must be kept for at least 12 months after deprecation.
  • Stable underlying implementation — kubeadm now creates a new Kubernetes cluster using methods that shouldn’t change any time soon. The control plane, for example, is run as a set of static Pods, bootstrap tokens are used for the kubeadm join flow, and ComponentConfig is used for configuring the kubelet.
  • Configuration file schema — With the new v1beta1 API version, you can now tune almost every part of the cluster declaratively and thus build a “GitOps” flow around kubeadm-built clusters. In future versions, we plan to graduate the API to version v1 with minimal changes (and perhaps none).
  • The “toolbox” interface of kubeadm — Also known as phases. If you don’t want to perform all kubeadm init tasks, you can instead apply more fine-grained actions using the kubeadm init phase command (for example, generating certificates or control plane static Pod manifests); a sketch follows this list.
  • Upgrades between minor versions — The kubeadm upgrade command is now fully GA. It handles control plane upgrades for you, which includes upgrades to etcd, the API Server, the Controller Manager, and the Scheduler. You can seamlessly upgrade your cluster between minor or patch versions (e.g. v1.12.2 -> v1.13.1 or v1.13.1 -> v1.13.3).
  • etcd setup — etcd is now set up in a way that is secure by default, with TLS communication everywhere, and allows for expanding to a highly available cluster when needed.
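As a rough sketch of the configuration-file, phases, and upgrade workflows called out above (the field values are illustrative; consult the v1beta1 API reference for the full schema):

$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: "192.168.0.0/16"

$ kubeadm init --config kubeadm-config.yaml                    # declarative, repeatable cluster creation
$ kubeadm init phase certs all --config kubeadm-config.yaml    # run just one phase of init
$ kubeadm upgrade plan                                          # show the versions you can upgrade to
$ kubeadm upgrade apply v1.13.1                                 # upgrade the control plane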

Who will benefit from a stable kubeadm

SIG Cluster Lifecycle has identified a handful of likely kubeadm user profiles, although we expect that kubeadm at GA can satisfy many other scenarios as well.

Here’s our list:

  • You’re a new user who wants to take Kubernetes for a spin. kubeadm is the fastest way to get up and running on Linux machines. If you’re using Minikube on a Mac or Windows workstation, you’re actually already running kubeadm inside the Minikube VM!
  • You’re a system administrator responsible for setting up Kubernetes on bare metal machines and you want to quickly create Kubernetes clusters that are secure and in conformance with best practices but also highly configurable.
  • You’re a cloud provider who wants to add a Kubernetes offering to your suite of cloud services. kubeadm is the go-to tool for creating clusters at a low level.
  • You’re an organization that requires highly customized Kubernetes clusters. Existing public cloud offerings like Amazon EKS and Google Kubernetes Engine won’t cut it for you; you need customized Kubernetes clusters tailored to your hardware, security, policy, and other needs.
  • You’re creating a higher-level cluster creation tool than kubeadm, building the cluster experience from the ground up, but you don’t want to reinvent the wheel. You can “rebase” on top of kubeadm and utilize the common bootstrapping tools kubeadm provides for you. Several community tools have adopted kubeadm, and it’s a perfect match for Cluster API implementations.

All these users can benefit from kubeadm graduating to a stable GA state.

kubeadm survey

Although kubeadm is GA, the SIG Cluster Lifecycle will continue to be committed to improving the user experience in managing Kubernetes clusters. We’re launching a survey to collect community feedback about kubeadm for the sake of future improvement.

The survey is available at https://bit.ly/2FPfRiZ. Your participation would be highly valued!

This release wouldn’t have been possible without the help of the great people that have been contributing to the SIG. SIG Cluster Lifecycle would like to thank a few key kubeadm contributors:

We also want to thank all the companies making it possible for their developers to work on Kubernetes, and all the other people that have contributed in various ways towards making kubeadm as stable as it is today!

About the authors

Lucas Käldström

  • kubeadm subproject owner and SIG Cluster Lifecycle co-chair
  • Kubernetes upstream contractor, last two years contracting for Weaveworks
  • CNCF Ambassador
  • GitHub: luxas

Luc Perkins

  • CNCF Developer Advocate
  • Kubernetes SIG Docs contributor and SIG Docs tooling WG chair
  • GitHub: lucperkins

Source

First impressions with DigitalOcean’s Kubernetes Engine

In this guide I’ll set up a Kubernetes cluster with DigitalOcean’s new Kubernetes Engine using CLI tooling and then work out the cost of the cluster running a Cloud Native workload – OpenFaaS. OpenFaaS brings portable Serverless Functions to Kubernetes for any programming language.

Kubernetes is everywhere

Since James Governor from RedMonk declared that Kubernetes had won the container orchestration battle, we’ve seen cloud and service providers scramble to ship their own managed Kubernetes services – to win mindshare and to get their share of the pie.

Kubernetes won – so now what? https://t.co/JeZwBEWNHy

— RedMonk (@redmonk) May 25, 2018

One of the earliest and most complete Kubernetes services is probably Google Kubernetes Engine (GKE), followed by a number of newer offerings like:

Kubernetes services coming soon:

Kubernetes engines

The point of a managed Kubernetes service or engine, as I see it, is to abstract away the management of servers and the day-to-day running of the cluster, such as requesting, joining and configuring nodes. In some senses, Kubernetes delivered as a Service is a type of serverless, and each product gives a varying level of control and visibility over the nodes provisioned.

For instance, with VMware’s Cloud PKS you have no way to even see the server inventory, and the cluster is sized dynamically based upon usage. Spotinst released a product called Ocean last week which also focuses on hiding the servers backing your workloads. The Azure team at Microsoft is combining its Azure Container Instances (ACI) with Virtual Kubelet to provide a “node-less” experience.

The areas on which I would score a Kubernetes engine are:

  • Ease of use – what is the installation experience, bootstrap time and tooling like?
  • Reliability – does it work and stand up to testing?
  • Documentation – is it clear what to do when things go wrong?
  • Management interface – are a CLI and UI available, can I automate them?
  • Effective cost – is this cost effective and good value for money?

Anything else I count as a bonus such as node auto-scaling and native LoadBalancer support within Kubernetes.

Certification is also important for running in production, but if the above points are covered some divergence may be acceptable.

Get Kubernetes

At the time of writing, the Kubernetes support is by invitation only and is in beta. We saw a similar pattern with other Kubernetes services, so this seems to be normal.

Note: please bear in mind that this post is looking at a pre-release beta product. Some details may change between now and GA, including the CLI, which is an early version.

Use the UI

To provision the cluster we can pick between the CLI and the familiar UI dashboard, which gains a Kubernetes tab.

DigitalOcean Console

Here you can see three clusters I provisioned for testing OpenFaaS with my team at VMware and for a demo at Serverless Computing London.

From here we can create a new cluster using the UI.

Region

Initially you must pick a Region and a name for the cluster.

Node pool

Then pick the price you want to pay per month by configuring one or many Kubernetes node pools. The suggestion is 3 nodes at 5 USD/month, working out at 15 USD per month. This may be suitable for a simple workload, but as anyone who has used Kubernetes for real work will know, 1GB of RAM is not enough to be productive.

2GB of RAM with 1 vCPU costs 10 USD per month (3×10 = 30 USD) and 4GB of RAM with 2 vCPUs comes in at 20 USD per month (3×20 = 60 USD). This is probably the minimum cost you want to go with to run a serious application.

Each node gets a public IPv4 IP address, so an IngressController could be run on each node then load-balanced via DNS for free, or we could opt to use a DO load-balancer (not a Kubernetes-native one) at an additional fee.

Effective cost: 3/5

It is possible to create multiple node pools, so if you have lighter workloads you could assign them to cheaper machines.

As well as Standard nodes we can pick from Optimized nodes (best in class) and Flexible nodes. For an Optimized node with best-in-class 2 vCPUs and 4GB of RAM you’ll be set back 80 USD (2×40). This seems to compare favourably with the other services mentioned, but could be off-putting to newcomers. It also doesn’t seem like there is a charge for the master node.

Within a minute or two of hitting the blue button we can already download the .kube/config file from the UI and connect to our cluster to deploy code.

The Kubernetes service from @digitalocean is looking to be one of the easiest and quickest I’ve used so far. Looking forward to writing some guides on this and seeing what you all make of it too. pic.twitter.com/0FsMIzEzwF

— Alex Ellis (@alexellisuk) November 16, 2018

Ease of use for the UI: 5/5

Use the CLI

Up to now the doctl CLI could be used to do most of what you could do in the console – apart from provisioning Kubernetes clusters.

The @DigitalOcean CLI (doctl) now supports their managed Kubernetes Service -> https://t.co/r2muCj8EHz 🎉👨‍💻💻👩‍💻

— Alex Ellis (@alexellisuk) December 3, 2018

Well, that all changed today: in v1.12.0 you can turn on an experimental flag and manage those clusters.

The typical flow for using the doctl CLI involves downloading a static binary, unpacking it yourself, placing it in the right place, logging into the UI to generate an access token, and then running doctl auth init. Google Cloud does this better by opening a web browser to get an access token over OAuth.
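For reference, the whole flow looks roughly like this; the release URL and version are illustrative, and the access token comes from the API section of the DigitalOcean control panel:

$ curl -sL https://github.com/digitalocean/doctl/releases/download/v1.12.0/doctl-1.12.0-linux-amd64.tar.gz | tar xz
$ sudo mv doctl /usr/local/bin/
$ doctl auth init    # paste the access token generated in the UI when prompted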

Create the cluster with the following command:

Usage:
doctl kubernetes create [flags]

Aliases:
create, c

Flags:
--name string cluster name (required)
--node-pools value cluster node pools in the form "name=your-name;size=droplet_size;count=5;tag=tag1;tag=tag2" (required) (default [])
--region string cluster region location, example value: nyc1 (required)
--tag-names value cluster tags (default [])
--version string cluster version (required)

Global Flags:
-t, --access-token string API V2 Access Token
-u, --api-url string Override default API V2 endpoint
-c, --config string config file (default is $HOME/.config/doctl/config.yaml)
--context string authentication context name
-o, --output string output format [text|json] (default "text")
--trace trace api access
-v, --verbose verbose output

The --node-pools flag may be better split out into multiple individual flags rather than separated with semicolons. When it came to picking the slug size I found that a bit cryptic too. The only ways I could find the slugs listed were through the API and a release announcement, which may or may not be the most recent. This could be improved for GA.

I also had to enter --version but didn’t know what format the string should be in. I copied the exact value from the UI. This is early days for the team so I would expect this to improve before GA.

$ doctl kubernetes create --name ae-openfaas \
    --node-pools="name=main;size=s-2vcpu-4gb;count=3;tag=tutorial" \
    --region="lon1" \
    --tag-names="tutorial" \
    --version="1.11.1-do.2"

ID Name Region Version Status Endpoint IPv4 Cluster Subnet Service Subnet Tags Created At Updated At Node Pools
82f33a60-02af-4e94-a550-dd5afd06cf0e ae-openfaas lon1 1.11.1-do.2 provisioning 10.244.0.0/16 10.245.0.0/16 tutorial,k8s,k8s:82f33a60-02af-4e94-a550-dd5afd06cf0e 2018-12-03 19:38:27 +0000 UTC 2018-12-03 19:38:27 +0000 UTC main

The result is asynchronous and non-blocking, so now we need to poll the CLI to check for completion.

We can type in doctl kubernetes list or doctl kubernetes get ae-openfaas.

Once we see the Running state, type doctl kubernetes kubeconfig ae-openfaas and save the contents into a file.

In the future I’d like to see this config merged optionally into .kube/config like we see with Minikube or VMware’s Cloud PKS CLI.

$ doctl kubernetes kubeconfig ae-openfaas > config
$ export KUBECONFIG=config
$ kubectl get node
NAME STATUS ROLES AGE VERSION
kind-knuth-3o9d Ready <none> 32s v1.11.1
kind-knuth-3o9i Ready <none> 1m v1.11.1
kind-knuth-3o9v Ready <none> 1m v1.11.1

We now have a config on the local computer and we’re good to go!

Ease of use for the CLI: 3/5

Total ease of use: 4/5

I would give GKE’s management interface 5/5; DO would have got 2/5 without the CLI. Now that the CLI is available and we can use it to automate clusters, I think this goes up to 3/5, leaving room to grow.

Deploying a workload

In order to figure out a Reliability score we need to deploy an application and run it for some time. I deployed OpenFaaS live during a demo at Serverless Computing London and ran the Colorizer from the Function Store – this was on one of the cheaper nodes so ran slower than I would have expected, but was feature complete on a budget.

I then set up OpenFaaS Cloud with GitLab – this is a more demanding task as it means running a container builder and responding to events from GitLab to clone, build and deploy new OpenFaaS functions to the cluster. Again this held up really well using 3x4Gb 2vCPU nodes with no noticeable slow-down.

Reliability 4/5

You can deploy OpenFaaS with helm here: OpenFaaS helm chart

In this Tweet from my talk at goto Copenhagen you can see the conceptual architecture for OpenFaaS on Kubernetes which bundles a Serverless Function CRD & Operator along with NATS Streaming for asynchronous execution and Prometheus for built-in metrics and auto-scaling.

Serverless beyond the hype by @alexellisuk. Donating to @Bornecancerfond in the live demo 💰💸 #serverless pic.twitter.com/n1rzcqRByd

— Martin Jensen (@mrjensens) November 19, 2018

Steps:

  • Install helm and tiller.
  • Create the OpenFaaS namespaces.
  • Generate a password for the API gateway and install OpenFaaS via the helm chart, passing the option --set serviceType=LoadBalancer (see the sketch after this list).
  • Install the OpenFaaS CLI and log into the API gateway.
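Condensed from the OpenFaaS helm chart documentation at the time, those steps look roughly like this; chart values may have changed since, so treat it as a sketch:

$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
$ helm repo add openfaas https://openfaas.github.io/faas-netes/ && helm repo update
$ PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d ' ' -f 1)
$ kubectl -n openfaas create secret generic basic-auth \
    --from-literal=basic-auth-user=admin \
    --from-literal=basic-auth-password="$PASSWORD"
$ helm upgrade openfaas openfaas/openfaas --install \
    --namespace openfaas \
    --set basic_auth=true \
    --set functionNamespace=openfaas-fn \
    --set serviceType=LoadBalancer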

This is the point at which a “normal” Kubernetes engine would give us a LoadBalancer in the openfaas namespace. We would then query its state to find a public IP address. DigitalOcean gets full marks here because it will respond to this event and provision a load balancer which is around 10 USD / month – cheaper than a traditional cloud provider.

Type in the following and look for a public IP for gateway-external in the EXTERNAL-IP field.

$ kubectl get svc -n openfaas
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager ClusterIP 10.245.45.200 <none> 9093/TCP 1m
gateway ClusterIP 10.245.83.10 <none> 8080/TCP 1m
gateway-external LoadBalancer 10.245.16.17 188.166.136.202 8080:32468/TCP 1m
nats ClusterIP 10.245.208.70 <none> 4222/TCP 1m
prometheus ClusterIP 10.245.105.120 <none> 9090/TCP 1m

Now set your OPENFAAS_URL as follows: export OPENFAAS_URL=http://188.166.136.202:8080 and do the login step:

$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Note: you can also add the export command to your ~/.bash_profile file to have it set automatically on every session.

You’ll now be able to open the UI using the address above or deploy a function using the CLI.

Deploy certinfo which can check a TLS certificate for a domain:

$ faas-cli store deploy certinfo

Check the status:

$ faas-cli list -v
$ faas-cli describe certinfo

Invoke the function and check a TLS certificate:

$ echo -n www.openfaas.com | faas-cli invoke certinfo

For me this seemed to take less than 10 seconds from deployment to getting a successful response from the function.

We can even scale the function to zero and see it come back to life:

$ kubectl scale deploy/certinfo -n openfaas-fn --replicas=0

Give it a few seconds for the function to be torn-down, then invoke it again. You’ll see it block – scale up and then serve the request:

$ echo -n cli.openfaas.com | faas-cli invoke certinfo

The function itself will take 1-2 seconds to execute since it works with a remote website. You can try it out with one of your own functions or find out how to enable the OpenFaaS idler component by reading the docs.

Earlier in the year, in August, Richard Gee wrote up a guide and supporting automation with Ansible to create a single-node development cluster with Kubernetes and OpenFaaS. What I like about the new service is that we can now get the same result, but better, with just a few CLI commands.

TLS with LetsEncrypt

This is the point at which I’d usually tell you to follow the instructions for LetsEncrypt and cert-manager to setup HTTPS/TLS for your OpenFaaS gateway. I’m not going to need to go there because DigitalOcean can automate all of this for us if we let them take control of our domain.

The user experience for cert-manager is, in my opinion, firmly set at expert level, so this kind of automation will be welcomed by developers. It does come at a cost, however.

Tear down the cluster

A common use-case for Kubernetes services is running CI and other automation testing. We can now tear down the whole cluster in the UI or CLI:

$ doctl kubernetes delete ae-openfaas --force

That’s it – we’ve now removed the cluster completely.

Wrapping up

As a maintainer, developer, architect and operator – it’s great to see strong Kubernetes offerings appearing. Your team or company may not be able to pick the best fit if you already take all your services from a single vendor, but I hope that the current level of choice and quality will drive down price and drive up usability wherever you call home in the cloud.

If you have the choice to run your workloads wherever you like, or are an aspiring developer then the Kubernetes service from DigitalOcean provides a strong option with additional value adds and some high scores against my rating system. I hope to see the score go up even more with some minor refinements around the CLI ready for the GA.

If taken in isolation, my overall rating for this new Kubernetes service from DigitalOcean would be 4/5, but when compared to the much more mature, feature-rich Kubernetes engines, we have to take this rating in context.

Source

Announcing Cloud Native Application Bundle (CNAB)

As more organizations pursue cloud-native applications and infrastructures for creating modern software environments, it has become clear that there is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications. Real-world applications can now span on-premises infrastructure and cloud-based services, requiring multiple tools like Terraform for the infrastructure, Helm charts and Docker Compose files for the applications, and CloudFormation or ARM templates for the cloud services. Each of these needs to be managed separately.

To address this problem, Microsoft, in collaboration with Docker, is announcing Cloud Native Application Bundle (CNAB) – an open source, cloud-agnostic specification for packaging and running distributed applications. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format. The CNAB specification lets you define resources that can be deployed to any combination of runtime environments and tooling including Docker Engine, Kubernetes, Helm, automation tools and cloud services.

Docker is the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

The draft specification is available at cnab.io and we’re actively looking for contributors to the spec itself and people interested in building tools around the specification. Docker will be contributing to the CNAB specification.

Source

Securing the Configuration of Kubernetes Cluster Components

In the previous article of this series Securing Kubernetes for Cloud Native Applications, we discussed what needs to be considered when securing the infrastructure on which a Kubernetes cluster is deployed. This time around, we’re turning our attention to the cluster itself.

Kubernetes Architecture

Kubernetes is a complex system, and the diagram above shows the many different constituent parts that make up a cluster. Each of these components needs to be carefully secured in order to maintain the overall integrity of the cluster.

We won’t be able to cover every aspect of cluster-level security in this article, but we’ll aim to address the more important topics. As we’ll see later, help is available from the wider community, in terms of best-practice security for Kubernetes clusters, and the tooling for measuring adherence to that best-practice.

Cluster Installers

We should start with a brief observation about the many different tools that can be used to install the cluster components.

Some of the default configuration parameters for the components of a Kubernetes cluster are sub-optimal from a security perspective, and need to be set correctly to ensure a secure cluster. Unless you opt for a managed Kubernetes cluster (such as that provided by Giant Swarm), where the entire cluster is managed on your behalf, this problem is exacerbated by the many different cluster installation tools available, each of which will apply a subtly different configuration. While most installers come with sane defaults, we should never consider that they have our backs covered when it comes to security, and we should make it our objective to ensure that whichever installer mechanism we elect to use, it’s configured to secure the cluster according to our requirements.

Let’s take a look at some of the important aspects of security for the control plane.

API Server

The API server is the hub of all communication within the cluster, and it’s on the API server where the majority of the cluster’s security configuration is applied. The API server is the only component of the cluster’s control plane that is able to interact directly with the cluster’s state store. Users operating the cluster, other control plane components, and sometimes cluster workloads, all interact with the cluster using the server’s HTTP-based REST API.

Because of its pivotal role in the control of the cluster, carefully managing access to the API server is crucial as far as security is concerned. If somebody or something gains unsolicited access to the API, it may be possible for them to acquire all kinds of sensitive information, as well as gain control of the cluster itself. For this reason, client access to the Kubernetes API should be encrypted, authenticated, and authorized.

Securing Communication with TLS

To prevent man-in-the-middle attacks, the communication between each and every client and the API server should be encrypted using TLS. To achieve this, the API server needs to be configured with a private key and X.509 certificate.

The X.509 certificate for the root certificate authority (CA) that issued the API server’s certificate must be available to any clients needing to authenticate to the API server during a TLS handshake, which leads us to the question of certificate authorities for the cluster in general. As we’ll see in a moment, there are numerous ways for clients to authenticate to the API server, and one of these is by way of X.509 certificates. If this method of client authentication is employed, which is probably true in the majority of cases (at least for cluster components), each cluster component should get its own certificate, and it makes a lot of sense to establish a cluster-wide PKI capability.

There are numerous ways that a PKI capability can be realised for a cluster, and no one way is better than another. It could be configured by hand, it may be configured courtesy of your chosen installer, or by some other means. In fact, the cluster can be configured to have its own in-built CA, that can issue certificates in response to certificate signing requests submitted via the API server. Here, at Giant Swarm, we use an operator called cert-operator, in conjunction with Hashicorp’s Vault.

Whilst we’re on the topic of secure communication with the API server, be sure to disable its insecure port (prior to Kubernetes 1.13), which serves the API over plain HTTP (--insecure-port=0)!
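As a sketch, the relevant kube-apiserver flags look something like the following; the certificate paths are illustrative and depend on where your installer places them:

# serve the API over TLS, verify client certificates against the cluster CA,
# and disable the insecure plain-HTTP port
kube-apiserver \
  --tls-cert-file=/etc/kubernetes/pki/apiserver.crt \
  --tls-private-key-file=/etc/kubernetes/pki/apiserver.key \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --insecure-port=0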

Authentication, Authorization, and Admission Control

Now let’s turn our attention to controlling which clients can perform which operations on which resources in the cluster. We won’t go into much detail here, as by and large this is a topic for the next article. What’s important is to make sure that the components of the control plane are configured to provide the underlying access controls.

Kubernetes API Authorization Flow

When an API request lands at the API server, it performs a series of checks to determine whether to serve the request or not, and if it does serve the request, whether to validate or mutate the resource object according to defined policy. The chain of execution is depicted in the diagram above.

Kubernetes supports many different authentication schemes, which are almost always implemented externally to the cluster, including X.509 certificates, basic auth, bearer tokens, OpenID Connect (OIDC) for authenticating with a trusted identity provider, and so on. The various schemes are enabled using relevant config options on the API server, so be sure to provide these for the authentication scheme(s) you plan to use. X.509 client certificate authentication requires the path to a file containing one or more certificates for CAs (--client-ca-file), for example. One important point to remember is that, by default, any API requests that are not authenticated by one of the authentication schemes are treated as anonymous requests. Whilst the access that anonymous requests gain can be limited by authorization, if they’re not required, they should be turned off altogether (--anonymous-auth=false).

Once a request is authenticated, the API server then considers the request against authorization policy. Again, the authorization modes are a configuration option (--authorization-mode), which should at the very least be altered from the default value of AlwaysAllow. The list of authorization modes ideally should include RBAC and Node, the former for enabling the RBAC API for fine-grained access control, and the latter to authorize kubelet API requests (see below).

Once an API request has been authenticated and authorized, the resource object can be subject to validation or mutation before it’s persisted to the cluster’s state database, using admission controllers. A minimum set of admission controllers is recommended for use, and shouldn’t be removed from the list unless there is very good reason to do so. Additional security-related admission controllers that are worthy of consideration are listed below, followed by a combined flag sketch:

  • DenyEscalatingExec – if it’s necessary to allow your pods to run with enhanced privileges (e.g. using the host’s IPC/PID namespaces), this admission controller will prevent users from executing commands in the pod’s privileged containers.
  • PodSecurityPolicy – provides the means for applying various security mechanisms for all created pods. We’ll discuss this further in the next article in this series, but for now it’s important to ensure this admission controller is enabled, otherwise our security policy cannot be applied.
  • NodeRestriction – an admission controller that governs the access a kubelet has to cluster resources, which is covered in more detail below.
  • ImagePolicyWebhook – allows the images defined for a pod’s containers to be checked for vulnerabilities by an external ‘image validator’, such as the Image Enforcer. Image Enforcer is based on the Open Policy Agent (OPA), and works in conjunction with the open source vulnerability scanner, Clair.
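Pulling the authentication, authorization, and admission options from this section together, a hardened API server invocation might include flags along these lines. This is a sketch only: ImagePolicyWebhook and PodSecurityPolicy both need further configuration (a webhook backend and at least one policy, respectively) before pods will be admitted, and the plugin list should be tailored to your cluster and Kubernetes version:

kube-apiserver \
  --anonymous-auth=false \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction,DenyEscalatingExec,PodSecurityPolicy,ImagePolicyWebhook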

Dynamic admission control, which is a relatively new feature in Kubernetes, aims to provide much greater flexibility over the static plugin admission control mechanism. It’s implemented with admission webhooks and controller-based initializers, and promises much for cluster security, just as soon as community solutions reach a level of sufficient maturity.

Kubelet

The kubelet is an agent that runs on each node in the cluster, and is responsible for all pod-related activities on the node it runs on, including starting, stopping, and restarting pod containers, and reporting on the health of pod containers, amongst other things. After the API server, the kubelet is the next most important cluster component to consider when it comes to security.

Accessing the Kubelet REST API

The kubelet serves a small REST API on ports 10250 and 10255. Port 10250 is a read/write port, whilst 10255 is a read-only port with a subset of the API endpoints.

Providing unfettered access to port 10250 is dangerous, as it’s possible to execute arbitrary commands inside a pod’s containers, as well as start arbitrary pods. Similarly, both ports provide read access to potentially sensitive information concerning pods and their containers, which might render workloads vulnerable to compromise.

To safeguard against potential compromise, the read-only port should be disabled, by setting the kubelet’s configuration, --read-only-port=0. Port 10250, however, needs to be available for metrics collecting and other important functions. Access to this port should be carefully controlled, so let’s discuss the key security configurations.

Client Authentication

Unless it’s specifically configured, the kubelet API is open to unauthenticated requests from clients. It’s important, therefore, to configure one of the available authentication methods: X.509 client certificates, or requests with Authorization headers containing bearer tokens.

In the case of X.509 client certificates, the contents of a CA bundle needs to be made available to the kubelet, so that it can authenticate the certificates presented by clients during a TLS handshake. This is provided as part of the kubelet configuration (--client-ca-file).

In an ideal world, the only client that needs access to a kubelet’s API is the Kubernetes API server. It needs to access the kubelet’s API endpoints for various functions, such as collecting logs and metrics, executing a command in a container (think kubectl exec), forwarding a port to a container, and so on. In order for it to be authenticated by the kubelet, the API server needs to be configured with client TLS credentials (--kubelet-client-certificate and --kubelet-client-key).

Anonymous Authentication

If you’ve taken the care to configure the API server’s access to the kubelet’s API, you might be forgiven for thinking ‘job done’. But this isn’t the case, as any requests hitting the kubelet’s API that don’t attempt to authenticate with the kubelet are deemed to be anonymous requests. By default, the kubelet passes anonymous requests on for authorization, rather than rejecting them as unauthenticated.

If it’s essential in your environment to allow for anonymous kubelet API requests, then there is the authorization gate, which gives some flexibility in determining what can and can’t get served by the API. It’s much safer, however, to disallow anonymous API requests altogether, by setting the kubelet’s --anonymous-auth configuration to false. With such a configuration, the API returns a 401 Unauthorized response to unauthorized clients.

Authorization

With authorizing requests to the kubelet API, once again it’s possible to fall foul of a default Kubernetes setting. Authorization to the kubelet API operates in one of two modes: AlwaysAllow (default) or Webhook. The AlwaysAllow mode does exactly what you’d expect – it will allow all requests that have passed through the authentication gate to succeed. This includes anonymous requests.

Instead of leaving this wide open, the best approach is to offload the authorization decision to the Kubernetes API server, using the kubelet’s --authorization-mode config option with the Webhook value. With this configuration, the kubelet calls the SubjectAccessReview API (which is part of the API server) to determine whether the subject is allowed to make the request, or not.
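Taken together, the kubelet hardening described above looks roughly like this in flag form; newer releases prefer setting these values in a KubeletConfiguration file, and the certificate paths are illustrative:

# on each node
kubelet \
  --read-only-port=0 \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authorization-mode=Webhook

# and on the API server, the client credentials it presents to kubelets
kube-apiserver \
  --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key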

Restricting the Power of the Kubelet

In older versions of Kubernetes (prior to 1.7), the kubelet had read-write access to all Node and Pod API objects, even if the Node and Pod objects were under the control of another kubelet running on a different node. They also had read access to all objects that were contained within pod specs: the Secret, ConfigMap, PersistentVolume and PersistentVolumeClaim objects. In other words, a kubelet had access to, and control of, numerous resources it had no responsibility for. This is very powerful, and in the event of a cluster node compromise, the damage could quickly escalate beyond the node in question.

Node Authorizer

For this reason, a Node Authorization mode was introduced specifically for the kubelet, with the goal of controlling its access to the Kubernetes API. The Node authorizer limits the kubelet to read operations on those objects that are relevant to the kubelet (e.g. pods, nodes, services), and applies further read-only limits to Secret, ConfigMap, PersistentVolume and PersistentVolumeClaim objects that are related specifically to the pods bound to the node on which the kubelet runs.

NodeRestriction Admission Controller

Limiting a kubelet to read-only access for those objects that are relevant to it is a big step in containing a compromised cluster or workload. The kubelet, however, needs write access to its Node and Pod objects as a means of its normal function. To allow for this, once a kubelet’s API request has passed through Node Authorization, it’s then subject to the NodeRestriction admission controller, which limits the Node and Pod objects the kubelet can modify – its own. For this to work, the kubelet user must be system:node:<nodeName>, which must belong in the system:nodes group. It’s the nodeName component of the kubelet user, of course, which the NodeRestriction admission controller uses to allow or disallow kubelet API requests that modify Node and Pod objects. It follows that each kubelet should have a unique X.509 certificate for authenticating to the API server, with the Common Name of the subject distinguished name reflecting the user, and the Organization reflecting the group.

Again, these important configurations don’t happen automagically, and the API server needs to be started with Node as one of the comma-delimited list of plugins for the --authorization-mode config option, whilst NodeRestriction needs to be in the list of admission controllers specified by the --enable-admission-plugins option.

Best Practice

It’s important to emphasize that we’ve only covered a subset of the security considerations for the cluster layer (albeit important ones), and if you’re thinking that this all sounds very daunting, then fear not, because help is at hand.

In the same way that benchmark security recommendations have been created for elements of the infrastructure layer, such as Docker, they have also been created for a Kubernetes cluster. The Center for Internet Security (CIS) have compiled a thorough set of configuration settings and filesystem checks for each component of the cluster, published as the CIS Kubernetes Benchmark.

You might also be interested to know that the Kubernetes community has produced an open source tool for auditing a Kubernetes cluster against the benchmark, the Kubernetes Bench for Security. It’s a Golang application, and supports a number of different Kubernetes versions (1.6 onwards), as well as different versions of the benchmark.

If you’re serious about properly securing your cluster, then using the benchmark as a measure of compliance is a must.

Summary

Evidently, taking precautionary steps to secure your cluster with appropriate configuration, is crucial to protecting the workloads that run in the cluster. Whilst the Kubernetes community has worked very hard to provide all of the necessary security controls to implement that security, for historical reasons some of the default configuration overlooks what’s considered best-practice. We ignore these shortcomings at our peril, and must take the responsibility for closing the gaps whenever we establish a cluster, or when we upgrade to newer versions that provide new functionality.

Some of what we’ve discussed here, paves the way for the next layer in the stack, where we make use of the security mechanisms we’ve configured, to define and apply security controls to protect the workloads that run on the cluster. The next article is called Applying Best Practice Security Controls to a Kubernetes Cluster.

Source

What’s New in Kubernetes 1.13

As the year comes to a close, Kubernetes contributors, our engineers included, have been hard at work to bring you the final release of 2018: Kubernetes 1.13. In recognition of the achievements the community has made this year, and the looming holiday season, we shift our focus towards presenting this work to the world at large. KubeCon Shanghai was merely weeks ago and KubeCon NA (Seattle) kicks off next week!

That said, the Kubernetes 1.13 release cycle has been significantly shorter. Given the condensed timeline to plan, document, and deliver enhancements to the Kubernetes ecosystem, efforts were dedicated to minimizing new functionality and instead optimizing existing APIs, graduating major features, improving documentation, and strengthening the test suites within core Kubernetes and the associated components. So yet again, the common theme is stability. Let’s dive into some of the highlights of the release!

Storage

One of the major highlights in this release is CSI (Container Storage Interface), which was first introduced as alpha in January. CSI support in Kubernetes is now Generally Available.

In its infancy, Kubernetes was primarily geared towards running stateless applications. Since then, we’ve seen constructs like PetSets evolve into StatefulSets, building more robust support for running stateful applications. In keeping with that evolution, the Storage Special Interest Group (SIG) has made consistent improvements to the way Kubernetes interfaces with storage subsystems. These developments strengthen the community’s ability to provide storage guarantees to applications running within Kubernetes, which is of paramount importance, especially for enterprise customers using technologies like Ceph and Gluster.
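With CSI, storage is consumed in the same way as with the in-tree volume plugins: the driver is simply referenced by name in a StorageClass. A minimal sketch, assuming a hypothetical driver named csi.example.com:

$ cat csi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com    # hypothetical CSI driver name
$ kubectl apply -f csi-storageclass.yaml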

Making Declarative Changes Safer

At the risk of providing a simplistic explanation, Kubernetes is a set of APIs that receive declarative information from operators / other systems, process and store that information in a key-value store (etcd), and then query and act on the stored information to achieve some desired state. Reconciliation loops spread across multiple controllers ensure that the desired state is always maintained. It is important that changes made to these systems are made in a safe way, as consequences can ripple out to multiple places in a Kubernetes environment.

To that end, we’d like to highlight two enhancements: APIServer DryRun and kubectl diff.

If flags like --dry-run or --apply=false in CLI tools sound familiar, then APIServer DryRun will too. APIServer DryRun is an enhancement which allows cluster operators to understand what would’ve happened with common operations (POST, PUT, PATCH, DELETE) on Kubernetes objects, without persisting the data of the proposed change. This brings an opportunity to better introspect on desired changes, without the burden of having to potentially roll back errors. DryRun has moved to beta in Kubernetes 1.13.

kubectl diff provides an experience much like the familiar diff utility. Prior to the introduction of this enhancement, operators would have to carefully compare objects by hand to work out what the result of a change would be. With kubectl diff moving to beta in Kubernetes 1.13, users can now compare a locally declared Kubernetes object against the state of the running in-cluster object, a previously applied object, or the result of merging two objects.
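
A minimal example, again assuming a local deployment.yaml that describes an object already running in the cluster:

# Print a unified diff between the live object and the local file;
# the command exits non-zero when the two differ.
kubectl diff -f deployment.yaml

# An external diff program can be substituted via an environment variable.
KUBECTL_EXTERNAL_DIFF="diff -u" kubectl diff -f deployment.yaml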

Plugin Systems

As the Kubernetes ecosystem expands, the community has embraced separating the core codebase into new projects, which improves developer velocity and helps minimize the size of the binaries that are delivered. A direct effect of this has been the requirement to extend the way the Kubernetes core discovers and gains visibility into external components. This can include a wide gamut of components, like CRI (Container Runtime Interface) implementations and GPU-enabled devices.

To make this happen, an enhancement called Kubelet Device Plugin Registration was introduced in 1.11 and graduates to GA in Kubernetes 1.13. Device plugin registration provides a common and consistent interface which plugins can register against in the kubelet.

Once new device plugins are integrated into the system, it becomes yet another vector that we want to gain visibility into. Third-party device monitoring is now in alpha for Kubernetes 1.13, and it seeks to solve that need. With this new enhancement, third-party device makers can route their custom information to the Kubernetes monitoring systems. This means GPU compute can now be monitored in a similar way as standard cluster resources like RAM and CPU are already monitored.

Collaboration is Key

The community has worked hard on this release, and it caps off a year that could best be summed up by a single word: Cooperation. More consistent open source tools and interfaces have emerged: CNI, CRI, CSI, kubeadm, and CoreDNS, to name a few.

Expect 2019 to see a continued push to enable the community through better interfaces, APIs and plugins.

To get started with the latest Kubernetes release you can find it on GitHub at https://github.com/kubernetes/kubernetes/releases.

Source

Kubernetes 1.13: Simplified Cluster Management with Kubeadm, Container Storage Interface (CSI), and CoreDNS as Default DNS are Now Generally Available

Author: The 1.13 Release Team

We’re pleased to announce the delivery of Kubernetes 1.13, our fourth and final release of 2018!

Kubernetes 1.13 has been one of the shortest releases to date at 10 weeks. This release continues to focus on stability and extensibility of Kubernetes with three major features graduating to general availability this cycle in the areas of Storage and Cluster Lifecycle. Notable features graduating in this release include: simplified cluster management with kubeadm, Container Storage Interface (CSI), and CoreDNS as the default DNS.

These stable graduations are an important milestone for users and operators in terms of setting support expectations. In addition, there’s a continual and steady stream of internal improvements and new alpha features that are made available to the community in this release. These features are discussed in the “additional notable features” section below.

Let’s dive into the key features of this release:

Simplified Kubernetes Cluster Management with kubeadm in GA

Most people who have gotten hands-on with Kubernetes have at some point been hands-on with kubeadm. It’s an essential tool for managing the cluster lifecycle, from creation to configuration to upgrade; and now kubeadm is officially GA. kubeadm handles the bootstrapping of production clusters on existing hardware and configures the core Kubernetes components in a best-practice manner, providing a secure yet easy joining flow for new nodes and supporting easy upgrades. What’s notable about this GA release are the now-graduated advanced features, specifically around pluggability and configurability. The scope of kubeadm is to be a toolbox for both admins and automated, higher-level systems, and this release is a significant step in that direction.
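
For anyone who hasn’t used it yet, the basic lifecycle looks roughly like the sketch below; the address, token, and hash are placeholders printed by kubeadm itself:

# Bootstrap a control-plane node (run as root on the first machine).
kubeadm init --pod-network-cidr=10.244.0.0/16

# Join a worker node using the values printed by `kubeadm init`.
kubeadm join 10.0.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Later, upgrade the control plane in place.
kubeadm upgrade plan
kubeadm upgrade apply v1.13.0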

Container Storage Interface (CSI) Goes GA

The Container Storage Interface (CSI) is now GA after being introduced as alpha in v1.9 and beta in v1.10. With CSI, the Kubernetes volume layer becomes truly extensible. This provides an opportunity for third party storage providers to write plugins that interoperate with Kubernetes without having to touch the core code. The specification itself has also reached a 1.0 status.

With CSI now stable, plugin authors are developing storage plugins out of core, at their own pace. You can find a list of sample and production drivers in the CSI Documentation.
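
As a sketch of what consuming a CSI plugin looks like, the StorageClass below references a hypothetical provisioner name (csi.example.com) and a hypothetical ssd parameter; substitute whatever your vendor’s driver documents:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-fast
provisioner: csi.example.com      # hypothetical CSI driver name
parameters:
  type: ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: csi-fast
  resources:
    requests:
      storage: 10Gi
EOF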

CoreDNS is Now the Default DNS Server for Kubernetes

In 1.11, we announced CoreDNS had reached General Availability for DNS-based service discovery. In 1.13, CoreDNS is now replacing kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides a backwards-compatible, but extensible, integration with Kubernetes. CoreDNS has fewer moving parts than the previous DNS server, since it’s a single executable and a single process, and it supports flexible use cases through custom DNS entries. It’s also written in Go, making it memory-safe.

CoreDNS is now the recommended DNS solution for Kubernetes 1.13+. The project has switched the common test infrastructure to use CoreDNS by default and we recommend users switching as well. KubeDNS will still be supported for at least one more release, but it’s time to start planning your migration. Many OSS installer tools have already made the switch, including Kubeadm in 1.11. If you use a hosted solution, please work with your vendor to understand how this will impact you.
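
On a kubeadm-provisioned 1.13 cluster you can confirm the switch and inspect the configuration like this (CoreDNS keeps the k8s-app=kube-dns label for compatibility with existing Services and monitoring):

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system get configmap coredns -o yaml   # the Corefile lives here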

Additional Notable Feature Updates

Support for 3rd party device monitoring plugins has been introduced as an alpha feature. This removes current device-specific knowledge from the kubelet to enable future use-cases requiring device-specific knowledge to be out-of-tree.

Kubelet Device Plugin Registration is graduating to stable. This creates a common Kubelet plugin discovery model that can be used by different types of node-level plugins, such as device plugins, CSI and CNI, to establish communication channels with Kubelet.

Topology Aware Volume Scheduling is now stable. This makes the scheduler aware of a Pod’s volume topology constraints, such as zone or node.
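
In practice this is driven by the StorageClass; a sketch with an example in-tree provisioner and example zone names follows:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/gce-pd          # example provisioner
volumeBindingMode: WaitForFirstConsumer    # delay binding until a Pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values: ["us-central1-a", "us-central1-b"]
EOF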

APIServer DryRun is graduating to beta. This lets clients submit requests that go through full validation and admission on the apiserver without being persisted, so the effect of a change can be previewed safely.

Kubectl Diff is graduating to beta. This allows users to run a kubectl command to view the difference between a locally declared object configuration and the current state of a live object.

Raw block device using persistent volume source is graduating to beta. This makes raw block devices (non-networked) available for consumption via a Persistent Volume Source.
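
A minimal claim for a raw device looks like the sketch below; the Pod that consumes it then references the claim under volumeDevices with a devicePath, instead of a volumeMounts entry:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-device
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block          # request the block device itself, not a filesystem
  resources:
    requests:
      storage: 50Gi
EOF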

Each Special Interest Group (SIG) within the community continues to deliver the most-requested enhancements, fixes, and functionality for their respective specialty areas. For a complete list of inclusions by SIG, please visit the release notes.

Availability

Kubernetes 1.13 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials. You can also easily install 1.13 using kubeadm.

Features Blog Series

If you’re interested in exploring these features more in depth, check back tomorrow for our 5 Days of Kubernetes series where we’ll highlight detailed walkthroughs of the following features:

  • Day 1 – Simplified Kubernetes Cluster Creation with Kubeadm
  • Day 2 – Out-of-tree CSI Volume Plugins
  • Day 3 – Switch default DNS plugin to CoreDNS
  • Day 4 – New CLI Tips and Tricks (Kubectl Diff and APIServer Dry run)
  • Day 5 – Raw Block Volume

Release team

This release is made possible through the effort of hundreds of individuals who contributed both technical and non-technical content. Special thanks to the release team led by Aishwarya Sundar, Software Engineer at Google. The 39 individuals on the release team coordinate many aspects of the release, from documentation to testing, validation, and feature completeness.

As the Kubernetes community has grown, our release process represents an amazing demonstration of collaboration in open source software development. Kubernetes continues to gain new users at a rapid clip. This growth creates a positive feedback cycle where more contributors commit code creating a more vibrant ecosystem. Kubernetes has over 25,000 individual contributors to date and an active community of more than 51,000 people.

Project Velocity

The CNCF has continued refining DevStats, an ambitious project to visualize the myriad contributions that go into the project. K8s DevStats illustrates the breakdown of contributions from major company contributors, as well as an impressive set of preconfigured reports on everything from individual contributors to pull request lifecycle times. On average over the past year, 347 different companies and over 2,372 individuals contribute to Kubernetes each month. Check out DevStats to learn more about the overall velocity of the Kubernetes project and community.

User Highlights

Established, global organizations are using Kubernetes in production at massive scale. Recently published user stories from the community include:

Is Kubernetes helping your team? Share your story with the community.

Ecosystem Updates

  • CNCF recently released the findings of their bi-annual CNCF survey in Mandarin, finding that cloud usage in Asia has grown 135% since March 2018.
  • CNCF expanded its certification offerings to include a Certified Kubernetes Application Developer exam. The CKAD exam certifies an individual’s ability to design, build, configure, and expose cloud native applications for Kubernetes. More information can be found here.
  • CNCF added a new partner category, Kubernetes Training Partners (KTP). KTPs are a tier of vetted training providers who have deep experience in cloud native technology training. View partners and learn more here.
  • CNCF also offers online training that teaches the skills needed to create and configure a real-world Kubernetes cluster.
  • Kubernetes documentation now features user journeys: specific pathways for learning based on who readers are and what readers want to do. Learning Kubernetes is easier than ever for beginners, and more experienced users can find task journeys specific to cluster admins and application developers.

KubeCon

The world’s largest Kubernetes gathering, KubeCon + CloudNativeCon is coming to Seattle from December 10-13, 2018 and Barcelona from May 20-23, 2019. This conference will feature technical sessions, case studies, developer deep dives, salons, and more. Registration will open up in early 2019.

Webinar

Join members of the Kubernetes 1.13 release team on January 10th at 9am PDT to learn about the major features in this release. Register here.

Get Involved

The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below.

Thank you for your continued feedback and support.

Source

NextCloudPi updated to NC14.0.4, brings HDD monitoring, OrangePi, VM and more – Own your bits

The latest release of NextCloudPi is out!

This release brings the latest major version of Nextcloud, as well as more platforms and tools for monitoring our hard drive health. As usual, this release includes many small fixes and improvements, notably a new, faster version of btrfs-sync.

We are still looking for people to help us support more boards. If you own a BananaPi, OrangePi, Pine64 or any other not-yet-supported board, talk to us. We only need some of your time to perform a quick test of the new images every few months.

We are also in need of translators, more automated testing, and some web devs to take on the web interface and improve the user experience.

NextCloudPi improves every day thanks to your feedback. Please report any problems here. Also, you can discuss anything in the forums, or hang out with us in Telegram.

Last but not least, please download through BitTorrent and seed it for a while to help keep hosting costs down.

Nextcloud 14.0.4

We have been upgrading to every minor release and now we release an image with version 14.0.4 so new users don’t need to upgrade from 14.0.1. This is basically a more polished Nextcloud version without any new features, as you can see in the changelog.

Remember that it is recommended to upgrade through nc-update-nextcloud instead of the native Nextcloud installer, and that you have the option to let NextCloudPi automatically upgrade by activating nc-autoupdate-nc.
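
If you prefer a terminal over the web panel, both options live in the NextCloudPi configuration tool; a sketch, run on the box itself (look for the nc-update-nextcloud and nc-autoupdate-nc entries):

sudo ncp-config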

Check and monitor your hard drive health

We already introduced SMART in a previous post, so it was a given that this would soon be included in NextCloudPi! We can check our drive’s health with nc-hdd-test.

We can choose between long and short tests as explained in the previous post.
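
For reference, the equivalent manual checks with the standard smartmontools package look roughly like this (assuming /dev/sda is the data drive; adjust the device name):

sudo smartctl -t short /dev/sda     # or: sudo smartctl -t long /dev/sda
sudo smartctl -H /dev/sda           # overall health verdict
sudo smartctl -l selftest /dev/sda  # results of past self-tests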

We can also monitor our drive’s health and get notified via email, so that we can hopefully take action before the drive fails.

We will also receive a Nextcloud notification.

OrangePi images

We are now including Orange Pi images for the Zero Plus 2 version. This board features Gigabit networking, eMMC storage, and 2K graphics output, which makes it a popular choice for a NAS + media center combo.

NextCloudPi VM

The VM provides a convenient way of installing NCP on a virtual machine, instead of the classic way of using the curl installer.

See details in this previous post.

Source

Continuous Delivery of Everything with Rancher, Drone, and Terraform

It’s 8:00 PM. I just deployed to production, but nothing’s working.
Oh, wait. The production Kinesis stream doesn’t exist, because the
CloudFormation template for production wasn’t updated.
Okay, fix that.
9:00 PM. Redeploy. Still broken. Oh, wait. The production config file
wasn’t updated to use the new database.
Okay, fix that. Finally, it
works, and it’s time to go home. Ever been there? How about the late
night when your provisioning scripts work for updating existing servers,
but not for creating a brand new environment? Or, a manual deployment
step missing from a task list? Or, a config file pointing to a resource
from another environment? Each of these problems stems from separating
the activity of provisioning infrastructure from that of deploying
software, whether by choice, or limitation of tools. The impact of
deploying should be to allow customers to benefit from added value or
validate a business hypothesis. In order to accomplish this,
infrastructure and software are both needed, and they normally change
together. Thus, a deployment can be defined as:

  • reconciling the infrastructure needed with the infrastructure that
    already exists; and
  • reconciling the software that we want to run with the software that
    is already running.

With Rancher, Terraform, and Drone, you can build a continuous delivery
pipeline that lets you deploy this way. Let’s look at a sample system:
This simple
architecture has a server running two microservices,
[happy-service]
and
[glad-service].
When a deployment is triggered, you want the ecosystem to match this
picture, regardless of what its current state is. Terraform is a tool
that allows you to predictably create and change infrastructure and
software. You describe individual resources, like servers and Rancher
stacks, and it will create a plan to make the world match the resources
you describe. Let’s create a Terraform configuration that creates a
Rancher environment for our production deployment:

provider "rancher" {
  # The API endpoint is assumed to come in as a variable (stand-in name).
  api_url = "${var.rancher_api_url}"
}

resource "rancher_environment" "production" {
  name          = "production"
  description   = "Production environment"
  orchestration = "cattle"
}

resource "rancher_registration_token" "production_token" {
  environment_id = "${rancher_environment.production.id}"
  name           = "production-token"
  description    = "Host registration token for Production environment"
}

Terraform has the ability to preview what it’ll do before applying
changes. Let’s run terraform plan.

+ rancher_environment.production
    description: "Production environment"

+ rancher_registration_token.production_token
    command: "<computed>"

The pluses and green text indicate that the resource needs to be
created. Terraform knows that these resources haven’t been created yet,
so it will try to create them. Running terraform apply creates the
environment in Rancher. You can log into Rancher to see it. Now let’s
add an AWS EC2 server to the environment:

# A lookup table of RancherOS AMIs by region
variable "rancheros_amis" {
  default = {
    "ap-south-1"     = "ami-3576085a"
    "eu-west-2"      = "ami-4806102c"
    "eu-west-1"      = "ami-64b2a802"
    "ap-northeast-2" = "ami-9d03dcf3"
    "ap-northeast-1" = "ami-8bb1a7ec"
    "sa-east-1"      = "ami-ae1b71c2"
    "ca-central-1"   = "ami-4fa7182b"
    "ap-southeast-1" = "ami-4f921c2c"
    "ap-southeast-2" = "ami-d64c5fb5"
    "eu-central-1"   = "ami-8c52f4e3"
    "us-east-1"      = "ami-067c4a10"
    "us-east-2"      = "ami-b74b6ad2"
    "us-west-1"      = "ami-04351964"
    "us-west-2"      = "ami-bed0c7c7"
  }
  type = "map"
}

# This creates a cloud-init script that registers the server
# as a Rancher agent when it starts up
resource "template_file" "user_data" {
  template = <<EOF
#cloud-config
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      # Wait for the Docker daemon before registering the agent
      # (the loop bound is a stand-in for a value lost from the original listing).
      for i in $(seq 1 60)
      do
        docker info && break
        sleep 1
      done
      sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.1 $${registration_url}
EOF

  vars {
    registration_url = "${rancher_registration_token.production_token.registration_url}"
  }
}

# AWS EC2 launch configuration for a production Rancher agent.
# The var.* names below are stand-ins for values lost from the original
# listing (region, key pair, security group, subnet, availability zone).
resource "aws_launch_configuration" "launch_configuration" {
  provider      = "aws"
  name          = "rancher agent"
  image_id      = "${lookup(var.rancheros_amis, var.aws_region)}"
  instance_type = "t2.micro"
  key_name      = "${var.key_name}"
  user_data     = "${template_file.user_data.rendered}"

  security_groups             = ["${var.security_group_id}"]
  associate_public_ip_address = true
}

# Creates an autoscaling group of 1 server that will be a Rancher agent
resource "aws_autoscaling_group" "autoscaling" {
  availability_zones        = ["${var.availability_zone}"]
  name                      = "Production servers"
  max_size                  = "1"
  min_size                  = "1"
  health_check_grace_period = 3600
  health_check_type         = "ELB"
  desired_capacity          = "1"
  force_delete              = true
  launch_configuration      = "${aws_launch_configuration.launch_configuration.name}"
  vpc_zone_identifier       = ["${var.subnet_id}"]
}

We’ll put these in the same directory as environment.tf, and run
terraform plan again:

+ aws_autoscaling_group.autoscaling
    arn: ""

+ aws_launch_configuration.launch_configuration
    associate_public_ip_address: "true"

+ template_file.user_data

This time, you’ll see that the rancher_environment resource is missing
from the plan. That’s because it has already been created, and Terraform
knows it doesn’t have to create it again. Run terraform apply, and after
a few minutes, you should see a server show up in Rancher. Finally, we
want to deploy the happy-service and glad-service onto this server:

resource "rancher_stack" "happy" {
  name            = "happy"
  description     = "A service that's always happy"
  start_on_create = true
  environment_id  = "${rancher_environment.production.id}"

  docker_compose = <<EOF
version: '2'
services:
  happy:
    image: peloton/happy-service
    stdin_open: true
    tty: true
    ports:
      - 8000:80/tcp
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
      started: $STARTED
EOF

  rancher_compose = <<EOF
version: '2'
services:
  happy:
    start_on_create: true
EOF

  finish_upgrade = true
  environment {
    # Stand-in: any value that changes between runs forces an upgrade of
    # the stack; the original expression was lost from the listing.
    STARTED = "${timestamp()}"
  }
}

resource "rancher_stack" "glad" {
  name            = "glad"
  description     = "A service that's always glad"
  start_on_create = true
  environment_id  = "${rancher_environment.production.id}"

  docker_compose = <<EOF
version: '2'
services:
  glad:
    image: peloton/glad-service
    stdin_open: true
    tty: true
    ports:
      - 8001:80/tcp   # 8001 so it doesn't clash with happy-service on the same host
    labels:
      io.rancher.container.pull_image: always
      io.rancher.scheduler.global: 'true'
      started: $STARTED
EOF

  rancher_compose = <<EOF
version: '2'
services:
  glad:
    start_on_create: true
EOF

  finish_upgrade = true
  environment {
    # Stand-in: see the note on the happy stack above.
    STARTED = "${timestamp()}"
  }
}

This will create two new Rancher stacks; one for the happy service and
one for the glad service. Running terraform plan once more will show
the two Rancher stacks:

+ rancher_stack.glad
    description: "A service that's always glad"

+ rancher_stack.happy
    description: "A service that's always happy"

And running terraform apply will create them. Once this is done,
you’ll have your two microservices deployed onto a host automatically
on Rancher. You can hit your host on port 8000 or on port 8001 to see
the response from the services:
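
For example (substituting the public address of the EC2 host that registered with Rancher):

curl http://<host-public-ip>:8000   # happy-service
curl http://<host-public-ip>:8001   # glad-service
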
We’ve created each
piece of the infrastructure along the way in a piecemeal fashion. But
Terraform can easily do everything from scratch, too. Try issuing a
terraform destroy, followed by terraform apply, and the entire
system will be recreated. This is what makes deploying with Terraform
and Rancher so powerful – Terraform will reconcile the desired
infrastructure with the existing infrastructure, whether those resources
exist, don’t exist, or require modification. Using Terraform and
Rancher, you can now create the infrastructure and the software that
runs on the infrastructure together. They can be changed and versioned
together, too. In future blog entries, we’ll look at how to automate
this process on git push with Drone. The code for the Terraform
configuration is hosted on
[github].
The
[happy-service]
and
[glad-service]
are simple nginx docker containers. Bryce Covert is an engineer at
pelotech. By day, he helps teams accelerate
engineering by teaching them functional programming, stateless
microservices, and immutable infrastructure. By night, he hacks away,
creating point and click adventure games. You can find pelotech on
Twitter at @pelotechnology.

Source