Principles of Container-based Application Design

Today, almost any application can run in a container. But building a cloud-native application, one whose operation and management can be automated by a cloud-native platform such as Kubernetes, requires extra work.
Cloud-native applications must anticipate failure and keep running reliably even when the underlying infrastructure fails.
To make this possible, a cloud-native platform like Kubernetes imposes a set of contracts and constraints on the applications it runs.
These contracts ensure that applications conform to certain constraints, which in turn allows the platform to automate application management.

The seven principles described here cover two types of concerns: build time and runtime.

Build time
1) Single concern: each container addresses a single concern and does it well.
2) Self-containment: a container relies only on the Linux kernel; any additional libraries it needs are added at build time.
3) Image immutability: containerized applications are meant to be immutable; once built, an image is not expected to change between environments.

Runtime
4) High observability: every container must implement the APIs necessary for the platform to observe and manage the application in the best way possible.
5) Lifecycle conformance: a container must be able to receive lifecycle events from the platform and react to them.
6) Process disposability: containerized applications must be as ephemeral as possible, so that they can be replaced by another container instance at any time.
7) Runtime confinement: every container must declare its resource requirements and keep its resource usage within those limits.
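As a rough illustration of how principles 4 and 7 surface in practice, here is a minimal Kubernetes pod manifest (the name, image, and health endpoint are hypothetical) that exposes health probes for observability and declares resource requests and limits:

apiVersion: v1
kind: Pod
metadata:
  name: principled-app            # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:1.0        # hypothetical image
    readinessProbe:               # principle 4: let the platform check readiness
      httpGet:
        path: /healthz
        port: 8080
    livenessProbe:                # principle 4: let the platform restart unhealthy instances
      httpGet:
        path: /healthz
        port: 8080
    resources:                    # principle 7: declare and bound resource usage
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi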

Source

Managing Kubernetes Workloads With Rancher 2.0


Rancher 2.0 was built with many things in mind. You can provision and manage Kubernetes clusters, deploy user services onto them and easily control access with authentication and RBAC. One of the coolest things about Rancher 2.0 is its intuitive UI, which we’ve designed to try and demystify Kubernetes, and accelerate adoption for anyone new to it. In this tutorial I’ll walk you through that new user interface, and explain how you can use it to deploy a simple NGINX service.

Designing Your Workload

There are several things that you might need to figure out before deploying the workload for your app:

  • Is it a stateless or stateful app?
  • How many instances of your app need to be running?
  • What are the placement rules — whether the app needs to run on specific hosts?
  • Is your app meant to be exposed as a service on a private network, so other applications can talk to it?
  • Is public access to the app needed?

There can be more questions to answer, but the above are the most basic ones and a good starting place. The Rancher UI will give you more details on what you can configure on your workload, so you can tune it up or update later.

Deploying your first workload with Rancher 2.0

Let's start with the fun part: deploying a very simple workload and exposing it to the outside world with Rancher. Assuming the Rancher installation is done (it takes just one click) and at least one Kubernetes cluster is provisioned (a little more challenging than one click, but also very fast), switch to Project View and hit "Deploy" on the Workloads page:

All the options are default except for the image and Port Mapping (we will get into more detail on this later). I want my service to publish on a random port on every host in my cluster, and when the port is hit, to have the traffic redirected to nginx's internal port 80. Once the workload is deployed, the public endpoint will be set on the object in the UI for easy access:

By clicking on the 31217 public endpoint link, you’d get redirected straight to your service:

As you can see, it takes just one step to deploy the workload and publish it to the outside, which is very similar to Rancher 1.6. If you are a Kubernetes user, you know it takes a couple of Kubernetes objects to back the above: a deployment and a service. The deployment takes care of starting the containerized application; it also monitors its health, restarts it if it crashes based on a restart policy, and so on. But in order to expose the application to the outside, Kubernetes needs a service object created explicitly. Rancher makes it simple for the end user by collecting the workload declaration in a user-friendly way and creating all the required Kubernetes constructs behind the scenes. More on those constructs in the next section.
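For reference, here is a rough sketch of the two Kubernetes objects that back a workload like the one above (the names and the nginx image are illustrative; the objects Rancher actually generates may differ in detail):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  type: NodePort          # publish on a random port (e.g. 31217) on every node
  selector:
    app: mynginx
  ports:
  - port: 80
    targetPort: 80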

More Workload Options

By default, the Rancher UI presents the user with the basic options for the workload deployment. You can choose to change them starting with the Workload type:

Based on the type picked, a corresponding Kubernetes resource is going to get created.

  • Scalable deployment of (n) pods — Kubernetes Deployment
  • Run one pod on each node — Kubernetes DaemonSet
  • Stateful set — Kubernetes StatefulSet
  • Run on a cron schedule — Kubernetes CronJob

Along with the type, options like image, environment variables, and labels can be set. That will all define the deployment spec of your application. Now, exposing the application to the outside can be done via the Port Mapping section:

With this port declaration, after the workload is deployed, it will be exposed via the same random port on every node in the cluster. Modify Source Port if you need a specific value instead of a random one. There are several options for “Publish on”:

Based on the value picked, Rancher will create a corresponding service object on the Kubernetes side:

  • Every node — Kubernetes NodePort Service
  • Internal cluster IP — Kubernetes ClusterIP service. Your workload will be accessible via a private network only in this case.
  • Load Balancer — Kubernetes Load Balancer service. This option should be picked only when your Kubernetes cluster is deployed in a public cloud such as AWS and has external load balancer support (like AWS ELB).
  • Nodes running a pod — no service gets created; HostPort option gets set in the Deployment spec

We highlight the implementation details here, but you don't really need to know them: the Rancher UI/API gives you all the information necessary to access your workload, including a clickable link to the workload endpoint.

Traffic Distribution between Workloads using Ingress

There is one more way to publish the workload — via Ingress. Not only does it publish applications on standard http ports 80/443, but it also provides L7 routing capabilities along with SSL termination. Functionality like this can be useful if you deploy a web application and would like your traffic routed to different endpoints based on the host/path routing rules:
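In raw Kubernetes terms, host- and path-based rules like these are expressed as an Ingress resource. A rough sketch with hypothetical hostnames and backend service names (on clusters of this era the API group is extensions/v1beta1):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-workload    # hypothetical workload/service name
          servicePort: 80
      - path: /
        backend:
          serviceName: web-workload    # hypothetical workload/service name
          servicePort: 80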

Unlike in Rancher 1.6, the Load Balancer is not tied to a specific LB provider like HAProxy. The implementation varies based on cluster type: for Google Container Engine clusters it is GLBC, for Amazon EKS it is AWS ELB/ALB, and for Digital Ocean/Amazon EC2 it is an nginx load balancer, which Rancher installs and manages. We are planning to introduce more load balancer providers in the future, on demand.

Enhanced Service Discovery

If you are building an application that consists of multiple workloads talking to each other, most likely DNS is used to resolve the service names. You can certainly connect to a container using its IP address, but containers can die and the IP address will change, so DNS is really the preferable way. Kubernetes Service Discovery comes as a built-in feature in all the clusters provisioned by Rancher. Every workload created from the Rancher UI can be resolved by its name within the same namespace. Although a Kubernetes service (of ClusterIP type) needs to be created explicitly in order to discover the workload, Rancher takes this burden from its users and creates the service automatically for every workload. In addition, Rancher enhances Service Discovery by letting users create:

  • An Alias of another DNS value
  • A Custom record pointing to one or more existing workloads
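Under the hood, these two record types map roughly onto standard Kubernetes Service primitives. A sketch with hypothetical names:

# Alias of another DNS value: roughly an ExternalName service
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
---
# Custom record pointing to existing workloads: roughly a ClusterIP service selecting their pods
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend-workload     # hypothetical workload label
  ports:
  - port: 80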

All the above is available under Workloads Service Discovery page in the UI:

As you can see, configuring workloads in Rancher 2.0 is just as easy as in 1.6. Even though the backend now implements everything through Kubernetes, the Rancher UI still simplifies workload creation just as before. Through the Rancher interface, you can expose your workload to the public, place it behind a load balancer and configure internal service discovery — all accomplished in an intuitive and easy way. This blog covered the basics of workload management. We are planning to write more on features like Volumes, Application Catalog, etc. In addition, our UI and backend are constantly evolving. There may be new cool features being exposed as you read this post—so stay tuned!

Alena Prokharchyk

Software Engineer

Source

Fixing the Subpath Volume Vulnerability in Kubernetes

On March 12, 2018, the Kubernetes Product Security team disclosed CVE-2017-1002101, which allowed containers using subpath volume mounts to access files outside of the volume. This means that a container could access any file available on the host, including volumes for other containers that it should not have access to.

The vulnerability has been fixed and released in the latest Kubernetes patch releases. We recommend that all users upgrade to get the fix. For more details on the impact and how to get the fix, please see the announcement. (Note, some functional regressions were found after the initial fix and are being tracked in issue #61563).

This post presents a technical deep dive on the vulnerability and the solution.

Kubernetes Background

To understand the vulnerability, one must first understand how volume and subpath mounting works in Kubernetes.

Before a container is started on a node, the kubelet volume manager locally mounts all the volumes specified in the PodSpec under a directory for that Pod on the host system. Once all the volumes are successfully mounted, it constructs the list of volume mounts to pass to the container runtime. Each volume mount contains information that the container runtime needs, the most relevant being:

  • Path of the volume in the container
  • Path of the volume on the host (/var/lib/kubelet/pods/<pod uid>/volumes/<volume type>/<volume name>)

When starting the container, the container runtime creates the path in the container root filesystem, if necessary, and then bind mounts it to the provided host path.

Subpath mounts are passed to the container runtime just like any other volume. The container runtime does not distinguish between a base volume and a subpath volume, and handles them the same way. Instead of passing the host path to the root of the volume, Kubernetes constructs the host path by appending the Pod-specified subpath (a relative path) to the base volume’s host path.

For example, here is a spec for a subpath volume mount:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    <snip>
    volumeMounts:
    - mountPath: /mnt/data
      name: my-volume
      subPath: dataset1
  volumes:
  - name: my-volume
    emptyDir: {}

In this example, when the Pod gets scheduled to a node, the system will:

  • Set up an EmptyDir volume at /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume
  • Construct the host path for the subpath mount: /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/ + dataset1
  • Pass the following mount information to the container runtime:
    • Container path: /mnt/data
    • Host path: /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/dataset1
  • The container runtime bind mounts /mnt/data in the container root filesystem to /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/dataset1 on the host.
  • The container runtime starts the container.

The Vulnerability

The vulnerability with subpath volumes was discovered by Maxim Ivanov, by making a few observations:

  • Subpath references files or directories that are controlled by the user, not the system.
  • Volumes can be shared by containers that are brought up at different times in the Pod lifecycle, including by different Pods.
  • Kubernetes passes host paths to the container runtime to bind mount into the container.

The basic example below demonstrates the vulnerability. It takes advantage of the observations outlined above by:

  • Using an init container to setup the volume with a symlink.
  • Using a regular container to mount that symlink as a subpath later.
  • Causing kubelet to evaluate the symlink on the host before passing it into the container runtime.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  initContainers:
  - name: prep-symlink
    image: "busybox"
    command: ["/bin/sh", "-ec", "ln -s / /mnt/data/symlink-door"]
    volumeMounts:
    - name: my-volume
      mountPath: /mnt/data
  containers:
  - name: my-container
    image: "busybox"
    command: ["/bin/sh", "-ec", "ls /mnt/data; sleep 999999"]
    volumeMounts:
    - mountPath: /mnt/data
      name: my-volume
      subPath: symlink-door
  volumes:
  - name: my-volume
    emptyDir: {}

For this example, the system will:

  • Setup an EmptyDir volume at /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume
  • Pass the following mount information for the init container to the container runtime:
    • Container path: /mnt/data
    • Host path: /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume
  • The container runtime bind mounts /mnt/data in the container root filesystem to /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume on the host.
  • The container runtime starts the init container.
  • The init container creates a symlink inside the container: /mnt/data/symlink-door -> /, and then exits.
  • Kubelet starts to prepare the volume mounts for the normal containers.
  • It constructs the host path for the subpath volume mount: /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/ + symlink-door.
  • And passes the following mount information to the container runtime:
    • Container path: /mnt/data
    • Host path: /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/symlink-door
  • The container runtime bind mounts /mnt/data in the container root filesystem to /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/symlink-door
  • However, the bind mount resolves symlinks, which in this case, resolves to / on the host! Now the container can see all of the host’s filesystem through its mount point /mnt/data.

This is a manifestation of a symlink race, where a malicious user program can gain access to sensitive data by causing a privileged program (in this case, kubelet) to follow a user-created symlink.
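The key behavior here is that a bind mount resolves symlinks in its source path. A minimal, standalone demonstration (run as root on a Linux host; the paths are illustrative and unrelated to kubelet):

# What the init container effectively does: plant a symlink inside the shared volume
mkdir -p /tmp/volume /tmp/container-view
ln -s / /tmp/volume/symlink-door
# What the runtime effectively does: bind mount the "subpath", which follows the symlink
mount --bind /tmp/volume/symlink-door /tmp/container-view
ls /tmp/container-view    # shows the host's / rather than the volume contents
umount /tmp/container-view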

It should be noted that init containers are not always required for this exploit; it depends on the volume type. An init container is used in the EmptyDir example because EmptyDir volumes cannot be shared with other Pods: they are created when a Pod is created and destroyed when the Pod is destroyed. For persistent volume types, the exploit can also be carried out across two different Pods sharing the same volume.

The Fix

The underlying issue is that the host path for a subpath is untrusted and can point anywhere in the system. The fix needs to ensure that this host path is both:

  • Resolved and validated to point inside the base volume.
  • Not changeable by the user in between the time of validation and when the container runtime bind mounts it.

The Kubernetes product security team went through many iterations of possible solutions before finally agreeing on a design.

Idea 1

Our first design was relatively simple. For each subpath mount in each container:

  • Resolve all the symlinks for the subpath.
  • Validate that the resolved path is within the volume.
  • Pass the resolved path to the container runtime.

However, this design is prone to the classic time-of-check-to-time-of-use (TOCTTOU) problem. In between steps 2) and 3), the user could change the path back to a symlink. The proper solution needs some way to “lock” the path so that it cannot be changed in between validation and bind mounting by the container runtime. All the subsequent ideas use an intermediate bind mount by kubelet to achieve this “lock” step before handing it off to the container runtime. Once a bind mount is performed, the mount source is fixed and cannot be changed.

Idea 2

We went a bit wild with this idea:

  • Create a working directory under the kubelet’s pod directory. Let’s call it dir1.
  • Bind mount the base volume to under the working directory, dir1/volume.
  • Chroot to the working directory dir1.
  • Inside the chroot, bind mount volume/subpath to subpath. This ensures that any symlinks get resolved to inside the chroot environment.
  • Exit the chroot.
  • On the host again, pass the bind mounted dir1/subpath to the container runtime.

While this design does ensure that the symlinks cannot point outside of the volume, it was ultimately rejected due to difficulties of implementing the chroot mechanism in 4) across all the various distros and environments that Kubernetes has to support, including containerized kubelets.

Idea 3

Coming back to earth a little bit, our next idea was to:

  • Bind mount the subpath to a working directory under the kubelet’s pod directory.
  • Get the source of the bind mount, and validate that it is within the base volume.
  • Pass the bind mount to the container runtime.

In theory, this sounded pretty simple, but in reality, 2) was quite difficult to implement correctly. Many scenarios had to be handled where volumes (like EmptyDir) could be on a shared filesystem, on a separate filesystem, on the root filesystem, or not on the root filesystem. NFS volumes ended up handling all bind mounts as a separate mount, instead of as a child to the base volume. There was additional uncertainty about how out-of-tree volume types (that we couldn’t test) would behave.

The Solution

Given the amount of scenarios and corner cases that had to be handled with the previous design, we really wanted to find a solution that was more generic across all volume types. The final design that we ultimately went with was to:

  • Resolve all the symlinks in the subpath.
  • Starting with the base volume, open each path segment one by one, using the openat() syscall, and disallow symlinks. With each path segment, validate that the current path is within the base volume.
  • Bind mount /proc/<kubelet pid>/fd/<final fd> to a working directory under the kubelet’s pod directory. The proc file is a link to the opened file. If that file gets replaced while kubelet still has it open, then the link will still point to the original file.
  • Close the fd and pass the bind mount to the container runtime.
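A rough shell analogy of the /proc trick in step 3, run as root (the pod UID and working directory are illustrative): once the path has been validated, kubelet bind mounts the already-open file descriptor rather than the path itself, so a later swap of the path for a symlink cannot change what gets mounted.

# Open the validated directory and hold on to the fd
exec 9< /var/lib/kubelet/pods/1234/volumes/kubernetes.io~empty-dir/my-volume/dataset1
# Bind mount via the proc link to the open fd; replacing the original path now has no effect
mkdir -p /var/lib/kubelet/pods/1234/subpath-workdir          # hypothetical working directory
mount --bind "/proc/$$/fd/9" /var/lib/kubelet/pods/1234/subpath-workdir
# Close the fd once the bind mount is in place
exec 9<&-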

Note that this solution is different for Windows hosts, where the mounting semantics are different than Linux. In Windows, the design is to:

  • Resolve all the symlinks in the subpath.
  • Starting with the base volume, open each path segment one by one with a file lock, and disallow symlinks. With each path segment, validate that the current path is within the base volume.
  • Pass the resolved subpath to the container runtime, and start the container.
  • After the container has started, unlock and close all the files.

Both solutions are able to address all the requirements of:

  • Resolving the subpath and validating that it points to a path inside the base volume.
  • Ensuring that the subpath host path cannot be changed in between the time of validation and when the container runtime bind mounts it.
  • Being generic enough to support all volume types.

Acknowledgements

Special thanks to many folks involved with handling this vulnerability:

  • Maxim Ivanov, who responsibly disclosed the vulnerability to the Kubernetes Product Security team.
  • Kubernetes storage and security engineers from Google, Microsoft, and RedHat, who developed, tested, and reviewed the fixes.
  • Kubernetes test-infra team, for setting up the private build infrastructure
  • Kubernetes patch release managers, for coordinating and handling all the releases.
  • All the production release teams that worked to deploy the fix quickly after release.

If you find a vulnerability in Kubernetes, please follow our responsible disclosure process and let us know; we want to do our best to make Kubernetes secure for all users.

– Michelle Au, Software Engineer, Google; and Jan Šafránek, Software Engineer, Red Hat

Source

Rancher Glossary: 1.6 to 2.0 Terms and Concepts


As we near the end of the development process for Rancher 2.0, we thought it might be useful to provide a glossary of terms that will help Rancher users understand the fundamental concepts in Kubernetes and Rancher.

In the move from Rancher 1.6 to Rancher 2.0, we have aligned more with the Kubernetes naming standard. This shift could be confusing for people who have only used Cattle environments under Rancher 1.6.

This article aims to help you understand the new concepts in Rancher 2.0. It can also act as an easy reference for terms and concepts between the container orchestrators Cattle and Kubernetes.

Rancher 1.6 Cattle compared with Rancher 2.0 Kubernetes

Rancher 1.6 offered Cattle as a container orchestrator, and many users chose to use it. In Cattle, you have an environment, which is both an administrative and a compute boundary, i.e., the lowest level at which you can assign permissions; importantly, all hosts in an environment were dedicated to that environment and that environment alone. Then, to organize your containers, you had a Stack, which was a logical grouping of a collection of services, with a service being a particular running image.

So, how does this structure look under 2.0?

If you are working in the container space, then it is unlikely that you haven’t heard some of the buzz words around Kubernetes, such as pods, namespaces and nodes. What this article aims to do is ease the transition from Cattle to Kubernetes by aligning the terms of both orchestrators. Along with some of the names changing, some of the capabilities have changed as well.

The following list defines some of the core Kubernetes concepts:

  • Cluster: a collection of machines that run containerized applications managed by Kubernetes.
  • Namespace: a virtual cluster; a single physical cluster can support multiple namespaces.
  • Node: one of the physical (or virtual) machines that make up a cluster.
  • Pod: the smallest and simplest Kubernetes object; a Pod represents a set of running containers on your cluster.
  • Deployment: an API object that manages a replicated application.
  • Workload: a unit of work running on the cluster; it can be a pod or a deployment.

More detailed information on Kubernetes concepts can be found at
https://kubernetes.io/docs/concepts/

ENVIRONMENTS

The environment in Rancher 1.6 represented 2 things:

  • The Compute boundary
  • The administrative boundary

In 2.0, the environment concept doesn't exist; instead, it is replaced by:

  • Cluster – The compute boundary
  • Project – An administrative boundary

A Project is an administrative layer introduced by Rancher to ease the burden of administration in Kubernetes.

HOST

In Cattle, a host could only belong to one environment. Things are similar in Rancher 2.0: nodes (the new name for hosts!) can only belong to one cluster. What used to be an environment with hosts is now a cluster with nodes.

STACK

A stack in Rancher 1.6 is a way to group a number of services. In Rancher 2.0 this is done via namespaces.

SERVICE

In Rancher 1.6, a service was defined as one or more running instances of the same container. In Rancher 2.0, one or more running instances of the same container are defined as a workload, where a workload can be made up of one or more pods running with a controller.

CONTAINER

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings, etc. Within Rancher 1.6, a container was the minimal definition required to run an application. Under Kubernetes, a pod is the minimal definition. A pod can be a single image, or it can be a number of images that all share the same storage/network and description of how they interact. Pod contents are always co-located and co-scheduled, and run in a shared context.

LOAD BALANCER

In Rancher 1.6, a Load Balancer was used to expose your applications from within the Rancher environment for access externally. In Rancher 2.0, the concept is the same. There is a Load Balancer option to expose your services. In the language of Kubernetes, this function is more often referred to as an Ingress. In short, Load Balancer and Ingress play the same role.

Conclusion

In terms of concepts, Cattle was the closest orchestrator to Kubernetes out of all of the orchestrators. Hopefully this article will act as an easy reference for people moving from Rancher 1.6 to 2.0. Plus, the similarity between the two orchestrators should allow for an easier transition.

The following is a quick reference for the old versus new terms:

  • Container → Pod
  • Services → Workload
  • Load Balancer → Ingress
  • Stack → Namespace
  • Environment → Project (administration) / Cluster (compute)
  • Host → Node
  • Catalog → Helm

For further reading and training, check out our free online training series: Introduction to Kubernetes and Rancher.

Chris Urwin
UK Technical Lead

Source

Container Storage Interface (CSI) for Kubernetes Goes Beta

 


The Kubernetes implementation of the Container Storage Interface (CSI) is now beta in Kubernetes v1.10. CSI was introduced as alpha in Kubernetes v1.9.

Kubernetes features are generally introduced as alpha and moved to beta (and eventually to stable/GA) over subsequent Kubernetes releases. This process allows Kubernetes developers to get feedback, discover and fix issues, iterate on the designs, and deliver high quality, production grade features.

Why introduce Container Storage Interface in Kubernetes?

Although Kubernetes already provides a powerful volume plugin system that makes it easy to consume different types of block and file storage, adding support for new volume plugins has been challenging. Because volume plugins are currently “in-tree”—volume plugins are part of the core Kubernetes code and shipped with the core Kubernetes binaries—vendors wanting to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) must align themselves with the Kubernetes release process.

With the adoption of the Container Storage Interface, the Kubernetes volume layer becomes truly extensible. Third party storage developers can now write and deploy volume plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This will result in even more options for the storage that backs Kubernetes users’ stateful containerized workloads.

What’s new in Beta?

With the promotion to beta, CSI is now enabled by default on standard Kubernetes deployments instead of being opt-in.

The move of the Kubernetes implementation of CSI to beta also means:

* Kubernetes is compatible with v0.2 of the CSI spec (instead of v0.1). There were breaking changes between CSI spec v0.1 and v0.2, so existing CSI drivers must be updated to be v0.2 compatible before use with Kubernetes 1.10.0+.
* Mount propagation, a feature that allows bidirectional mounts between containers and host (a requirement for containerized CSI drivers), has also moved to beta.
* The Kubernetes VolumeAttachment object, introduced in v1.9 in the storage v1alpha1 group, has been added to the storage v1beta1 group.
* The Kubernetes CSIPersistentVolumeSource object has been promoted to beta, and a VolumeAttributes field has been added to it (in alpha this information was passed around via annotations).
* The node authorizer has been updated to limit access to VolumeAttachment objects from kubelet.
* The Kubernetes CSIPersistentVolumeSource object and the CSI external-provisioner have been modified to allow passing of secrets to the CSI volume plugin.
* The Kubernetes CSIPersistentVolumeSource has been modified to allow passing in a filesystem type (previously always assumed to be ext4).
* A new optional call, NodeStageVolume, has been added to the CSI spec, and the Kubernetes CSI volume plugin has been modified to call NodeStageVolume during MountDevice (in alpha this step was a no-op).

How do I deploy a CSI driver on a Kubernetes Cluster?

CSI plugin authors must provide their own instructions for deploying their plugin on Kubernetes.

The Kubernetes-CSI implementation team created a sample hostpath CSI driver. The sample provides a rough idea of what the deployment process for a CSI driver looks like. Production drivers, however, would deploy node components via a DaemonSet and controller components via a StatefulSet rather than a single pod (for example, see the deployment files for the GCE PD driver).

How do I use a CSI Volume in my Kubernetes pod?

Assuming a CSI storage plugin is already deployed on your cluster, you can use it through the familiar Kubernetes storage primitives: PersistentVolumeClaims, PersistentVolumes, and StorageClasses.

CSI is a beta feature in Kubernetes v1.10. Although it is enabled by default, it may require the following flag on both the API server and kubelet binaries:

* --allow-privileged=true

Most CSI plugins will require bidirectional mount propagation, which can only be enabled for privileged pods. Privileged pods are only permitted on clusters where this flag has been set to true (this is the default in some environments like GCE, GKE, and kubeadm).

Dynamic Provisioning

You can enable automatic creation/deletion of volumes for CSI Storage plugins that support dynamic provisioning by creating a StorageClass pointing to the CSI plugin.

The following StorageClass, for example, enables dynamic creation of “fast-storage” volumes by a CSI volume plugin called “com.example.csi-driver”.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-storage
provisioner: com.example.csi-driver
parameters:
  type: pd-ssd
  csiProvisionerSecretName: mysecret
  csiProvisionerSecretNamespace: mynamespace

New for beta, the default CSI external-provisioner reserves the parameter keys csiProvisionerSecretName and csiProvisionerSecretNamespace. If specified, it fetches the secret and passes it to the CSI driver during provisioning.

Dynamic provisioning is triggered by the creation of a PersistentVolumeClaim object. The following PersistentVolumeClaim, for example, triggers dynamic provisioning using the StorageClass above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-request-for-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-storage

When volume provisioning is invoked, the parameter type: pd-ssd and any referenced secrets are passed to the CSI plugin com.example.csi-driver via a CreateVolume call. In response, the external volume plugin provisions a new volume and then automatically creates a PersistentVolume object to represent it. Kubernetes then binds the new PersistentVolume object to the PersistentVolumeClaim, making it ready to use.

If the fast-storage StorageClass is marked as "default", there is no need to include the storageClassName in the PersistentVolumeClaim; it will be used by default.
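If you want to try this, one way to mark the class as default is the standard default-class annotation; a sketch using the example's class name:

kubectl patch storageclass fast-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'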

Pre-Provisioned Volumes

You can always expose a pre-existing volume in Kubernetes by manually creating a PersistentVolume object to represent the existing volume. The following PersistentVolume, for example, exposes a volume with the name “existingVolumeName” belonging to a CSI storage plugin called “com.example.csi-driver”.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-manually-created-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: com.example.csi-driver
    volumeHandle: existingVolumeName
    readOnly: false
    fsType: ext4
    volumeAttributes:
      foo: bar
    controllerPublishSecretRef:
      name: mysecret1
      namespace: mynamespace
    nodeStageSecretRef:
      name: mysecret2
      namespace: mynamespace
    nodePublishSecretRef:
      name: mysecret3
      namespace: mynamespace

Attaching and Mounting

You can reference a PersistentVolumeClaim that is bound to a CSI volume in any pod or pod template.

kind: Pod
apiVersion: v1
metadata:
  name: my-pod
spec:
  containers:
  - name: my-frontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: my-csi-volume
  volumes:
  - name: my-csi-volume
    persistentVolumeClaim:
      claimName: my-request-for-storage

When the pod referencing a CSI volume is scheduled, Kubernetes will trigger the appropriate operations against the external CSI plugin (ControllerPublishVolume, NodeStageVolume, NodePublishVolume, etc.) to ensure the specified volume is attached, mounted, and ready to use by the containers in the pod.

For more details please see the CSI implementation design doc and documentation.

How do I write a CSI driver?

CSI Volume Driver deployments on Kubernetes must meet some minimum requirements.

The minimum requirements document also outlines the suggested mechanism for deploying an arbitrary containerized CSI driver on Kubernetes. This mechanism can be used by a Storage Provider to simplify deployment of containerized CSI compatible volume drivers on Kubernetes.

As part of the suggested deployment process, the Kubernetes team provides the following sidecar (helper) containers:

* external-attacher: watches Kubernetes VolumeAttachment objects and triggers ControllerPublish and ControllerUnpublish operations against a CSI endpoint.
* external-provisioner: watches Kubernetes PersistentVolumeClaim objects and triggers CreateVolume and DeleteVolume operations against a CSI endpoint.
* driver-registrar: registers the CSI driver with kubelet (in the future) and adds the driver's custom NodeId (retrieved via a GetNodeID call against the CSI endpoint) to an annotation on the Kubernetes Node API object.
* livenessprobe: can be included in a CSI plugin pod to enable the Kubernetes liveness probe mechanism.

Storage vendors can build Kubernetes deployments for their plugins using these components, while leaving their CSI driver completely unaware of Kubernetes.
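To make the pattern concrete, here is a rough sketch of a controller-side StatefulSet in which two of these sidecars sit next to a vendor's driver container and talk to it over a shared unix socket. The image names, versions, flags, and service account are illustrative, not a prescribed deployment:

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-example-controller
spec:
  serviceName: csi-example-controller
  replicas: 1
  selector:
    matchLabels:
      app: csi-example-controller
  template:
    metadata:
      labels:
        app: csi-example-controller
    spec:
      serviceAccountName: csi-controller-sa        # hypothetical SA with RBAC for PVs, PVCs, VolumeAttachments
      containers:
      - name: external-provisioner
        image: quay.io/k8scsi/csi-provisioner:v0.2.1
        args: ["--provisioner=com.example.csi-driver", "--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: external-attacher
        image: quay.io/k8scsi/csi-attacher:v0.2.1
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: example-csi-driver
        image: example/csi-driver:latest            # the vendor's CSI plugin, unaware of Kubernetes
        args: ["--endpoint=unix:///csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      volumes:
      - name: socket-dir
        emptyDir: {}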

Where can I find CSI drivers?

CSI drivers are developed and maintained by third parties. You can find a non-definitive list of some sample and production CSI drivers.

What about FlexVolumes?

As mentioned in the alpha release blog post, the FlexVolume plugin was an earlier attempt to make the Kubernetes volume plugin system extensible. Although it enables third party storage vendors to write drivers "out-of-tree", because it is an exec-based API, FlexVolume requires the third party driver files (binaries or scripts) to be copied to a special plugin directory on the root filesystem of every node (and, in some cases, master) machine. This requires a cluster admin to have write access to the host filesystem for each node, plus some external mechanism to ensure that the driver file is recreated if deleted, just to deploy a volume plugin.

In addition to being difficult to deploy, Flex did not address the pain of plugin dependencies: Volume plugins tend to have many external requirements (on mount and filesystem tools, for example). These dependencies are assumed to be available on the underlying host OS, which is often not the case.

CSI addresses these issues by not only enabling storage plugins to be developed out-of-tree, but also containerized and deployed via standard Kubernetes primitives.

If you still have questions about in-tree volumes vs CSI vs Flex, please see the Volume Plugin FAQ.

What will happen to the in-tree volume plugins?

Once CSI reaches stability, we plan to migrate most of the in-tree volume plugins to CSI. Stay tuned for more details as the Kubernetes CSI implementation approaches stable.

What are the limitations of beta?

The beta implementation of CSI has the following limitations:
* Block volumes are not supported; only file.
* CSI drivers must be deployed with the provided external-attacher sidecar plugin, even if they don’t implement ControllerPublishVolume.
* Topology awareness is not supported for CSI volumes, including the ability to share information about where a volume is provisioned (zone, regions, etc.) with the Kubernetes scheduler to allow it to make smarter scheduling decisions, and the ability for the Kubernetes scheduler or a cluster administrator or an application developer to specify where a volume should be provisioned.
* driver-registrar requires permissions to modify all Kubernetes node API objects which could result in a compromised node gaining the ability to do the same.

What’s next?

Depending on feedback and adoption, the Kubernetes team plans to push the CSI implementation to GA in 1.12.

The team would like to encourage storage vendors to start developing CSI drivers, deploying them on Kubernetes, and sharing feedback with the team via the Kubernetes Slack channel wg-csi, the Google group kubernetes-sig-storage-wg-csi, or any of the standard SIG storage communication channels.

How do I get involved?

This project, like all of Kubernetes, is the result of hard work by many contributors from diverse backgrounds working together.

In addition to the contributors who have been working on the Kubernetes implementation of CSI since alpha:
* Bradley Childs (childsb)
* Chakravarthy Nelluri (chakri-nelluri)
* Jan Šafránek (jsafrane)
* Luis Pabón (lpabon)
* Saad Ali (saad-ali)
* Vladimir Vivien (vladimirvivien)

We offer a huge thank you to the new contributors who stepped up this quarter to help the project reach beta:
* David Zhu (davidz627)
* Edison Xiang (edisonxiang)
* Felipe Musse (musse)
* Lin Ml (mlmhl)
* Lin Youchong (linyouchong)
* Pietro Menna (pietromenna)
* Serguei Bezverkhi (sbezverk)
* Xing Yang (xing-yang)
* Yuquan Ren (NickrenREN)

If you’re interested in getting involved with the design and development of CSI or any part of the Kubernetes Storage system, join the Kubernetes Storage Special Interest Group (SIG). We’re rapidly growing and always welcome new contributors.

Source

Continuous Delivery Pipeline with Webhooks


I had the pleasure of attending KubeCon 2017 last year, and it was an amazing experience. There were many informative talks on various Kubernetes features; I got a chance to attend some of them and definitely learned a lot. I also had the great opportunity of speaking in the AppOps/AppDev track, and turned my lecture into this guide.

The aim of this article is to show how to develop a continuous delivery pipeline to a Kubernetes cluster using Webhooks.

The diagram above is an example of how the continuous delivery pipeline will look. In this article we’ll go over each of the different components involved in the pipeline and how they fit together.

Why should I care about continuous delivery at all?

If you have your app running in a Kubernetes cluster, the process of making code changes to your app and getting the new version running will look something like this:

  1. Code review and merge
  2. Building new image
  3. Pushing the image to an image registry
  4. Updating your deployment in cluster to use the new image

Implementing all these steps every time you need to update your application is necessary – but time consuming. Obviously, automating some of these steps would help.

Let’s categorize the steps we can automate into 2 parts.

Part 1: Building of images when code changes and pushing this image to a registry.

Part 2: Updating the k8s app through the automated build.

We’ll go over Part 1 first, which is automated builds.

Part 1: Automated Builds

Automated builds can be achieved using an image registry that:

  1. Provides automated builds, and
  2. Provides a way of notifying us when a build is finished.

These notifications are usually managed via webhooks, so we need to select an image registry that supports webhooks. But just getting notified via webhooks isn’t sufficient. Instead, we need to write some code that can receive and process this webhook, and, in response, update our app within the cluster.

We’ll also go over how to develop this code to process webhooks, and for the remainder of the article we’ll refer to this code as the webhook-receiver.

Part 2: Choose Your Application Update Strategy

Different strategies can be used to update an application depending on the kind of application. Let’s look at some of the existing update strategies and see which ones we can use in a k8s cluster.

A. Blue Green Strategy

The Blue Green Strategy requires an identical new environment for updates. First, we bring up the new environment to run the new version of our app. If that works as expected, we can take down the older version. The benefit of the Blue Green Strategy, therefore, is that it guarantees zero downtime and provides a way to roll back.

B. Recreate Strategy

In this strategy, we need to delete the older instances of our application and then bring up new ones. This strategy is a good choice for applications where the old and new versions of the application can’t be run together, for example, if some data transformations are needed. However, downtime is incurred with this strategy since the old instances have to be stopped first.

C. Rolling Update Strategy

The third and last strategy we'll look at is Rolling Update. This strategy, like Blue Green, guarantees zero downtime. It does so by updating only a certain percentage of instances at a time, so at any time there are always some instances with the old code up and running. In addition, the Rolling Update strategy, unlike the Blue Green strategy, does not require an identical production environment.

From among these, Kubernetes currently supports Rolling Update and Recreate Strategy. Rolling Update is the strategy selected by default, but we can explicitly specify which one to use through the spec of our deployment. Once we specify which strategy to use, Kubernetes handles all the orchestration logic for the update. I want to use Rolling Update for our continuous delivery pipeline.

Selecting and Describing the Strategy Type

This is a sample manifest for a deployment. In the spec we have a field called strategy.

Here we can set the strategy type to RollingUpdate (the default). We can describe how the update should proceed using two more fields: maxUnavailable, the number of pods allowed to be unavailable during the update, and maxSurge, the number of pods allowed to be scheduled above the desired count during the update.
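The original manifest was shown as a screenshot; a minimal sketch of what such a strategy stanza looks like (the app name and image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                   # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate          # the default; Recreate is the other supported type
    rollingUpdate:
      maxUnavailable: 1          # at most one pod may be down during the update
      maxSurge: 1                # at most one extra pod may be created above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: myrepo/myimage:v1    # hypothetical image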

How do you trigger the rolling update?

Here’s an example of some commands we can run to trigger an update.
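The commands in the original post were shown as a screenshot; a typical equivalent, assuming a deployment named my-app with a container named my-container (both hypothetical):

# Point the deployment at the newly built image and watch the rollout
kubectl set image deployment/my-app my-container=myrepo/myimage:v2
kubectl rollout status deployment/my-app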

But manually running a command every time a new version is built would stand in the way of automating our continuous delivery pipeline. That's why we're going to use the webhook-receiver to automate this process.

Now that we have covered both Part 1 and Part 2 of automation in depth, let's go back to the pipeline diagram and get the different components in place.

The image registry I’m using is Docker Hub, which has an automated builds feature. Plus, it comes with a webhooks feature, which we need to send the build completion notification to our webhook-receiver. So how does this webhook-receiver work, and where will it run?

How the Webhook-Receiver Works

The webhook-receiver will consume the information sent by Docker Hub webhooks, take the pushed image tag, and make a Kubernetes API call to update the deployment resource.
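In Kubernetes API terms, that update can be a simple PATCH of the deployment's container image. A rough sketch using curl (the token, API server address, deployment name, and container name are hypothetical):

curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","image":"myrepo/myimage:v2"}]}}}}' \
  "https://$APISERVER/apis/apps/v1/namespaces/default/deployments/my-app"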

The webhook-receiver can be placed anywhere, for example on AWS Lambda, or it can run within the cluster. But Rancher 1.6 already has a webhook-receiver framework in place for such use cases (https://github.com/rancher/webhook-service). It is a microservice written in Golang that provides webhook callback URLs. The URLs are based on Rancher 1.6 API endpoints that accept POST requests; when triggered, they perform pre-defined actions within Rancher.

The webhook-service framework

In the webhook-service framework, the receivers are referred to as drivers. When we add a new one, the driver must implement the WebhookDriver interface for it to be functional.

I added a custom driver for Kubernetes clusters to implement our webhook. Here's the link to the custom driver: https://github.com/mrajashree/webhook-service/tree/k8s

With that custom driver in place, we can go back to the pipeline diagram again to get the whole picture.

  1. In this first step, the user creates a webhook and registers it with Docker Hub. This needs to be done only once for the deployment.
  2. This second step shows the operation of the continuous delivery pipeline. Once the webhook is registered in the first step, users can keep working on the app and newer versions will continue to build because of the Docker Hub automated-builds feature. And the latest version will deploy on the cluster because of our webhook driver. Now the pipeline is in place and will update our app continuously.

In Conclusion

Let’s review what we did to create our continuous delivery pipeline. First, we chose our application update strategy. Because Kubernetes supports the Rolling Updates Strategy, which also offers zero downtime and doesn’t require identical production environments, we chose this option. Next, we entered the strategy type and described the strategy in the manifest. Finally, we automated the build command using Docker Hub and our webhook-receiver. With these components in place, the pipeline will continue to update our application without any further instruction from the user. Now users can get back to what they like to do: coding applications.

Rajashree Mandaogane

Software Engineer

Source

Migrating the Kubernetes Blog

We recently migrated the Kubernetes Blog from the Blogger platform to GitHub. With the change in platform comes a change in URL: formerly at http://blog.kubernetes.io, the blog now resides at https://kubernetes.io/blog.

All existing posts redirect from their former URLs with <rel=canonical> tags, preserving SEO values.

Why and how we migrated the blog

Our primary reasons for migrating were to streamline blog submissions and reviews, and to make the overall blog process faster and more transparent. Blogger’s web interface made it difficult to provide drafts to multiple reviewers without also granting unnecessary access permissions and compromising security. GitHub’s review process offered clear improvements.

We learned from Jim Brikman’s experience during his own site migration away from Blogger.

Our migration was broken into several pull requests, but you can see the work that went into the primary migration PR.

We hope that making blog submissions more accessible will encourage greater community involvement in creating and reviewing blog content.

How to Submit a Blog Post

You can submit a blog post for consideration one of two ways:

If you have a post that you want to remain confidential until your publish date, please submit your post via the Google form. Otherwise, you can choose your submission process based on your comfort level and preferred workflow.

Note: Our workflow hasn’t changed for confidential advance drafts. Additionally, we’ll coordinate publishing for time sensitive posts to ensure that information isn’t released prematurely through an open pull request.

Call for reviewers

The Kubernetes Blog needs more reviewers! If you’re interested in contributing to the Kubernetes project and can participate on a regular, weekly basis, send an introductory email to k8sblog@linuxfoundation.org.

Source

Evaluation of Serverless Frameworks for Kubernetes (K8s)


In the early days of Pokemon Go, we were all amazed at how Niantic managed to scale its user base at planetary scale, seamlessly adding nodes to its container cluster to accommodate additional players and environments, all made possible by using Kubernetes as a container orchestrator. Kubernetes abstracts away much of the low-level work of scaling and managing a container infrastructure, which makes it a very efficient platform for developing and maintaining application services that span multiple containers. This whitepaper will explore how we can take the very useful design parameters and service orchestration features of K8s and marry them with serverless frameworks and Functions as a Service (FaaS). In particular, we will home in on the features and functionality, operational performance, and efficiency of three serverless frameworks that have been architected on a K8s structure: (i) Fission; (ii) OpenFaaS; and (iii) Kubeless.

A. Why is Kubernetes an excellent orchestration system for serverless?

Serverless architectures refer to the application architecture that abstracts away server management tasks from the developer and enhances development speed and efficiency by dynamically allocating and managing compute resources. Function as a Service (FaaS) is a runtime on top of which a serverless architecture can be built. FaaS frameworks operate as ephemeral containers that have common language runtimes already installed on them and which allows code to be executed within those runtimes.

A FaaS framework should be able to run on a variety of infrastructures to be truly useful, including public cloud, hybrid cloud, and on-premise environments. Serverless frameworks built on top of FaaS runtimes in a real production environment should be able to rely on proven and tested orchestration and management capabilities to deploy containers at scale and for distributed workloads.

For orchestration and management, serverless FaaS frameworks are able to rely on Kubernetes due to its ability to:

  • Orchestrate containers across clusters of hosts.
  • Maximize hardware resources needed for enterprise applications.
  • Manage and automate application deployments and provide declarative updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications on the fly and provide resources to support them.
  • Declaratively manage services.
  • Provide a barometer to check the health of apps and self-heal apps with auto-placement, auto-restart, auto-replication, and autoscaling.

A serverless system can consist of a function triggered via a client request or functions being executed as part of business services. Both these processes can be orchestrated using a Container Cluster Manager such as Kubernetes. Source: dzone.com

The three serverless frameworks that we will walk through in this article have their individual strengths and weaknesses. The common thread between these FaaS frameworks is that they are able to (1) turn functions into services; and (2) manage the lifecycle of these services by leveraging the Kubernetes platform. The engineering behind these frameworks differs in the exact modalities that they employ to accomplish these common goals, which we will explore in the next section. In particular, these are some of the differences between these frameworks that we will highlight in the following sections:

  1. Does the framework operate at the source-level or at the level of Docker images or in-between, e.g. buildpacks?
  2. What is the latency in cold-start performance or the time lag during the execution of the function due to the container being initiated with the common language runtime?
  3. How do they allocate memory or resources to services?
  4. How do they access and deploy the orchestration and container management functionalities of Kubernetes?

B. OpenFaaS and Deploying a Spring Boot Template

OpenFaaS is a serverless platform that allows functions to be managed with a Docker or Kubernetes runtime, since its basic primitive is a container in OCI format. OpenFaaS can be extended to leverage enterprise functionality such as Docker Universal Control Plane (the enterprise-grade cluster management solution in Docker Enterprise) or Tectonic for Kubernetes. OpenFaaS inherits existing container security features such as read-only filesystems, privilege dropping, and content trust. It can manage functions with a Docker or Kubernetes scheduler/orchestrator and has the associated rich ecosystem of commercial and community vendors at its disposal. As well, any executable can be packaged into a function in OpenFaaS due to its polyglot nature.

SpringBoot and Vertx are very popular frameworks for developing microservices, and their ease of use has been extended to OpenFaaS via OpenFaaS templates. These templates provide a seamless way to develop and deploy serverless functions on the OpenFaaS platform. The templates are available in the GitHub repository here. Let's walk through how to deploy a SpringBoot template on the OpenFaaS platform.

Installing OpenFaaS locally

Downloading and installing templates on local machine

We will need to have FaaS CLI installed and configured to work with our local or remote K8s or Docker. In this exercise, we will use a local Docker client and will extend it to a cloud-based GKE cluster in the follow-up.

For the latest version of the CLI type in:

$ curl -sL https://cli.openfaas.com | sudo sh

[or via brew install faas-cli on MacOS.]

Before we can create a serverless function, we have to install these templates on our local machine.

TL;DR:

faas-cli template pull https://github.com/tmobile/faas-java-templates.git

You can then verify that the templates are installed locally.

The --help flag can be invoked for all the commands.

Manage your OpenFaaS functions from the command line

Usage:
  faas-cli [flags]
  faas-cli [command]

Available Commands:
  build      Builds OpenFaaS function containers
  deploy     Deploy OpenFaaS functions
  help       Help about any command
  push       Push OpenFaaS functions to remote registry (Docker Hub)
  remove     Remove deployed OpenFaaS functions
  version    Display the clients version information

Flags:
  -h, --help          help for faas-cli
  -f, --yaml string   Path to YAML file describing function(s)

Use "faas-cli [command] --help" for more information about a command.

Creating functions with installed templates

Using our function of interest from the GitHub repository of Vertx/SpringBoot templates, we can create a function (replace the text within curly brackets with our function name; we used springboot, but you can replace it with vertx for a Vertx template):

faas-cli new {function-name} --lang springboot

Using mvnw, the command is

faas-cli new mvnw --lang vertx|springboot
Folder: mvnw created.
Function created in folder: mvnw
Stack file written: mvnw.yml

The contents of mvnw.yml can now be used with the CLI.

Note: If your cluster is remote or not running on port 8080, then edit this in the YAML file before continuing. A handler.java file was generated for our function. You can edit the pom.xml file, and any dependencies will be installed during the "build" step.

Build function

Now that we've created the function logic, we can build the function using the faas-cli build command. We will build the function into a Docker image using the local Docker client.

$ faas-cli build -f mvnw.yml
Building: mvnw.
Clearing temporary build folder: ./build/mvnw/
Preparing ./mvnw/ ./build/mvnw/function
Building: mvnw with node template. Please wait..
docker build -t mvnw .
Sending build context to Docker daemon 8.704kB
Step 1/19 : FROM node:6.11.2-alpine
 ---> 16566b7ed19e

Step 19/19 : CMD fwatchdog
 ---> Running in 53d04c1631aa
 ---> f5e1266b0d32
Removing intermediate container 53d04c1631aa
Successfully built f5e1266b0d32
Successfully tagged mvnw:latest
Image: mvnw built.

Push your Function (optional as we are working on a local install)

In order to deploy our function, we will edit the mvnw.yml file and set the "image" line to include the applicable Docker Hub username, such as hishamhasan/mvnw. We will then build the function again before pushing.
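The relevant part of mvnw.yml would then look something like this (hishamhasan is only the example Docker Hub username; substitute your own):

```
functions:
  mvnw:
    lang: springboot
    handler: ./mvnw
    image: hishamhasan/mvnw   # <docker-hub-username>/<image-name>
```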

$ faas-cli push -f mvnw.yml
Pushing: mvnw to remote repository.
The push refers to a repository [docker.io/hishamhasan/mvnw]

Once this is done, the image will be pushed up to the Docker Hub or a remote Docker registry and we can deploy and run the function.

Deploy Function

$ faas-cli deploy -f mvnw.yml
Deploying: mvnw.
No existing service to remove
Deployed.
200 OK
URL: [http://localhost:8080/function/mvnw](http://localhost:8080/function/mvnw)

Invoke Function

```
$ faas-cli invoke -f mvnw.yml mvnw
Reading from STDIN - hit (Control + D) to stop.
This is my message

{"status":"done"}
```

We can also pipe a command into the function such as:

```
$ date | faas-cli invoke -f mvnw.yml mvnw
{"status":"done"}
```

Installing OpenFaaS on the Google Cloud Platform

OpenFaaS does not restrict us to any particular on-prem or cloud infrastructure. Now that we have deployed our template on a local Docker cluster, we can leverage the versatility of OpenFaaS by setting it up on GKE in GCP.

  1. Create a GCP project called openfaas (or any name you prefer).
  2. Download and install the Google Cloud SDK here. After installing the SDK, run gcloud init and set the default project to openfaas.
  3. Install kubectl using gcloud: gcloud components install kubectl
  4. Navigate to API Manager > Credentials > Create Credentials > Service account key.
  5. Select JSON as the key type. Rename the downloaded key file and place it in the project directory.
  6. Add your SSH key under Compute Engine > Metadata > SSH Keys, creating a metadata entry named sshKeys with your public SSH key as the value.
  7. Create a three-node Kubernetes cluster with each node in a different zone. Read up here on cluster federation to learn how to choose the number of clusters and the number of nodes in each cluster, which may change over time with load or growth.

k8s_version=$(gcloud container get-server-config --format=json | jq -r '.validNodeVersions[0]')

gcloud container clusters create demo \
--cluster-version=${k8s_version} \
--zone=us-west1-a \
--additional-zones=us-west1-b,us-west1-c \
--num-nodes=1 \
--machine-type=n1-standard-2 \
--scopes=default,storage-rw

Increase the size of the default node pool to the desired number of nodes per zone (in this example we scale up by a factor of three, to nine nodes in total across the three zones):

gcloud container clusters resize demo --size=3

You can carry out a host of cluster management functions by invoking the applicable SDK commands described on this page; for example, to delete the cluster:

gcloud container clusters delete demo -z=us-west1-a

Complete the administrative setup for kubectl.

Set up credentials for kubectl:

gcloud container clusters get-credentials demo -z=us-west1-a

Create a cluster admin user:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Grant admin privileges to the kubernetes-dashboard service account (ensure this is only done in a non-production environment):

```
kubectl create clusterrolebinding "kubernetes-dashboard-admin" \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:kubernetes-dashboard
```

You can access the kubernetes-dashboard by running kubectl proxy --port=8080 and navigating to http://localhost:8080/ui in your browser (or on another port, for example http://localhost:9099/ui, by running the reverse proxy as below):

```
kubectl proxy --port=9099 &
```

A Kubernetes cluster consists of master and node resources: master resources coordinate the cluster, nodes run the application, and the two communicate via the Kubernetes API. We built our containerized application with the OpenFaaS CLI and wrote the .yml file to build and deploy the function. By deploying the function across nodes in the Kubernetes cluster, we let GKE distribute and schedule our node resources. Our nodes have been provisioned with the tooling to handle container operations, which can be driven via the kubectl CLI.

Source: dzone.com

Deploy OpenFaaS with basic authentication.

Clone the openfaas-gke repository:

git clone [https://github.com/stefanprodan/openfaas-gke](https://github.com/stefanprodan/openfaas-gke)

cd openfaas-gke

Create the openfaas and openfaas-fn namespaces to deploy OpenFaaS services in a multi-tenant setup:

kubectl apply -f ./namespaces.yaml

To deploy OpenFaaS services in the openfaas namespace:

kubectl apply -f ./openfaas

This will create the K8s pods, deployments, and services for the OpenFaaS gateway, faas-netesd (the Kubernetes controller), Prometheus, Alertmanager, NATS, and the queue worker.

We need to secure our gateway with authentication before exposing OpenFaaS on the Internet. We can create a generic basic-auth secret with a set of credentials:

kubectl -n openfaas create secret generic basic-auth \
--from-literal=user=admin \
--from-literal=password=admin

We can then deploy Caddy in front of the OpenFaaS gateway; it functions as both a reverse proxy and a robust load balancer, and supports WebSocket connections:
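The exact command is not shown here; assuming the cloned openfaas-gke repository keeps the Caddy manifests in a ./caddy directory, the deployment would be a plain kubectl apply:

```
# Assumes the Caddy deployment and service manifests live under ./caddy in the repo
kubectl apply -f ./caddy
```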

We will then use the external IP exposed by the K8s Service object to access the OpenFaaS gateway UI with our credentials at http://<EXTERNAL-IP>. We can get the external IP by running kubectl get svc.

get_gateway_ip() {
  kubectl -n openfaas describe service caddy-lb | grep Ingress | awk '{ print $NF }'
}

until [[ "$(get_gateway_ip)" ]]
do
  sleep 1
  echo -n "."
done
echo "."

gateway_ip=$(get_gateway_ip)

echo "OpenFaaS Gateway IP: ${gateway_ip}"

Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.

If you haven't carried out the previous exercise, install the OpenFaaS CLI by invoking:

curl -sL https://cli.openfaas.com | sh

Then log in with the CLI, using the credentials and the external IP exposed by the K8s service:

faas-cli login -u admin -p admin --gateway http://<EXTERNAL-IP>

Note: (a) You can expose the OpenFaaS gateway using a Google Cloud L7 HTTPS load balancer by creating an Ingress resource. Detailed guidelines for creating your load balancer can be found here. (b) You can create a text file with your password and use the file along with the --password-stdin flag to avoid having your password in the bash history.

You can deploy your serverless function using the image published in the previous exercise.

$ faas-cli deploy -f mvnw.yml

The deploy command looks for a mvnw.yml file in the current directory and deploys all of the functions in the openfaas-fn namespace.

Note: You can set the minimum number of running pods with the com.openfaas.scale.min label and the maximum number of replicas for the autoscaler with com.openfaas.scale.max. By default, OpenFaaS runs one pod per function and scales up to 20 pods under load.
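As a sketch, these labels sit under the function entry in the stack file, roughly like this (the numbers are only examples):

```
functions:
  mvnw:
    lang: springboot
    handler: ./mvnw
    image: hishamhasan/mvnw
    labels:
      com.openfaas.scale.min: "2"    # keep at least two replicas running
      com.openfaas.scale.max: "10"   # the autoscaler will not exceed ten replicas
```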

Invoke your serverless function.

faas-cli invoke mvnw --gateway=http://<GATEWAY-IP>

You can log out at any time with:

faas-cli logout –gateway http://<EXTERNAL-IP>

C. Fission and Deploying a Simple HTTP request

Fission is a serverless framework that further abstracts away container images and allows HTTP services to be created on K8s from just functions. The container images in Fission contain the language runtime, a set of commonly used dependencies and a dynamic loader for functions. These images can be customized, for example to package binary dependencies. Fission reduces cold-start overhead by maintaining a running pool of containers: as new requests come in from client applications or business services, it copies the function into a container, loads it dynamically, and routes the request to that instance. It is therefore able to keep cold-start overhead on the order of 100 ms for NodeJS and Python functions.

By operating at the source level, Fission saves the user from having to deal with container image building, pushing images to registries, managing registry credentials, image versioning and other administrative tasks.

https://kubernetes.io/blog/2017/01/fission-serverless-functions-as-service-for-kubernetes

As depicted in the schematic above, Fission is architected as a set of microservices with the main components described below:

  1. A controller that keeps track of functions, HTTP routes, event triggers and environment images;
  2. A pool manager that manages the pool of idle environment containers, loads functions into these containers, and kills function instances periodically to manage container overhead;
  3. A router that receives HTTP requests and routes them to either fresh function instances from poolmgr or already-running instances.

We can use the K8s cluster we created on GCP in the previous exercise to deploy an HTTP service on Fission. Let's walk through the process.

  1. Install the Helm CLI
    Helm is a Kubernetes package manager. Let's initialize Helm:

    $ helm init

  2. Install Fission in the fission namespace on GKE

    $ helm install --namespace fission [https://github.com/fission/fission/releases/download/0.7.0/fission-all-0.7.0.tgz](https://github.com/fission/fission/releases/download/0.7.0/fission-all-0.7.0.tgz)

  3. Install Fission CLI

    OSX

    $ curl -Lo fission https://github.com/fission/fission/releases/download/0.7.0/fission-cli-osx && chmod +x fission && sudo mv fission /usr/local/bin/

    Windows
    Download the Windows executable here.

  4. Create your HTTP service
    We will create a simple HTTP service to print Hello World.

    $ cat > hello.py

    def main(context):
        print "Hello, world!"

  5. Deploy HTTP service on Fission

    $ fission function create --name hello --env python --code hello.py --route /hello

    $ curl http://<fission router>/hello

    Hello, world!

D. Kubeless and Deploying a Spring Boot Template

Kubeless is a Kubernetes-native serverless framework that enables functions to be deployed on a K8s cluster while allowing users to leverage Kubernetes resources for auto-scaling, API routing, monitoring and troubleshooting. Kubeless uses Kubernetes Custom Resource Definitions to create functions as custom Kubernetes resources. A custom resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind (for example, K8s Pod objects) and represents a customization of a particular K8s installation. Custom resources are quite useful because they can be provisioned and deleted in a running cluster through dynamic registration, and cluster admins can update them independently of the cluster itself. Kubeless leverages these capabilities by running an in-cluster controller that keeps track of these custom resources and launches runtimes on demand.
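As a rough illustration only (the exact API group/version and field names vary across Kubeless releases, and older releases used a ThirdPartyResource rather than a CRD), a deployed function is represented by a custom resource shaped roughly like this:

```
apiVersion: kubeless.io/v1beta1     # approximate; depends on the Kubeless version
kind: Function
metadata:
  name: serverequest
  namespace: default
spec:
  runtime: nodejs6                  # language runtime the controller launches on demand
  handler: serverequest.createServer
  function: |
    // the function source is embedded here (or referenced from a file)
```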

We can use the K8s cluster we created on GCP in the previous exercise to deploy a function on Kubeless. Let's walk through the process.

  1. Access Kubernetes Dashboard

With the K8s cluster running, we can make the dashboard available on port 8080 with kubectl:

kubectl proxy --port=8080

The dashboard can be accessed by navigating to [http://localhost:8080/ui](http://localhost:8080/ui) in your browser.

  2. Install Kubeless CLI

OSX

$ curl -L https://github.com/kubeless/kubeless/releases/download/0.0.20/kubeless_darwin-amd64.zip > kubeless.zip
$ unzip kubeless.zip
$ sudo cp bundles/kubeless_darwin-amd64/kubeless /usr/local/bin/

Windows

Download the Windows executable here.

  3. Deploy Kubeless in K8s cluster

We will deploy Kubeless in our K8s cluster using the manifest found at this link. The manifest creates a kubeless namespace, a function ThirdPartyResource, the kubeless controller, and Kafka and Zookeeper StatefulSets. One of the main advantages of Kubeless is its Kubernetes-native nature: it can be set up in both non-RBAC and RBAC-specific environments. The screenshot below shows how to deploy Kubeless in a non-RBAC environment using kubectl commands.
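After applying the manifest, a quick sanity check (assuming it creates the kubeless namespace as described above) is to confirm the controller and the Kafka/Zookeeper pods are running:

```
kubectl get pods -n kubeless
kubectl get statefulset -n kubeless
```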

  4. Create function

    We can create a function that creates an HTTP server and pulls the method, URL, headers and body out of each request.

    const http = require('http');

    http.createServer((request, response) => {
      const { headers, method, url } = request;
      let body = [];
      request.on('error', (err) => {
        console.error(err);
      }).on('data', (chunk) => {
        body.push(chunk);
      }).on('end', () => {
        body = Buffer.concat(body).toString();
        // At this point, we have the headers, method, url and body, and can now
        // do whatever we need to in order to respond to this request.
      });
    }).listen(8080); // Activates this server, listening on port 8080.

  5. Run functions in Kubeless environment

We can register the function with Kubeless by providing the following information:

  1. The name to be used to access the function over the Web
  2. The protocol to be used to access the function
  3. The language runtime to be executed to run the code
  4. The name of the file containing the function code
  5. The name of the function inside the file

Providing the five pieces of information above, we invoke the following command to register and deploy the function in Kubeless:

kubeless function deploy serverequest --trigger-http --runtime nodejs6 --handler serverequest.createServer --from-file /tmp/serverequest.js
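Once registered, the function can be inspected and invoked straight from the Kubeless CLI, for example:

```
# List deployed functions and their status
kubeless function ls

# Invoke the function with a test payload
kubeless function call serverequest --data 'Hello from Kubeless'
```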

E. Evaluation of Serverless Platforms

Each of the serverless platforms we evaluated has its own unique value proposition. With OpenFaaS, any process or container can be packaged as a serverless function for either Linux or Windows. For enterprises, the architecture used by OpenFaaS provides a seamless way to plug into an existing cluster scheduler and CI/CD workflow for their microservices, as OpenFaaS is built around Docker and all functions are packaged into Docker images. OpenFaaS also gives enterprises a seamless way to administer and execute functions via the Gateway's external API, and to manage the lifecycle of a function, including deployments, scaling and secrets management, via a Provider.

Fission has an event-driven architecture which makes it ideal for short-lived, stateless applications, including REST API or webhook implementations and DevOps automation. A good use case for using Fission might be a backend for chatbot development as Fission achieves good cold-start performance and delivers fast response times when needed by keeping a running pool of containers with their runtimes.

Finally, the Kubeless architecture leverages native Kubernetes concepts to deploy and manage functions, such as Custom Resource Definitions to define a function and a custom controller to manage it, deploy it as a Kubernetes Deployment and expose it via a Kubernetes Service. This close alignment with native Kubernetes functionality will appeal to existing Kubernetes users, lowering the learning curve and plugging seamlessly into an existing Kubernetes architecture.

Hisham Hasan

Hisham is a consulting Enterprise Solutions Architect with experience in leveraging container technologies to solve infrastructure problems and deploy applications faster and with higher levels of security, performance and reliability. Recently, Hisham has been leveraging containers and cloud-native architecture for a variety of middleware applications to deploy complex and mission-critical services across the enterprise. Prior to entering the consulting world, Hisham worked at Aon Hewitt, Lexmark and ADP in software implementation and technical support.

Source

Zero-downtime Deployment in Kubernetes with Jenkins

Ever since we added the Kubernetes Continuous Deploy and Azure Container Service plugins to the Jenkins update center, “How do I create zero-downtime deployments” is one of our most frequently-asked questions. We created a quickstart template on Azure to demonstrate what zero-downtime deployments can look like. Although our example uses Azure, the concept easily applies to all Kubernetes installations.

Rolling Update

Kubernetes supports the RollingUpdate strategy to replace old pods with new ones gradually, while continuing to serve clients without incurring downtime. To perform a RollingUpdate deployment:

  • Set .spec.strategy.type to RollingUpdate (the default value).
  • Set .spec.strategy.rollingUpdate.maxUnavailable and .spec.strategy.rollingUpdate.maxSurge to some reasonable value.
    • maxUnavailable: the maximum number of pods that can be unavailable during the update process. This can be an absolute number or percentage of the replicas count; the default is 25%.
    • maxSurge: the maximum number of pods that can be created over the desired number of pods. Again this can be an absolute number or a percentage of the replicas count; the default is 25%.
  • Configure the readinessProbe for your service container to help Kubernetes determine the state of the pods. Kubernetes will only route client traffic to pods with a healthy readiness probe.

We'll use a deployment of the official Tomcat image to demonstrate this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deployment-rolling-update
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat
        role: rolling-update
    spec:
      containers:
      - name: tomcat-container
        image: tomcat:${TOMCAT_VERSION}
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 50%

If the Tomcat running in the current deployment is version 7, we can replace ${TOMCAT_VERSION} with 8 and apply this to the Kubernetes cluster. With the Kubernetes Continuous Deploy or the Azure Container Service plugin, the value can be fetched from an environment variable, which eases the deployment process.
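Assuming the manifest above is saved as tomcat-deployment.yml (a file name chosen here just for illustration), triggering and watching the rolling update comes down to:

```
kubectl apply -f tomcat-deployment.yml
kubectl rollout status deployment/tomcat-deployment-rolling-update
```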

Behind the scenes, Kubernetes manages the update like so:

Deployment Process

  • Initially, all pods are running Tomcat 7 and the frontend Service routes the traffic to these pods.
  • During the rolling update, Kubernetes takes down some Tomcat 7 pods and creates corresponding new Tomcat 8 pods. It ensures:
    • at most maxUnavailable of the desired pods can be unavailable; that is, at least (replicas - maxUnavailable) pods should be serving client traffic, which is 2-1=1 in our case.
    • at most maxSurge more pods can be created during the update process, that is 2*50%=1 in our case.
  • One Tomcat 7 pod is taken down, and one Tomcat 8 pod is created. Kubernetes will not route the traffic to any of them because their readiness probe is not yet successful.
  • When the new Tomcat 8 pod is ready, as determined by the readiness probe, Kubernetes starts routing traffic to it. This means that during the update process users may be served by both the old and the new version of the application.
  • The rolling update continues by taking down Tomcat 7 pods and creating Tomcat 8 pods, and then routing the traffic to the ready pods.
  • Finally, all pods are on Tomcat 8.

The Rolling Update strategy ensures we always have some Ready backend pods serving client requests, so there is no service downtime. However, some extra care is required:
- During the update, both old and new pods may serve requests. Without well-defined session affinity in the Service layer, a user may be routed to a new pod for one request and back to an old pod for the next (a minimal sessionAffinity sketch follows this list).
- This also requires you to maintain well-defined forward and backward compatibility for both data and the API, which can be challenging.
- It may take a long time before a pod is ready for traffic after it is started, so there may be a long window in which traffic is served by fewer backend pods than usual. Generally this is not a problem, as we tend to do production upgrades when the service is less busy, but it does extend the window for the first issue above.
- We cannot do comprehensive tests on the new pods as they are created. Moving application changes from dev/QA environments to production always carries some risk of breaking existing functionality. The readiness probe can do some checking, but it should remain a lightweight task that can be run periodically; it is not a suitable entry point for a full test suite.
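As mentioned in the first item above, one way to reduce flip-flopping between old and new pods within a user session is ClientIP session affinity on the frontend Service. The Service itself is not shown in this rolling-update example, so the names below are illustrative; this is only a minimal sketch:

```
kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
spec:
  sessionAffinity: ClientIP   # keep requests from the same client IP on the same pod
  selector:
    app: tomcat
  ports:
  - port: 80
    targetPort: 8080
```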

Blue/green Deployment

Blue/green deployment, as defined by TechTarget:

A blue/green deployment is a change management strategy for releasing software code. Blue/green deployments, which may also be referred to as A/B deployments, require two identical hardware environments that are configured exactly the same way. While one environment is active and serving end users, the other environment remains idle.

Container technology offers a stand-alone environment to run the desired service, which makes it easy to create the identical environments required by a blue/green deployment. The loose coupling between Services and ReplicaSets, together with the label/selector-based service routing in Kubernetes, makes it easy to switch between different backend environments. With these techniques, blue/green deployments in Kubernetes can be done as follows:

  • Before the deployment, the infrastructure is prepared like so:
    • Prepare the blue deployment and green deployment with TOMCAT_VERSION=7 and TARGET_ROLE set to blue or green respectively.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat-deployment-${TARGET_ROLE}
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: tomcat
        role: ${TARGET_ROLE}
    spec:
      containers:
      - name: tomcat-container
        image: tomcat:${TOMCAT_VERSION}
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080

  • Prepare the public service endpoint, which initially routes to one of the backend environments, say TARGET_ROLE=blue.

kind: Service
apiVersion: v1
metadata:
  name: tomcat-service
  labels:
    app: tomcat
    role: ${TARGET_ROLE}
    env: prod
spec:
  type: LoadBalancer
  selector:
    app: tomcat
    role: ${TARGET_ROLE}
  ports:
  - port: 80
    targetPort: 8080

  • Optionally, prepare a test endpoint so that we can visit the backend environments for testing. They are similar to the public service endpoint, but they are intended to be accessed internally by the dev/ops team only.

kind: Service
apiVersion: v1
metadata:
  name: tomcat-test-${TARGET_ROLE}
  labels:
    app: tomcat
    role: test-${TARGET_ROLE}
spec:
  type: LoadBalancer
  selector:
    app: tomcat
    role: ${TARGET_ROLE}
  ports:
  - port: 80
    targetPort: 8080

  • Update the application in the inactive environment, say green environment. Set TARGET_ROLE=green and TOMCAT_VERSION=8 in the deployment config to update the green environment.
  • Test the deployment via the tomcat-test-green test endpoint to ensure the green environment is ready to serve client traffic.
  • Switch the frontend Service routing to the green environment by updating the Service config with TARGET_ROLE=green (see the patch sketch after this list).
  • Run additional tests on the public endpoint to ensure it is working properly.
  • Now the blue environment is idle and we can:
    • leave it with the old application so that we can roll back if there’s issue with the new application
    • update it to make it a hot backup of the active environment
    • reduce its replica count to save the occupied resources
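For the Switch step above, the routing change is just an update to the Service selector. One way to do it imperatively is kubectl patch; this is a sketch that assumes the role label convention used in the manifests above:

```
# Repoint the public Service at the pods labeled role=green
kubectl patch service tomcat-service -p '{"spec":{"selector":{"app":"tomcat","role":"green"}}}'
```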


As compared to Rolling Update, the blue/green approach has some advantages:
* The public service is routed either to the old application or to the new application, but never to both at the same time.
* The time it takes for the new pods to become ready does not affect the quality of the public service, as traffic is only routed to the new pods once all of them have been tested to be ready.
* We can do comprehensive tests on the new environment before it serves any public traffic. Just keep in mind this is in production, and the tests should not pollute live application data.

Jenkins Automation

Jenkins provides an easy-to-set-up workflow for automating your deployments. With Pipeline support, it is flexible enough to build a zero-downtime deployment workflow and to visualize the deployment steps.
To facilitate the deployment of Kubernetes resources, we published the Kubernetes Continuous Deploy and the Azure Container Service plugins, built on the kubernetes-client library. You can deploy resources to Azure Kubernetes Service (AKS) or to general Kubernetes clusters without needing kubectl, and the plugins support variable substitution in the resource configuration, so you can deploy environment-specific resources to the clusters without updating the resource config.
We created a Jenkins Pipeline to demonstrate the blue/green deployment to AKS. The flow is as follows:

Jenkins Pipeline

  • Pre-clean: clean workspace.
  • SCM: pulling code from the source control management system.
  • Prepare Image: prepare the application docker images and upload them to some Docker repository.
  • Check Env: determine the active and inactive environment, which drives the following deployment.
  • Deploy: deploy the new application resource configuration to the inactive environment. With the Azure Container Service plugin, this can be done with:

    acsDeploy azureCredentialsId: 'stored-azure-credentials-id',
    configFilePaths: "glob/path/to/*/resource-config-*.yml",
    containerService: "aks-name | AKS",
    resourceGroupName: "resource-group-name",
    enableConfigSubstitution: true
  • Verify Staged: verify the deployment to the inactive environment to ensure it is working properly. Again, note this is in the production environment, so be careful not to pollute live application data during tests.
  • Confirm: Optionally, send email notifications for manual user approval to proceed with the actual environment switch.
  • Switch: Switch the frontend service endpoint routing to the inactive environment. This is just another service deployment to the AKS Kubernetes cluster.
  • Verify Prod: verify the frontend service endpoint is working properly with the new environment.
  • Post-clean: do some post clean on the temporary files.

For the Rolling Update strategy, you simply deploy the deployment configuration to the Kubernetes cluster, which is a single step.
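As a sketch (step parameter names may differ slightly between plugin versions, and the credential ID and file path below are placeholders), that single step with the Kubernetes Continuous Deploy plugin could look like:

```
stage('Deploy') {
    // 'k8s-kubeconfig' is a placeholder ID for kubeconfig credentials stored in Jenkins
    kubernetesDeploy kubeconfigId: 'k8s-kubeconfig',
        configs: 'k8s/tomcat-deployment-rolling-update.yml',
        enableConfigSubstitution: true
}
```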

Put It All Together

We built a quickstart template on Azure to demonstrate how we can do the zero-downtime deployment to AKS (Kubernetes) with Jenkins. Go to Jenkins Blue-Green Deployment on Kubernetes and click the button Deploy to Azure to get the working demo. This template will provision:

  • An AKS cluster, with the following resources:
    • Two similar deployments representing the environments “blue” and “green”. Both are initially set up with the tomcat:7 image.
    • Two test endpoint services (tomcat-test-blue and tomcat-test-green), which are connected to the corresponding deployments, and can be used to test if the deployments are ready for production use.
    • A production service endpoint (tomcat-service) which represents the public endpoint that the users will access. Initially it is routing to the “blue” environment.
  • A Jenkins master running on an Ubuntu 16.04 VM, with the Azure service principal credentials configured. The Jenkins instance has two sample jobs:
    • AKS Kubernetes Rolling Update Deployment pipeline to demonstrate the Rolling Update deployment to AKS.
    • AKS Kubernetes Blue/green Deployment pipeline to demonstrate the blue/green deployment to AKS.
    • We didn't include the email confirmation step in the quickstart template. To add that, you need to configure the email SMTP server details in the Jenkins system configuration, and then add a Pipeline stage before Switch:

      stage('Confirm') {
          mail (to: 'to@example.com',
              subject: "Job '$' ($) is waiting for input",
              body: "Please go to $.")
          input 'Ready to go?'
      }

      Follow the steps to set up the resources, and you can try it out by starting the Jenkins build jobs.

Source

How to setup Rancher 2 in an air gapped environment

Expert Training in Kubernetes and Rancher

Join our free online training sessions to learn how to manage Kubernetes workloads with Rancher.

Sign up here

It's sometimes not possible to use hosted services like GKE or AKS, and there are occasions when direct internet access is not possible (offline/air-gapped environments). In these instances it is still possible to use Rancher to manage your clusters.

In this post we’ll walk through what you need to do when you want to run Rancher 2.0 in an offline/air gapped environment.

Private Registry

Everything Rancher related runs in a container, so a place to store the containers in your environment is the first requirement. For this example we will use the Docker Registry. If you already have a registry in place, you can skip these steps.

Note: In Rancher 2.0, only registries without authentication are supported for pulling the images needed to get Rancher 2.0 up and running. This does not affect the registries you can configure for use in workloads.

To run the Docker Registry, you need to run an instance of the registry:2 image. We'll expose the default port (5000) and mount a host directory to make sure we have enough space (at least 8GB) and get proper I/O performance.

docker run -d -p 5000:5000 --restart=always --name registry -v /opt/docker-registry:/var/lib/registry registry:2
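To confirm the registry is up and reachable before pushing images, you can query the standard v2 catalog endpoint:

```
curl http://localhost:5000/v2/_catalog
# An empty registry returns: {"repositories":[]}
```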

Making the Rancher images available

With the registry set up, you can start syncing the images needed to run Rancher 2.0. For this step, we will go through two scenarios:

  • Scenario 1: You have one host that can access DockerHub to pull and save the images, and a separate host that can access your private registry to push the images.
  • Scenario 2: You have one host that can access both DockerHub and your private registry.

Scenario 1: One host that can access DockerHub, separate host that can access private registry

In every release (https://github.com/rancher/rancher/releases/tag/v2.0.0), the needed scripts for this scenario are provided. You will need the following:

  • rancher-save-images.sh: This script will pull all needed images from DockerHub, and save all of the images as a compressed file called rancher-images.tar.gz. This file can be transferred to your on-premise host that can access your private registry.
  • rancher-load-images.sh: This script will load the images from rancher-images.tar.gz and push them to your private registry. You have to supply the hostname of your private registry as the first argument to the script: rancher-load-images.sh registry.yourdomain.com:5000

Flow for scenario 1

Scenario 2: One host that can access both DockerHub and private registry

For this scenario, we provide a file called rancher-images.txt in every release (https://github.com/rancher/rancher/releases/tag/v2.0.0). This file contains every image needed to run Rancher 2.0. It can be tied into any existing image-sync automation you might have, or you can use my scripts/Docker image as shown below.

Flow for scenario 2
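If you prefer not to use extra scripts, mirroring a single image by hand follows the usual pull/tag/push pattern; for example, for the Rancher server image (the tag shown is illustrative):

```
docker pull rancher/rancher:v2.0.0
docker tag rancher/rancher:v2.0.0 registry.yourdomain.com:5000/rancher/rancher:v2.0.0
docker push registry.yourdomain.com:5000/rancher/rancher:v2.0.0
```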

Configuring Rancher to use the private registry

The last step in the process is to configure Rancher to use the private registry as the source for its images. This can be configured using the system-default-registry setting in the Settings view.

Settings view

Configure the setting for use of the private registry; do not prefix the value with https:// or http://

This ensures that the rancher/rancher-agent image used to add nodes to the cluster will be prefixed with this value. All other required images will also use this configuration.

If you want to configure the setting when starting the rancher/rancher container, you can use the environment variable CATTLE_SYSTEM_DEFAULT_REGISTRY.

Example:

docker run -d -p 80:80 -p 443:443 -e CATTLE_SYSTEM_DEFAULT_REGISTRY=registry.yourdomain.com:5000 registry.yourdomain.com:5000/rancher/rancher:v2.0.0

Creating a cluster

You can access the Rancher 2.0 UI by using the IP of the host the rancher/rancher container is running on. The initial start-up takes about a minute, and on first access you will be prompted to set a password.

Set password

Next, you have to configure the URL that nodes will use to contact this Rancher 2.0 installation. By default, it will show the IP you are using to visit the UI, but if you are using a DNS name or a load balancer, you can change it here.

In the Global view, click Add Cluster

Adding a cluster

For this post, you will be creating a Custom cluster without any advanced options. Please refer to the documentation on configuring advanced options on your cluster.

Adding custom cluster called testcluster

Click Next to create the cluster testcluster.

In the next screen, you get a generated command to launch on your nodes that you want to add to the cluster. The image used in this command should automatically be prefixed with your configured private registry.

Adding nodes to your cluster

You can now select the roles you want for the node you are adding, and optionally configure the IPs used for the node. If not specified, the IP will be auto-detected. Please refer to the documentation for the meaning of the Node Roles.

Configuring access to the registry inside a project

As previously mentioned, at this point Rancher 2.0 does not support using a private registry with authentication for the images needed to run Rancher 2.0 itself. It does support authenticated registries for workloads in projects.

To configure your registry with authentication, you can open your project in a cluster (Default is automatically created for you). When you are in the Default project, you can navigate to Resources -> Registries to configure your registry used for workloads.

Configuring Registries in project Default

Click Add Registry

Adding a registry

Fill in the needed information to access your registry.

Providing credentials for registry

Summary

I hope the information in this how-to was useful and that you were able to set up Rancher 2.0 in your environment. I know a lot of environments also have a proxy; we will create separate posts for proxy setups soon. Stay tuned.

I will finish by posting a gist with some of the commands used in this post; hopefully these will be helpful as a reference or inspiration.

If you have any questions, join our Rancher Users Slack by visiting https://slack.rancher.io and join the #2-0-tech-preview channel. You can also visit our forums to ask any questions you may have: https://forums.rancher.com/

Sebastiaan van Steenis

Sebastiaan is a support engineer at Rancher Labs, helping customers on their journey with containers. You can find him on Rancher Users Slack (https://slack.rancher.io) if you have any questions.

GitHub: superseb

Twitter: @svsteenis


Source