Heptio Contour 0.7 Release Brings Improved Ingress Control and Request-Prefix Rewriting Support

Heptio Contour is an open source Kubernetes ingress controller that uses Envoy, Lyft’s open source edge and service proxy, to provide a modern way to direct internet traffic into a cluster. Last Friday, we released Contour version 0.7, which includes some helpful new features that you should know about if you’re evaluating options for incoming load balancing in Kubernetes.

Contour 0.7 enables:

Better traffic control within a cluster: With support for the ‘ingress.class’ annotation, you’ll now be able to specify where incoming traffic should go within a cluster. One key use case here is separating production traffic from staging and development; for example, if the ‘contour.heptio.com/ingress.class: production’ annotation is on an IngressRoute object, it will only be processed by Contour containers running with the flag ‘--ingress-class-name=production’.
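
A rough sketch of how the two pieces fit together, assuming an existing IngressRoute named my-route (a placeholder name): the annotation goes on the route object, and the matching class name is passed to the Contour process at startup.

kubectl annotate ingressroute my-route contour.heptio.com/ingress.class=production

contour serve --ingress-class-name=production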

Rewriting a request prefix: Need to route a legacy or enterprise application to a different path from your specified ingress route? You can now use Contour to rewrite a path prefix and ensure that incoming traffic goes to the right place without issue. (See GitHub for more detail on this.)

Cost savings through GZIP compression: Contour 0.7 features GZIP compression by default, so that you can see cost savings through reduced bandwidth, while speeding up load times for your customers.

Envoy health checking and 1.7 compatibility: Envoy’s now-exposed /healthz endpoint can be used with Kubernetes readiness probes, and Contour is also now compatible with Envoy 1.7, making it easier for you to get Prometheus metrics for HTTP/HTTPS traffic.

Source

5 Tips for Making Containers Faster


One of the selling points of containers is that containerized
applications are generally faster to deploy than virtual machines.
Containers also usually perform better. But just because containers are
faster by default than alternative infrastructure doesn’t mean that
there are not ways to make them even faster. You can go beyond the
defaults by optimizing Docker container image build time, performance
and resource consumption. This post explains how.

Defining “Faster”

Before we delve into Docker optimization tips, let me first explain what
I mean when I write about making containers “faster.” Within the context
of a conversation about Docker, the word faster can have several
meanings. It can refer to the execution speed of a process or an
application that runs inside a container. It can refer to image build
time. It can refer to the time it takes to deploy an application, or to
push code through the entire delivery pipeline. In this post, I’ll
consider all of these angles by discussing approaches to making Docker
faster in multiple ways.

Make Docker Faster

The following strategies can help make Docker containers faster.

Take a Minimalist Approach to Images

The more code you have inside your image, the longer it will take to
build the image, and for users to download the image. In addition,
code-heavy containers may run sub-optimally because they consume more
resources than required. For all of these reasons, you should strive to
keep the code that goes into your container images to the bare minimum
that is required for whatever your image is supposed to do.

In some cases, designing minimalist container images may require you to
rearchitect your application itself. Bloated applications will always
suffer from slow deployment and weak performance, whether you deploy
them in containers or in something else.

You should also resist the temptation, when writing your Dockerfile, to
add services or commands that are not strictly necessary. For example,
if your application doesn’t need an SSH server, don’t include one. For
another example, avoid running apt-get upgrade if you don’t need to.
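
A quick way to see where the bulk of an existing image comes from, layer by layer, is Docker’s history command (myapp:latest is a placeholder tag):

docker history myapp:latest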

Use a Minimalist Operating System

One of the greatest benefits of containers as compared to virtual
machines is that containers don’t require you to duplicate an entire
operating system to host an application. To take advantage of this
feature to greatest effect, you should host your images with an
operating system that does everything you need, and nothing more. Your
operating system shouldn’t include extra services or data if they do not
advance the mission of your Docker environment. Anything extra is bloat,
which will undercut the efficiency of your containers. Fortunately, you
don’t have to build your own minimalist operating system for Docker.
Plenty of pre-built Linux distributions with small footprints are
available for hosting Docker, such as
RancherOS.

Optimize Build Time

Image build time is often the biggest kink in your continuous delivery
pipeline. When you have to wait a long time for your Docker images to
build, you delay your entire delivery process. One way to speed image
build time is to use registry mirrors. Mirrors make builds faster by
reducing the amount of time required to download components when
building an image. Combining multiple RUN commands into a single
command also improves build time, because it reduces the number of
layers in your image, and it optimizes the image size to boot.
Docker’s build cache feature is another useful way to improve build
speed. The cache allows you to
take advantage of existing cached images, rather than building each
image from scratch. Finally, creating minimalist images, as discussed
above, will speed build time, too. The less you have to build, the
faster your builds will be.
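
For instance, a CI runner starting with a cold cache can seed the build cache from a previously pushed image (myapp is a placeholder repository name):

docker pull myapp:latest

docker build --cache-from myapp:latest -t myapp:latest .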

Use a Containers-as-a-Service Platform

For staff at many organizations, the greatest obstacle to deploying
containers quickly and efficiently results from the complexity of
building and managing a containerized environment themselves. This is
why using a Containers-as-a-Service platform, or CaaS, can be handy.
With a CaaS, you get preconfigured environments, as well as deployment
and management tools. A CaaS helps to prevent the bottlenecks that would
otherwise slow down a continuous delivery chain.

Use Resource Quotas

By default, each container can consume as many resources as it wants.
This may not always be ideal because poorly designed or malfunctioning
containers can eat up resources, thereby making other containers run
slowly. To help prevent this problem, you can set quotas on each
container’s compute, memory and disk I/O allotments. Just keep in
mind, of course, that misconfigured quotas can cause serious performance
problems, too; you therefore need to ensure that your containers are
able to access the resources they require.
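
For example, the following caps a container at one and a half CPUs, 512 MB of memory, and 10 MB/s of writes to one disk (the image and device path are placeholders):

docker run -d --cpus=1.5 --memory=512m --device-write-bps /dev/sda:10mb nginx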

Conclusion

Even if your containers are already fast, you can probably make them
faster. Optimizing what goes into your images, improving image build
time, avoiding operating system bloat, taking advantage of CaaS and
setting resource quotas are all ways to improve the overall speed and
efficiency of your Docker environment.

Source

Adopting a GKE Cluster with Rancher 2.0

Rancher 2.0 is out and
odds are, you’re wondering what’s so shiny and new about it. Well,
here’s a huge selling point for the next big Rancher release:
Kubernetes cluster adoption! That’s right, we here at Rancher wanted
more kids, so we decided it was time to adopt. In all seriousness
though, this feature helps make Rancher more relevant to developers who
already have Kubernetes clusters deployed and are looking for a new way
to manage them. One of the most powerful aspects of this feature is that
it allows you to build a cluster from multiple cloud container engines,
including GKE, and keep them all under one management roof. In order to
adopt a GKE cluster into Rancher 2.0, you’ll first need an instance of
Rancher 2.0 running. To bootstrap a 2.0 server, check out this
guide here. Stop before selecting Add Hosts and pop right back here
when you’re
done.

Acquiring your Kubectl Command

Now that you have a Rancher 2.0 server up and running, you’ll be
presented with a page like the one below:

Rancher 2.0 Home Page

Of course, we want to select Use Existing Kubernetes as we’re trying
to adopt an existing GKE cluster. On the next page, we’re presented
with a Host Registration URL menu, where we want to provide the
publicly accessible hostname and port for our Rancher server. As I set
up my host using a domain name in the form rancher.<domain>.<tld>,
Rancher has already found the address for the site and defaults to the
proper hostname. If you’re using a different network setup, just know
that this hostname has to be accessible by all the machines in the
Kubernetes cluster we are adopting. Click Save when you’re finished
making your choice.

Host Registration URL

Now we are presented with a kubectl command which can be copied to
your clipboard by clicking the little clipboard icon to the right of the
command. Copy the command and save it in a notepad to use later.

Kubectl Apply

Important note: if you already have a cluster in GKE, you can skip
the following section and go straight to Adopting a GKE
Cluster

Creating a GKE Cluster

Next, hop on over to your GKE control panel and, if you don’t yet have
a GKE cluster up and running, click the Create Cluster button.

Create Cluster

The cluster settings can be configured however you wish. I’ll be using
the name eric-rancher-demo, the us-central1-a zone,
version 1.7.5-gke.1 (default) and 1 vCPU, 2 GB RAM machines. Leave the
OS as the Container-Optimized OS (cos) and all other settings at the
default. My finished settings look like this:

Container Cluster Settings

You might want to grab some coffee after clicking Create, as GKE takes
a while (5-10 minutes) to stand up your Kubernetes cluster.

Adopting a GKE Cluster

Now that we have both a Rancher 2.0 server stood up and a GKE cluster
running and ready, we want to configure gcloud utils and kubectl to
connect to our GKE cluster. In order to install gcloud utils on Debian
and Ubuntu based machines, follow the steps
outlined here on
Google’s website. For all other distros, find your
guide here. Once we get
the message gcloud has now been configured!, we can move onto the next
step. To install kubectl all we need to do is
type gcloud components install kubectl. Now that we
have kubectl installed, in order to connect to our cluster, we want to
click on the Connect to the cluster link at the top of the page in
GKE.

Connect to the cluster

A window will pop up providing instructions
to Configure kubectl command line access by running the following command.

Copy that command and paste it into your terminal window. In this case:

$ gcloud container clusters get-credentials eric-rancher-demo --zone us-central1-a --project rancher-dev
> Fetching cluster endpoint and auth data.
> kubeconfig entry generated for eric-rancher-demo.

Our kubectl is now hooked up to the cluster. We should be able to
simply paste the command from the Rancher 2.0 setup we did earlier, and
Rancher will adopt the cluster.

$ kubectl apply -f http://rancher.<domain>.<tld>/v3/scripts/2CFC9454A034E7C3E367:1483142400000:feQRTz4WmIemlAUSy4O37vuF0.yaml
> namespace “cattle-system” created
> serviceaccount “rancher” created
> clusterrolebinding “rancher” created
> secret “rancher-credentials-39124a03” created
> pod “cluster-register-39124a03” created
> daemonset “rancher-agent” created

Note: You cannot adopt a Kubernetes cluster if it has been previously
adopted by a Rancher 2.0 server unless you delete the
namespace cattle-system from the Kubernetes installation. You can do
this by typing kubectl delete namespace cattle-system and waiting 5
minutes for the namespace to clear out.

Source

Local Kubernetes for Windows – MiniKube vs Docker Desktop

Moving your application into a Kubernetes cluster presents two major challenges. The first one is the adoption of Kubernetes deployments as an integral part of your Continuous Delivery pipelines. Thankfully this challenge is already solved using the native Codefresh-Kubernetes integration that also includes the GUI dashboard, giving you a full status of your cluster.

The second challenge for Kubernetes adoption is the way developers work locally on their workstations. In most cases, a well-designed 12-factor application can be developed locally without the need for a full cluster. Sometimes, however, the need for a cluster that is running locally is imperative especially when it comes to integration tests or any other scenario where the local environment must represent the production one.

There are several ways to run a Kubernetes cluster locally. In this article, we will examine two such solutions for Windows, Minikube and Docker Desktop (future blog posts will cover Linux and Mac).

A local machine Kubernetes solution can help developers to configure and run a Kubernetes cluster in their local development environments and test their application during all development phases, without investing significant effort to configure and manage a Kubernetes cluster.

We are evaluating these solutions and providing a short comparison based on ease of installation, deployment, and management. Notice that Minikube is available on all major platforms (Windows, Mac, Linux). Docker for Windows obviously works only on Windows, and even there it has some extra requirements.

Windows considerations

Docker-For-Windows has recently added native Kubernetes integration. To use it you need a very recent OS version (Windows 10 Pro). If you have an older version (e.g. Windows 7) or a non-Pro edition (e.g. Home) then Minikube is the only option.

Docker-for-windows uses a Type-1 hypervisor (Hyper-V), which generally performs better than Type-2 hypervisors such as VirtualBox, while Minikube supports both types. Unfortunately, your choice of technology comes with a couple of limitations, because you cannot have Type-1 and Type-2 hypervisors running at the same time on your machine:

  • If you are running virtual machines on your desktop (for example, managed with Vagrant and VirtualBox), you will not be able to run them once you enable the Type-1 hypervisor.
  • If you want to run Windows containers, then using docker-for-windows is the only option you have.
  • Switching between these two hypervisors requires a machine restart.
  • To use Hyper-V hypervisor you need to have installed Windows 10 Pro edition on your development machine.

Depending on your needs and your development environment, you need to make a choice between docker-for-windows and Minikube.

Both solutions can be installed either manually or by using the Chocolatey package manager for Windows. Installation of Chocolatey is easy: just use the following command from PowerShell in administrative mode:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Complete installation instructions for Chocolatey can be found in the documentation.

Docker on Windows with Kubernetes support

If you want to run Windows containers then Docker-For-Windows is the only possible choice. Minikube will only run Linux based containers (in a VM).

This means that for Windows containers the considerations mentioned previously are actually hard requirements. If you want to run Windows Containers then:

  • You need to run Windows 10 Pro
  • You need to enable the hyper-v hypervisor

In addition, at the time of writing, Kubernetes is only available in Docker for Windows 18.06 CE Edge. Docker for Windows 18.06 CE Edge includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance as a single node cluster, and it is pre-configured in terms of clusters, users and contexts.

You have two options to install docker-for-windows: either download it from the Docker Store, or use the Chocolatey package manager. If you are using Chocolatey (recommended), you can install docker-for-windows with the following command:

choco install docker-for-windows -pre

Hint: If Hyper-V is available but not enabled, you can enable it using the following command in PowerShell run as administrator. Note that enabling/disabling the Hyper-V hypervisor requires a restart of your local machine.

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

After the successful installation of docker-for-windows, you can verify that you have installed Kubernetes by executing the following command in Windows PowerShell:
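
For example, kubectl can confirm that both the client and the Kubernetes server respond:

kubectl version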

You can view the Kubernetes configuration details (like name, port, context) using the following command:
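
For example, kubectl can print the cluster, user, and context details from its configuration:

kubectl config view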

Management

When Kubernetes support is enabled, you can deploy your workloads in parallel on Kubernetes, Swarm, and as standalone containers. Note that enabling or disabling the Kubernetes server does not affect your other workloads. If you are working with multiple Kubernetes clusters and different environments you will be familiar with switching contexts. You can view contexts using the kubectl config command:

kubectl config get-contexts

Set the context to use docker-for-desktop:

kubectl config use-context docker-for-desktop

Unfortunately, Kubernetes does not come by default with a dashboard and you need to enable it with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

To view the dashboard in your web browser run:
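
For example, kubectl’s built-in proxy serves the dashboard on localhost port 8001:

kubectl proxy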

And navigate to your Kubernetes Dashboard at: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

Deployment

Deploying an application is very straightforward. In this example, we install a cluster of nginx servers, using the following commands:

kubectl run nginx --image nginx

kubectl expose deployment nginx --port 80 --target-port 80 --name nginx

Once Kubernetes has finished downloading the containers, you can see them by using the command:
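
For example, listing the pods shows the running nginx containers:

kubectl get pods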

You can use the dashboard, as mentioned above, to verify that nginx is installed and your cluster is in working condition. You can deploy any other Kubernetes application you have developed in a similar manner.

Kubernetes on Windows using minikube

Another option for running Kubernetes locally is to use Minikube. In general, Minikube is a small virtual machine that runs Linux with the Docker daemon pre-installed. This is actually the only option if your machine does not satisfy the requirements mentioned in the first part of this article.

The main advantage of Minikube for Windows is that it supports several drivers including Hyper-V and VirtualBox, and you can use most of the Kubernetes add-ons. You can find a list of Minikube add-ons here.

Installation

Instead of manually installing all the packages for Minikube, you can install all prerequisites at once using the Chocolatey package manager. To install Minikube you can use the following command in the PowerShell:

choco install minikube -y

Option 1 – Hyper-V Support

To start a Minikube cluster with Hyper-V support, you first need to create an external network switch based on a physical network adapter (Ethernet or Wi-Fi). The following steps must be followed:

Step 1: Identify physical network adapters (Ethernet and/or Wi-Fi) using the command:
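
For example, in PowerShell:

Get-NetAdapter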

Step 2: Create an external virtual switch using the following command:

New-VMSwitch -Name "myCluster" -AllowManagement $True -NetAdapterName "<adapter_name>"

Finally, to start the Kubernetes cluster use the following command:

minikube start --vm-driver=hyperv --hyperv-virtual-switch=myCluster

If the last command was successful, then you can use the following command to see the Kubernetes cluster:
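
For example, listing the nodes confirms that the cluster is up:

kubectl get nodes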

Finally, if you want to delete the created cluster, then you can achieve it with the following command:
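
For example, Minikube can tear down its own VM:

minikube delete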

Option 2 – VirtualBox Support

You can also use Minikube in an alternative mode where a full virtual machine is used via VirtualBox. To start a Minikube cluster in this mode, you need to execute the following command:
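
For example, selecting the VirtualBox driver explicitly:

minikube start --vm-driver=virtualbox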

Note that you need to disable Hyper-V in order for Minikube to use VirtualBox, and a VirtualBox installation is required. Disabling the Hyper-V hypervisor can be done with the following command:

Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

Note that when you are using Minikube without a local Docker daemon (docker-for-windows), you need to instruct the Docker CLI to send its commands (such as docker ps) to the Docker daemon running inside the Minikube virtual machine rather than to a local one.
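
One way to do this, assuming a PowerShell session, is to evaluate the environment variables that Minikube prints and then run the Docker CLI as usual:

& minikube docker-env | Invoke-Expression

docker ps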

Management

After successfully starting a Minikube cluster, you also get a kubectl context called “minikube”, which is set as the default during startup. You can switch between contexts using the command:

kubectl config use-context minikube

Furthermore, to access the Kubernetes dashboard, you need to execute/run the following command:
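
For example, Minikube can open it directly:

minikube dashboard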

Additional information on how to configure and manage Minikube Kubernetes clusters can be found in the documentation.

Deployment

Deploying an application is similar for both cases (Hyper-V or VirtualBox). For example, you can deploy, expose, and scale a service by using the expected Kubernetes commands:

kubectl run my-nginx --image=nginx --port=80 // deploy

kubectl expose deployment my-nginx --type=NodePort // for exposing the service

kubectl scale --replicas=3 deployment/my-nginx

You can navigate your Minikube cluster, either by visiting the Kubernetes dashboard or by using kubectl.

Conclusions

After looking at both solutions, here are our results.

Minikube is a mature solution available for all major operating systems. Its main advantage is that it provides a unified way of working with a local Kubernetes cluster regardless of the operating system. It is perfect for people that are using multiple OS machines and have some basic familiarity with Kubernetes and Docker.

Pros:

  • Mature solution
  • Works on Windows (any version and edition), Mac, and Linux
  • Multiple drivers that can match any environment
  • Can work with or without an intermediate VM on Linux (--vm-driver=none)
  • Installs several plugins (such as dashboard) by default
  • Very flexible on installation requirements and upgrades

Cons:

  • Installation and removal not as streamlined as other solutions
  • Can conflict with local installation of other tools (such as Virtualbox)

Docker for Windows is a solution exclusively for Windows with some strict requirements. Its main advantages are its easy installation and user experience, and the easy switch between Windows and Linux containers.

Pros:

  • Very easy installation for beginners
  • The best solution for running Windows containers
  • Integrated Docker and Kubernetes solution

Cons:

  • Requires Windows 10 Pro edition and Hyper V
  • Cannot be used simultaneously with VirtualBox, Vagrant, etc.
  • Relatively new, possibly unstable
  • The sole solution for running Windows containers

Let us know in the comments which local Kubernetes solution you are using and why.

Source

Why Is Securing Kubernetes so Difficult?

If you’re already familiar with Kubernetes, the question in the title will probably resonate deep within your very being. And if you’re only just getting started on your cloud native journey, and Kubernetes represents a looming mountain to conquer, you’ll quickly come to realise the pertinence of the question.

Security is hard at the best of times, but when your software applications are composed of a multitude of small, dynamic, scalable, distributed microservices running in containers, then it gets even harder. And it’s not just the ephemerality of the environment that ratchets up the difficulties, it’s also the adoption of new workflows and toolchains, all of which bring their own new security considerations.

Let’s dig a little deeper.

Skills

Firstly, Kubernetes, and some of the other tools that are used in the delivery pipeline of containerised microservices, are complex. They have a steep learning curve, are subject to an aggressive release cycle with frequent change, and require considerable effort to keep on top of all of the nuanced aspects of platform security. If the team members responsible for security don’t understand how the platform should be secured, or worse, nobody has been assigned the responsibility, then it’s conceivable that glaring security holes could exist in the platform. At the very best, this could prove embarrassing, and at worst, could have pernicious consequences.

Focus

In the quest to become innovators in their chosen market, or to be nimbler in response to external market forces (for example, customer demands or competitor activity), organizations new and old, small and large, are busy adopting DevOps practices. The focus is on the speed of delivery of new features and fixes, with the blurring of the traditional lines between development and operations. It’s great that we consider the operational aspects as we develop and define our integration, testing and delivery pipeline, but what about security? Security shouldn’t be an afterthought; it needs to be an integral part of the software development lifecycle, considered at every step in the process. Sadly, this is often not the case, but there is a growing recognition of the need for security to ‘shift left’, and to be accommodated in the delivery pipeline. The practice is coined DevSecOps or continuous security.

Complexity

We’ve already alluded to the fact that Kubernetes and its symbiotic tooling are complex in nature and somewhat difficult to master. But it gets worse, because there are multiple layers in the Kubernetes stack, each of which has its own security considerations. It’s simply not enough to lock down one layer, whilst ignoring the other layers that make up the stack. This would be a bit like locking the door, whilst leaving the windows wide open.

Having to consider and implement multiple layers of security introduces more complexity. But it also has a beneficial side effect; it provides ‘defence in depth’, such that if one security mechanism is circumvented by a would-be attacker, another mechanism in the same or another layer can intervene and render the attack ineffective.

Securing All the Layers

What are the layers, then, that need to be secured in a Kubernetes platform?

First, there is an infrastructure layer that comprises the machines and the networking connections between them. The machines may consist of physical or abstracted hardware components, and will run an operating system and (usually) the Docker Engine.

Second, there is a further infrastructure layer that is composed of the Kubernetes cluster components: the control plane components running on the master node(s), and the components that interact with container workloads running on the worker nodes.

The next layer deals with applying various security controls to Kubernetes, in order to control access to and from within the cluster, to define policy for running container workloads, and to provide workload isolation.

Finally, a workload security layer deals with the security, provenance, and integrity of the container workloads themselves. This security layer should not only deal with the tools that help to manage the security of the workloads, but should also address how those tools are incorporated into the end-to-end workflow.

Some Common Themes

It’s useful to know that there are some common security themes that run through most of the layers that need our consideration. Recognizing them in advance, and taking a consistent approach in their application, can help significantly in implementing security policy.

  • Principle of Least Privilege – a commonly applied principle in wider IT security, its concern is with limiting the access users and application services have to available resources, such that the access provided is just sufficient to perform the assigned function. This helps to prevent privilege escalation; if a container workload is compromised, for example, and it’s been deployed with just enough privileges for it to perform its task, the fallout from the compromise is limited to the privileges assigned to the workload.
  • Software Currency – keeping software up to date is crucial in the quest to keep platforms secure. It goes without saying that security-related patches should be applied as soon as is practically possible, and other software components should be exercised thoroughly in a test environment before being applied to production services. Some care needs to be taken when deploying brand new major releases (e.g. 2.0), and as a general rule, it’s not wise to deploy alpha, beta, or release candidate versions to production environments. Interestingly, this doesn’t necessarily hold true for API versions associated with Kubernetes objects. The API for the commonly used Ingress object, for example, has been at version v1beta1 since Kubernetes v1.1.
  • Logging & Auditing – having the ability to check back to see what or who instigated a particular action or chain of actions, is extremely valuable in maintaining the security of a platform. The logging of audit events should be configured in all layers of the platform stack, including the auditing of Kubernetes cluster activity using audit policy. Audit logs should be collected and shipped to a central repository using a log shipper, such as Fluentd or Filebeat, where they can be stored for subsequent analysis using a tool such as Elasticsearch, or a public cloud equivalent.
  • Security vs. Productivity Trade Off – in some circumstances, security might be considered a hindrance to productivity; in particular, developer productivity. The risk associated with allowing the execution of privileged containers, for example, in a cluster dedicated to development activities, might be a palatable one, if it allows a development team to move at a faster pace. The trade off between security and productivity (and other factors) will be different for a development environment, a production environment, and even a playground environment used for learning and trying things out. What’s unacceptable in one environment, may be perfectly acceptable in another. The risk associated with relaxing security constraints should not simply be disregarded, however; it should be carefully calculated, and wherever possible, be mitigated. The use of privileged containers for example, can be mitigated using Kubernetes-native security controls, such as RBAC and Pod Security Policies.

Series Outline

Configuring security on a Kubernetes platform is difficult, but not impossible! This is an introductory article in a series entitled Securing Kubernetes for Cloud Native Applications, which aims to lift the lid on aspects of security for a Kubernetes platform. We can’t cover every topic and every facet, but we’ll aim to provide a good overview of the security requirements in each layer, as well as some insights from our experience of running production-grade Kubernetes clusters for our customers.

The series of articles consists of the following:

  • Why Is Securing Kubernetes so Difficult? (this article)
  • Securing the Base Infrastructure of a Kubernetes Cluster
  • Securing the Configuration of Kubernetes Cluster Components
  • Applying Best Practice Security Controls to a Kubernetes Cluster
  • Managing the Security of Kubernetes Container Workloads

Source

Using Vagrant to Emulate Rancher Environments

I spend a large amount of my time helping clients implement Rancher
successfully. As Rancher is involved in just about every vertical, I
come across a large number of different infrastructure configurations,
including (but not limited to!) air-gapped, proxied, SSL, HA Rancher
Server, and non-HA Rancher Server.

Scenario & Criteria

What I wanted was a way to quickly emulate an environment to allow me to
more closely test or replicate an issue. Now this could be done in a
number of ways, but the solution I wanted had to meet the following
criteria:

  • Run locally — Yes, it can be done in the cloud, but running
    services in the cloud costs money, so I wanted to keep the cost
    down.
  • Be local host OS agnostic — So others can use it, I didn’t
    want to tie it to Windows, MacOS or Linux.
  • Emulate multiple infrastructure scenarios — As in the criteria
    above, but I must also be able to use it as a basis for doing local
    application development and testing of containers.
  • Minimise bandwidth
  • Be easy to configure — So that anyone can use it.

The Solution

To fulfill the first two criteria, I needed a hypervisor (preferably
free) that could run on all platforms locally. Having had a reasonable
amount of experience with
VirtualBox (and it meeting
my preferably free criterion), I decided to use it. The third criterion
was to allow me to emulate environments locally that had the following
configurations:

  • Air gap — Where there was no connection to the internet to
    install Rancher Server and nodes
  • Proxied — Where the Internet access was via a proxy server
  • SSL — Where the connection to the Rancher Server SSL
    terminated
  • HA — The ability to stand up Rancher Server in a HA
    configuration with an externalized database and a configurable
    number of nodes

To meet the fourth criterion to minimise bandwidth, I decided to run a
registry mirror. It allowed me to destroy the setup and start afresh
quickly as all the VMs had Docker engine set to pull via the mirror. Not
only does it save bandwidth, but it also significantly speeds up
rebuilds. Files for the mirror persist to the local disk to preserve
them between rebuilds. For the final criterion of easy configuration, I
decided that I was going to use Vagrant. For those reading who haven’t
used Vagrant:

  • It’s open-source software that helps you build and maintain
    portable, virtual software development environments.
  • It provides the same, easy workflow regardless of your role as a
    developer, operator, or designer. It leverages a declarative
    configuration file which describes all your software requirements,
    packages, operating system configuration, users, and more.
  • It works on Mac, Linux, Windows, and more. Remote development
    environments force users to give up their favorite editors and
    programs. Vagrant works with tools on your local system with which
    you’re already familiar.

After you download Vagrant, you can
specify that it consumes its config from a separate file. For this
setup, all configurable options are externalized into a config.yaml
file that is parsed at startup. This means that you can configure and
use it without having to have a deep, technical understanding of
Vagrant. I also added an NFS server to the solution, running on the
master node so that services could be tested with persistent storage.
So, what does the final solution look like?

The master node runs a bunch of supporting services like MySQL and
HAProxy to help all of this hang together. In true Docker style, all of
these supporting services are containerised!

Minimum Setup

To run this solution, the setup will create a minimum of three VMs.
There is a master, a minimum of one Rancher Server, and one node. Below
is an example of the config.yaml file with the main parts that you
will change highlighted:

You can find more detail on the config options in our repository for
this setup.

Quick Start

For those of you wanting a Quick Start, you need Git, Vagrant, and
VirtualBox installed.

Then, it is as simple as dropping to a command prompt and running the
following commands:

git clone https://github.com/rancher/vagrant
cd vagrant
vagrant up

Rancher Server will start running Cattle with three nodes. This Quick
Start also supports Rancher 2.0, so if
you want to check out the Tech Preview with the minimum of effort, run:

git clone https://github.com/rancher/vagrant
cd vagrant
git checkout 2.0
vagrant up

Thanks to my colleague James, for helping me enhance this solution to
what it has become.

About the Author

Chris Urwin works
as a field engineer for Rancher Labs based out of the UK. He spends his
days helping our enterprise clients get the most out of Rancher, and his
nights wishing he had more hair on his head!

Source

The Top 6 Questions You Asked on Containerizing Production Apps

We recently hosted IDC research manager Gary Chen as a guest speaker on a webinar where he shared results from a recent IDC survey on container and container platform adoption in the enterprise. IDC data shows that more organizations are deploying applications into production using containers, driving the need for container platforms like Docker Enterprise that integrate broad management capabilities including orchestration, security and access controls.

The audience asked a lot of great questions about both the IDC data and containerizing production applications. We picked the top questions from the webinar and recapped them here.

If you missed the webinar, you can watch the webinar on-demand here.

Top Questions from the Webinar

Q: What are the IDC stats based on?

A: IDC ran a survey of 300+ container deployers in the US, each from a company with more than 1,000 employees and with primary responsibility for container infrastructure, and combined the results with a variety of data sources IDC collects about the industry.

Q: IDC mentioned that 54% of containerized applications are traditional apps. Is there a simple ‘test’ to see if an app can be containerized easily?

Source: IDC, Container Infrastructure Market Assessment: Bridging Legacy and Cloud-Native Architectures — User Survey Summary, Doc # US43642018, March 2018

A: Docker works with many organizations to assess and categorize their application portfolios based on the type of app, their dependencies, and their deployment characteristics. Standalone, stateless apps such as load balancers, web, PHP, and JEE WAR apps are generally the easiest applications to containerize. Stateful and clustered apps are also candidates for containerization, but may require more preparation.

Q: How do we containerize applications that are already in production?

A: Docker has created a set of tools and services that help organizations containerize existing applications at scale. We help assess and analyze your application portfolio, and have the tools to automate application discovery and conversion to containers and a methodology to help integrate them into your existing software pipelines and infrastructure. Find out more here.

Q: How do we decide whether to use Swarm or Kubernetes for orchestration of applications in production?

A: It comes down to the type of application and your organization’s preferences. The best part of Docker Enterprise is that you can use either orchestrator within a single platform so your workflows and UI are consistent. Your application can be defined in Compose files or Kubernetes YAML files. Additionally, you can choose to deploy a Compose file to Swarm or Kubernetes within Docker Enterprise.

Q: How can containers be checked for vulnerabilities?

A: Containers are based on an image file. Vulnerability scanning in Docker Enterprise does a binary level scan on each layer of the image file, identifies the software components in each layer and compares it against the NIST CVE database. You can find more details here.

Q: We’re exploring different cloud-based Kubernetes services. Why should we look at Docker Enterprise?

A: The value of the Docker Enterprise platform’s integrated management and security goes well beyond a commercially-supported Kubernetes distribution. Specifically Docker Enterprise allows you to leverage these capabilities consistently regardless of the cloud provider.

With Docker Enterprise, you get an integrated advanced image registry solution that includes vulnerability scanning, registry mirroring and caching for distributed development teams, and policy-based image promotions for scalable operations. We also provide integrated operations capabilities around Kubernetes – simplifying things like setting up teams, defining roles and access controls, integrating LDAP, creating client certificates, and monitoring health. Docker Enterprise makes the detailed configurations of Kubernetes quick and easy to get started and use.

Source

Docker Certified Containers from Monitoring Partners

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize Containers and Plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on the Docker Enterprise container platform with support from both Docker and the publisher.

In this review, we’re looking at solutions to monitor Docker containers. Docker enables developers to iterate faster with software architectures consisting of many microservices. This poses a challenge to traditional monitoring solutions as the target processes are no longer statically allocated or tied to particular hosts. Monitoring solutions are now expected to track ephemeral and rapidly scaling sets of containers. The Docker Engine exposes APIs for container metadata, lifecycle events, and key performance metrics. Partner monitoring solutions collect both system and Docker container events and metrics in real time to monitor the health and performance of the customer’s entire infrastructure, applications and services. These solutions are validated by both Docker and the partner company and integrated into a seamless support pipeline that provides customers the world-class support they have become accustomed to when working with Docker.

Check out the latest certified Docker Monitoring Containers that are now available from our partners on Docker Store:

Source

3 Customer Stories You Don’t Want to Miss at DockerCon Barcelona 2018

One of the great things about DockerCon is the opportunity to learn from your peers and find out what they’re doing. We’re pleased to announce several of the sessions in our Customer Stories track. In the track, you’ll hear from your peers who are using Docker Enterprise to modernize legacy applications, build new services and products, and transform the customer experience.

These are just a few of the sessions in the catalog today. You can browse the full list of sessions here. We also have a few more we’ll announce over the coming weeks (some customers just like to keep things under wraps for a little longer).

Desigual Transforms the In-Store Experience with Docker Enterprise Containers Across Hybrid Cloud

Mathias Kriegel, IT Ops Lead and Cloud Architect

Joan Anton Sances, Software Architect

We’re particularly excited to have a local company share their story at DockerCon. In this session, find out how Docker Enterprise has helped Desigual, a global $1 billion fashion retailer headquartered in Barcelona, transform the in-store customer experience with a new “shopping assistant” application.

Not Because We Can, But Because We Have To: Tele2 Containerized Journey to the Cloud
Dennis Ekkelenkamp, IT Infrastructure Manager
Gregory Bohncke, Technical Architect

How does an established mobile phone provider transition from a market strategy of being the cheap underdog to a market challenger leading with high quality product, and awesome features that fearlessly liberates people to live a more connected life? Tele2 Netherlands, a leading mobile service provider, is transforming how it does business with Docker Enterprise.

From Legacy Mainframe to the Cloud: The Finnish Railways Evolution with Docker Enterprise
Niko Virtala, Cloud Architect

Finnish Railways (VR Group) joined us at DockerCon EU 2017 to share how they transformed their passenger reservation system. That project paid off. Today, they have containerized multiple applications, running both on-premises and on AWS. In this session, Finnish Rail will explain the processes and tools they used to build a multi-cloud strategy that lets them take advantage of geo-location and cost advantages to run in AWS, Azure and soon Google Cloud.

You can read more about these and other sessions in the Customer Stories track at DockerCon Barcelona 2018 here.

Source

Reducing Your AWS Spend with AutoSpotting and Rancher

Back in older times, B.C. as in Before Cloud, to put a service live you
had to:

  1. Spend months figuring out how much hardware you needed
  2. Wait at least eight weeks for your hardware to arrive
  3. Allow another four weeks for installation
  4. Then, configure firewall ports
  5. Finally, add servers to config management and provision them

All of this was in an organised company!

The Now

The new norm is to use hosted instances. You can scale these up and down
based on requirements and demand. Servers are available in a matter of
seconds. With containers, you no longer care about actual servers. You
only care about compute resource. Once you have an orchestrator like
Rancher, you don’t need to worry
about maintaining scale or setting where containers run, as Rancher
takes care of all of that. Rancher continuously monitors and assesses
the requirements that you set and does its best to ensure that
everything is running. Obviously, we need some compute resource, but it
can run for days or hours. The fact is, with containers, you pretty much
don’t need to worry.

Reducing Cost

So, how can we take advantage of the flexibility of containers to help
us reduce costs? There are a couple of things that you can do. Firstly
(and this goes for VMs as well as containers), do you need all your
environments running all the time? In a world where you own the kit and
there is no cost advantage to shutting down environments versus keeping
them running, this practice was accepted. But in the on-demand world,
there is a cost associated with keeping things running. If you only
utilise a development or testing environment for eight hours a day, then
you are paying four times as much by keeping it running 24 hours a day!
So, shutting down environments when you’re not using them is one way to
reduce costs. The second thing you can do (and the main reason behind
this post) is using Spot Instances.
Not heard of them? In a nutshell, they’re a way of getting cheap
compute resource in AWS. Interested in saving up to 80% of your AWS EC2
bill? Then keep reading. The challenge with Spot Instances is that they
can terminate after only two minutes’ notice. That causes problems for
traditional applications, but containers handle this more fluid nature
of applications with ease. Within AWS, you can directly request Spot
Instances, individually or in a fleet, and you set a maximum price for
the instance. Once the Spot price rises above your maximum, AWS gives
you two minutes’ notice and then terminates the instance.
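
For example, a one-off request for a single Spot Instance with a price cap can be made from the AWS CLI (the AMI ID, instance type, and price here are placeholders):

aws ec2 request-spot-instances --spot-price "0.10" --instance-count 1 --launch-specification '{"ImageId":"ami-12345678","InstanceType":"m4.large"}'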

AutoSpotting

What if you could have an Auto Scaling Group (ASG) with an On-Demand
Instance type to which you could revert if you breached the Spot
Instance price? Step forward an awesome piece of open-source software
called AutoSpotting. You can find the source and more information on
GitHub. AutoSpotting works by
replacing On-Demand Instances from within an ASG with individual Spot
Instances. AutoSpotting takes a copy of the launch config of the ASG and
starts a Spot Instance (of equivalent spec or more powerful) with the
exact same launch config. Once this new instance is up and running,
AutoSpotting swaps out one of the On-Demand Instances in the ASG with
this new Spot Instance, in the process terminating the more expensive
On-Demand Instance. It will continue this process until it replaces all
instances. (There is a configuration option that allows you to specify
the percentage of the ASG that you want to replace. By default, it’s
100%.) AutoSpotting isn’t application aware. It will only start a
machine with an identical configuration. It doesn’t perform any
graceful halting of applications. It purely replaces a virtual instance
with a cheaper virtual instance. For these reasons, it works great for
Docker containers that are managed by an orchestrator like Rancher. When
a compute instance disappears, then Rancher takes care of maintaining
the scale. To facilitate a smoother termination, I’ve created a helper
service, AWS-Spot-Instance-Helper, that monitors to see if a host is
terminating. If it is, then the helper uses the Rancher evacuate
function to more gracefully transition running containers from the
terminating host. This helper isn’t tied to AutoSpotting, and anyone
who is using Spot Instances or fleets with Rancher can use it to allow
for more graceful terminations. Want an example of what it does to the
cost of running an environment?

Can you guess which day I implemented it? OK, so I said up to 80%
savings but, in this environment, we didn’t replace all instances at the
point when I took this measurement. So, why are we blogging about it
now? Simple: We’ve taken it and turned it into a Rancher Catalog
application so that all our Rancher AWS users can easily consume it.

3 Simple Steps to Saving Money

Step 1

Go to the Catalog > Community and select AutoSpotting.

Step 2

Fill in the AWS Access Key and Secret Key. (These are the only
mandatory fields.) The user must have the following AWS permissions:

  • autoscaling:DescribeAutoScalingGroups
  • autoscaling:DescribeLaunchConfigurations
  • autoscaling:AttachInstances
  • autoscaling:DetachInstances
  • autoscaling:DescribeTags
  • autoscaling:UpdateAutoScalingGroup
  • ec2:CreateTags
  • ec2:DescribeInstances
  • ec2:DescribeRegions
  • ec2:DescribeSpotInstanceRequests
  • ec2:DescribeSpotPriceHistory
  • ec2:RequestSpotInstances
  • ec2:TerminateInstances

Optionally, set the Tag Name. By default, it will look for
spot-enabled. I’ve slightly modified the original code to allow the
flexibility of running multiple AutoSpotting containers in an
environment. This modification allows you to use multiple policies in
the same AWS account. Then, click Launch.

Step 3

Add the tag (user-specified or spot-enabled, with a value of
true) to any AWS ASGs on which you want to save money. Cheap (and
often more powerful) Spot Instances will gradually replace your
instances. To deploy the AWS-Spot-Instance-Helper service, simply
browse to the Catalog > Community and launch the application.

Thanks go out to Cristian Măgherușan-Stanciu and the other
contributors for writing such a great piece of open-source software.

About the Author

Chris Urwin works
as a field engineer for Rancher Labs based out of the UK, helping our
enterprise clients get the most out of Rancher.

Source