Top 5 Post: Improved Docker Container Integration with Java 10

As 2018 comes to a close, we looked back at the top five blogs that were most popular with our readers. For those of you who had difficulties with memory and CPU sizing and usage when running the Java Virtual Machine (JVM) in a container, we are kicking off the week with a blog that explains how to get improved Docker container integration with Java 10 in Docker Desktop (Mac or Windows) and Docker Enterprise environments.

Docker and Java

Many applications that run in a Java Virtual Machine (JVM), including data services such as Apache Spark and Kafka and traditional enterprise applications, are run in containers. Until recently, running the JVM in a container presented problems with memory and CPU sizing and usage that led to performance loss. This was because Java didn’t recognize that it was running in a container. With the release of Java 10, the JVM now recognizes constraints set by container control groups (cgroups). Both memory and CPU constraints can be used to manage Java applications directly in containers. These include:

  • adhering to memory limits set in the container
  • setting available CPUs in the container
  • setting CPU constraints in the container

Java 10 improvements are realized in both Docker Desktop (Mac or Windows) and Docker Enterprise environments.

Container Memory Limits

Until Java 9, the JVM did not recognize memory or CPU limits set on the container without the use of workaround flags. In Java 10, memory limits are automatically recognized and enforced.
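
For context, on Java 8 (update 131 and later) you had to opt in with experimental flags to get similar behavior. A sketch of that workaround, for comparison:

docker container run -it -m512M --entrypoint bash openjdk:8-jdk

$ java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -version | grep MaxHeapSize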

Java defines a server-class machine as having 2 or more CPUs and 2GB or more of memory, and the default heap size is ¼ of the physical memory. For example, a Docker Enterprise Edition installation has 2GB of memory and 4 CPUs. Compare the difference between containers running Java 8 and Java 10. First, Java 8:

docker container run -it -m512M --entrypoint bash openjdk:latest

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
uintx MaxHeapSize := 524288000
openjdk version "1.8.0_162"

The max heap size is roughly 512MB, ¼ of the machine’s 2GB of memory, rather than ¼ of the 512MB limit set on the container: Java 8 ignores the container limit entirely. In comparison, running the same commands on Java 10 shows that the JVM honors the memory limit set on the container, reporting the expected 128MB (¼ of 512MB):

docker container run -it -m512M --entrypoint bash openjdk:10-jdk

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
size_t MaxHeapSize = 134217728
openjdk version "10" 2018-03-20

Setting Available CPUs

By default, each container’s access to the host machine’s CPU cycles is unlimited. Various constraints can be set to limit a given container’s access to the host machine’s CPU cycles. Java 10 recognizes these limits:

docker container run -it --cpus 2 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

By default, all containers get the same proportion of the host machine’s CPU cycles. The proportion can be modified by changing the container’s CPU share weighting relative to the weighting of all other running containers. The proportion only applies when CPU-intensive processes are running; when tasks in one container are idle, other containers can use the leftover CPU time. The actual amount of CPU time will vary depending on the number of containers running on the system. Java 10 recognizes CPU shares as well:

docker container run -it --cpu-shares 2048 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2

The cpuset constraint restricts a container to specific CPUs, and Java 10 recognizes that too:

docker run -it --cpuset-cpus="1,2,3" openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 3
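
If you need to override what the JVM detects, the -XX:ActiveProcessorCount flag (new in Java 10) sets the count explicitly. A quick sketch using the same jshell workflow, passing the flag through to the remote runtime:

docker container run -it --cpus 2 openjdk:10-jdk jshell -R-XX:ActiveProcessorCount=4
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 4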

Allocating memory and CPU

With Java 10, container settings can be used to estimate the allocation of memory and CPUs needed to deploy an application. Let’s assume that the memory heap and CPU requirements for each process running in a container have already been determined and JAVA_OPTS set. For example, suppose an application is distributed across 10 nodes: five nodes require 512MB of memory with 1024 CPU-shares each, and the other five require 256MB with 512 CPU-shares each. Note that one CPU is represented by a share weighting of 1024.

For memory, the heap requirements alone come to 3.84GB, so the application would need roughly 5GB allocated at minimum to leave headroom for non-heap JVM memory.

512MB × 5 = 2.56GB

256MB × 5 = 1.28GB

The application would require 8 CPUs to run efficiently.

1024 shares × 5 = 5120 shares = 5 CPUs

512 shares × 5 = 2560 shares = 2.5 CPUs, rounded up to 3
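
Putting the numbers together, a hedged sketch of how those per-node limits translate into docker flags (the image name here is hypothetical):

# five larger nodes (image name hypothetical)
docker container run -d -m 512M --cpu-shares 1024 my-java-app:latest

# five smaller nodes
docker container run -d -m 256M --cpu-shares 512 my-java-app:latest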

Best practice suggests profiling the application to determine the memory and CPU allocations for each process running in the JVM. However, Java 10 removes the guesswork when sizing containers, preventing out-of-memory errors in Java applications as well as allocating sufficient CPU to process workloads.

Source

Docker Certified Containers From IBM

The Docker Certified Technology Program is designed for ecosystem partners and customers to recognize containers and plugins that excel in quality, collaborative support and compliance. Docker Certification gives enterprises an easy way to run trusted software and components in containers on Docker Enterprise with support from both Docker and the publisher.

As cloud computing continues to transform every business and industry, developers at global enterprises and emerging startups alike are increasingly leveraging container technologies to accelerate how they build modern web, mobile and IoT applications.

IBM has achieved certification of its flagship Db2 database, Websphere-Liberty middleware server and Security Access Manager products now available on Docker Hub. These Certified Containers enable developers to accelerate building cloud-native applications for the Docker Enterprise platform. Developers can deploy these solutions from IBM to any on-premises infrastructure or public cloud. They are designed to assist in the modernization of traditional applications moving from on-premises monoliths into hybrid cloud microservices.

These solutions are validated by both Docker and IBM and are integrated into a seamless support pipeline that provides customers the world-class support they have become accustomed to when working with Docker and IBM.

Check out the latest certified technology available from IBM on Docker Hub.


Source

Speak at DockerCon San Francisco 2019 – Call for Papers is Open

Whether you missed DockerCon EU in Barcelona, or you already miss the fun, connections and learning you experienced at DockerCon – you won’t have to wait long for the next one. DockerCon returns to San Francisco April 29 through May 2, 2019, and the Call for Papers is now open. We are accepting talk submissions through January 18th at 11:59 PST.

Submit a Talk

Attending DockerCon is an awesome experience, but so is speaking at DockerCon – it’s a great way to get to know the community, share ideas and collaborate. Don’t be nervous about proposing your idea – no topic is too small or too big, and for some speakers, DockerCon is their first time speaking publicly. Don’t be intimidated: DockerCon attendees are all looking to level up their skills, connect with fellow container fans and go home inspired to implement new containerization initiatives. Here are some suggested topics from the conference committee:

  • “How To” type sessions for developers or IT teams
  • Case Studies
  • Technical deep dives into container and distributed systems related components
  • Cool New Apps built with Docker containers
  • The craziest thing you have containerized
  • Wild Card – anything and everything!
  • The impact of change – both for organizations and ourselves as individuals and communities.
  • Inspirational stories

Note that our attendees expect practical guidance so vendor sales pitches will not be accepted.

Accepted speakers receive a complimentary conference pass and a speaker’s gift, and participate in a networking reception. Additionally, they receive help preparing their session, access to an online recording of their talk and the opportunity to share their experience with the broader Docker community.
Source

Introducing the New Docker Hub

Today, we’re excited to announce that Docker Store and Docker Cloud are now part of Docker Hub, providing a single experience for finding, storing and sharing container images. This means that:

  • Docker Certified and Verified Publisher Images are now available for discovery and download on Docker Hub
  • Docker Hub has a new user experience

Millions of individual users and more than a hundred thousand organizations use Docker Hub, Store and Cloud for their container content needs. We’ve designed this Docker Hub update to bring together the features that users of each product know and love the most, while addressing known Docker Hub requests around ease of use, repository and team management.

Here’s what’s new:

Repositories

  • View recently pushed tags and automated builds on your repository page
  • Pagination added to repository tags
  • Improved repository filtering when logged in on the Docker Hub home page

Organizations and Teams

  • As an organization Owner, see team permissions across all of your repositories at a glance.
  • Add existing Docker Hub users to a team via their email (if you don’t remember their Docker ID)

New Automated Builds

  • Speed up builds using Build Caching
  • Add environment variables and run tests in your builds
  • Add automated builds to existing repositories

Note: For Organizations, GitHub & BitBucket account credentials will need to be re-linked to your organization to leverage the new automated builds. Existing Automated Builds will be migrated to this new system over the next few months. Learn more

Improved Container Image Search

  • Filter by Official, Verified Publisher and Certified images, guaranteeing a level of quality in the Docker images listed by your search query
  • Filter by categories to quickly drill down to the type of image you’re looking for

Existing URLs will continue to work, and you’ll automatically be redirected where appropriate. No need to update any bookmarks.

Verified Publisher Images and Plugins

Verified Publisher Images are now available on Docker Hub. Similar to Official Images, these images have been vetted by Docker. While Docker maintains the Official Images library, Verified Publisher and Certified Images are provided by our third-party software vendors. Interested vendors can sign up at https://goto.docker.com/Partner-Program-Technology.html.

Certified Images and Plugins

Certified Images are also now available on Docker Hub. Certified Images are a special category of Verified Publisher images that pass additional Docker quality, best practice, and support requirements.

  • Tested and supported on Docker Enterprise platform by verified publishers
  • Adhere to Docker’s container best practices
  • Pass a functional API test suite
  • Complete a vulnerability scanning assessment
  • Provided by partners with a collaborative support relationship
  • Display a unique quality mark “Docker Certified”

Source

Simplifying Kubernetes with Docker Compose and Friends

Today we’re happy to announce we’re open sourcing our support for using Docker Compose on Kubernetes. We’ve had this capability in Docker Enterprise for a little while but as of today you will be able to use this on any Kubernetes cluster you choose.

Compose on Kubernetes

Why do I need Compose if I already have Kubernetes?

The Kubernetes API is really quite large. There are more than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota. This can lead to a verbosity in configuration, which then needs to be managed by you, the developer. Let’s look at a concrete example of that.

The Sock Shop is the canonical example of a microservices application. It consists of multiple services using different technologies and backends, all packaged up as Docker images. It also provides example configurations using different tools, including both Compose and raw Kubernetes configuration. Let’s have a look at the relative sizes of those configurations:

$ git clone https://github.com/microservices-demo/microservices-demo.git
$ cd microservices-demo/deploy/kubernetes/manifests
$ (Get-ChildItem -Recurse -File | Get-Content | Measure-Object -line).Lines
908
$ cd ../../docker-compose
$ (Get-Content docker-compose.yml | Measure-Object -line).Lines
174

Describing the exact same multi-service application using just the raw Kubernetes objects takes more than five times as much configuration as with Compose. That’s not just an upfront cost to author – it’s also an ongoing cost to maintain. The Kubernetes API is amazingly general purpose – it exposes low-level primitives for building the full range of distributed systems. Compose meanwhile isn’t an API but a high-level tool focused on developer productivity. That’s why combining them makes sense. For the common case of a set of interconnected web services, Compose provides an abstraction that simplifies Kubernetes configuration. For everything else you can still drop down to the raw Kubernetes API primitives. Let’s see all that in action.

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.

To install the controller manually on any Kubernetes cluster, see the full documentation for the current installation instructions.

Next let’s write a simple Compose file:

version: "3.7"
services:
  web:
    image: dockerdemos/lab-web
    ports:
      - "33000:80"
  words:
    image: dockerdemos/lab-words
    deploy:
      replicas: 3
      endpoint_mode: dnsrr
  db:
    image: dockerdemos/lab-db

We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running…
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running

We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:

$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/db-85849797f6-bhpm8 1/1 Running 0 57s
pod/web-7974f485b7-j7nvt 1/1 Running 0 57s
pod/words-8fd6c974-44r4s 1/1 Running 0 57s
pod/words-8fd6c974-7c59p 1/1 Running 0 57s
pod/words-8fd6c974-zclh5 1/1 Running 0 57s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db ClusterIP None <none> 55555/TCP 57s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
service/web ClusterIP None <none> 55555/TCP 57s
service/web-published LoadBalancer 10.102.236.49 localhost 33000:31910/TCP 57s
service/words ClusterIP None <none> 55555/TCP 57s

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/db 1 1 1 1 57s
deployment.apps/web 1 1 1 1 57s
deployment.apps/words 3 3 3 3 57s

NAME DESIRED CURRENT READY AGE
replicaset.apps/db-85849797f6 1 1 1 57s
replicaset.apps/web-7974f485b7 1 1 1 57s
replicaset.apps/words-8fd6c974 3 3 3 57s

It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:

$ kubectl get stack
NAME STATUS PUBLISHED PORTS PODS AGE
words Running 33000 5/5 4m
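
Because the Stack is served by the Kubernetes API like any other resource, the standard kubectl verbs should apply to it as well. A sketch:

$ kubectl describe stack words   # inspect the stack's spec and status
$ kubectl delete stack words     # tear down everything the stack created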

Integration with other Kubernetes tools

Because Stack is now a native Kubernetes object, you can work with it using other Kubernetes tools. As an example, save the following as `stack.yaml`:

kind: Stack
apiVersion: compose.docker.com/v1beta2
metadata:
  name: hello
spec:
  services:
  - name: hello
    image: garethr/skaffold-example
    ports:
    - mode: ingress
      target: 5678
      published: 5678
      protocol: tcp

You can use a tool like Skaffold to have the image automatically rebuild and the Stack automatically redeployed whenever you change any of the details of your application. This makes for a great local inner-loop development experience. The following `skaffold.yaml` configuration file is all you need.

apiVersion: skaffold/v1alpha5
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - image: garethr/skaffold-example
  local:
    useBuildkit: true
deploy:
  kubectl:
    manifests:
    - stack.yaml
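
With `stack.yaml` and `skaffold.yaml` saved side by side, starting the watch-build-redeploy loop is then a single command (assuming Skaffold is installed locally):

$ skaffold dev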

The future

We already have some thoughts about a Helm plugin to make describing your application with Compose and deploying with Helm as easy as possible. We have lots of other ideas for helping to simplify the developer experience of working with Kubernetes too, without losing any of the power of the platform. We also want to work with the wider Cloud Native community, so if you have ideas and suggestions please let us know.

Kubernetes is designed to be extended, and we hope you like what we’ve been able to release today. If you’re one of the millions of Compose users you can now more easily move to and manage your applications on Kubernetes. If you’re a Kubernetes user struggling with too much low-level configuration then give Compose a try. Let us know in the comments what you think, and head over to GitHub to try things out and even open your first PR:

Source

Docker App and CNAB

Docker App is a new tool we spoke briefly about back at DockerCon US 2018. We’ve been working on `docker-app` to make container applications simpler to share and easier to manage across different teams and between different environments, and we open sourced it so you can already download Docker App from GitHub at https://github.com/docker/app.

In talking to others about problems they’ve experienced sharing and collaborating on the broad area we call “applications” we came to a realisation: it’s a more general problem that others have been working on too. That’s why we’re happy to collaborate with Microsoft on the new Cloud Native Application Bundle (CNAB) specification.

Multi-Service Distributed Applications

Today’s cloud native applications typically use different technologies, each with their own toolchain. Maybe you’re using ARM templates and Helm charts, or CloudFormation and Compose, or Terraform and Ansible. There is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications.

CNAB is an open source, cloud-agnostic specification for packaging and running distributed applications that aims to solve some of these problems. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format.

The draft specification is available at cnab.io and we’re actively looking both for folks interested in contributing to the spec itself, and to people interested in building tools around the specification. The latest release of Docker App is one such tool that implements the current CNAB spec. That means it can be used to both build CNAB bundles for Compose (which can then be used with any other CNAB client), and also to install, upgrade and uninstall any other CNAB bundle.

Sharing CNAB bundles on Docker Hub

One of the limitations of standalone Compose files is that they cannot be shared on Docker Hub or Docker Trusted Registry. Docker App solves this issue too. Here’s a simple Docker application which launches a very simple Prometheus stack:

version: 0.1.0
name: monitoring
description: A basic prometheus stack
maintainers:
  - name: Gareth Rushgrove
    email: garethr@docker.com

---
version: '3.7'

services:
  prometheus:
    image: prom/prometheus:${versions.prometheus}
    ports:
      - ${ports.prometheus}:9090

  alertmanager:
    image: prom/alertmanager:${versions.alertmanager}
    ports:
      - ${ports.alertmanager}:9093

---
ports:
  prometheus: 9090
  alertmanager: 9093
versions:
  prometheus: latest
  alertmanager: latest

With that saved as `monitoring.dockerapp` we can now build a CNAB and share that on Docker Hub.
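
Before pushing, it can be useful to check how the parameters substitute into the final Compose file. A sketch using the `render` command, overriding one value (flag behavior as of the docker-app release at the time):

$ docker-app render monitoring.dockerapp --set ports.prometheus=9091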

$ docker-app push --namespace <your-namespace>

Now on another machine we can still interact with the shared application. For instance let’s use the `inspect` command to get information about our application:

$ docker-app inspect <your-namespace>/monitoring:0.1.0
monitoring 0.1.0

Maintained by: Gareth Rushgrove <garethr@docker.com>

A basic prometheus stack

Services (2) Replicas Ports Image
———— ——– —– —–
prometheus 1 9090 prom/prometheus:latest
alertmanager 1 9093 prom/alertmanager:latest

Parameters (4) Value
————– —–
ports.alertmanager 9093
ports.prometheus 9090
versions.alertmanager latest
versions.prometheus latest

All the information from the Compose file is stored with the CNAB on Docker Hub, and as you may have noticed, it’s also parameterized: values can be substituted at runtime to fit the deployment requirements. We can install the application directly from Docker Hub as well:

docker-app install <your-namespace>/monitoring:0.1.0 --set ports.alertmanager=9095
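
Since Docker App implements the full CNAB lifecycle (install, upgrade and uninstall, as noted above), the installation can later be removed the same way. A sketch, assuming the installation name assigned above:

$ docker-app uninstall monitoring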

Installing a Helm chart using Docker App

One question that has come up in the conversations we’ve had so far is how `docker-app` and now CNAB relates to Helm charts. The good news is that they all work great together! Here is an example using `docker-app` to install a CNAB bundle that packages a Helm chart. The following example uses the `hellohelm` example from the CNAB example bundles.

$ docker-app install -c local bundle.json
Do install for hellohelm
helm install --namespace hellohelm -n hellohelm /cnab/app/charts/alpine
NAME: hellohelm
LAST DEPLOYED: Wed Nov 28 13:58:22 2018
NAMESPACE: hellohelm
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME AGE
hellohelm-alpine 0s

Next steps

If you’re interested in the technical details of the CNAB specification, either to see how it works under the hood or to maybe get involved in the specification work or building tools against it, you can find the spec at cnab.io.

If you’d like to get started building applications with Docker App you can download the latest release from github.com/docker/app and check out some of the examples provided in the repository.
Source

Announcing the Docker Customer Innovation Awards

We are excited to announce the first annual Docker Customer Innovation Award winners at DockerCon Barcelona today! We launched the awards this year to recognize customers who stand out in their adoption of Docker Enterprise platform to drive transformation within IT and their business.

38 companies were nominated, all of whom have spoken publicly about their containerization initiatives recently, or plan to soon. From looking at so many excellent nominees, we realized there were really two different stories — so we created two award categories. In each category, we have a winner and three finalists.

Business Transformation

Customers in this category have developed company-wide initiatives aimed at transforming IT and their business in a significant way, with Docker Enterprise as a key part of it. They typically started their journey two or more years ago and have containerized multiple applications across the organization.

WINNER:

FINALISTS:

  • Bosch built a global platform that enables developers to build and deliver new software solutions and updates at digital speed.
  • MetLife modernized hundreds of traditional applications, driving 66 percent cost savings and creating a self-funding model to fuel change and innovation, cutting new product time to market by two-thirds.

Rising Stars

Customers in this category are early in their containerization journey and have already leveraged their first project with Docker Enterprise as a catalyst to innovate their business — often creating new applications or services.

WINNER:

  • Desigual built a brand new in-store shopping experience app in less than 5 months to connect customers and associates, creating an outstanding brand and shopping experience.

FINALISTS:

  • Citizens Bank (Franklin American Mortgage) created a dedicated innovation team that sparked cultural change at a traditional mortgage company, allowing it to bring new products to market in weeks or months.
  • The Dutch Ministry of Justice evaluated Docker Enterprise as a way to accelerate application development, which helped spark an effort to modernize juvenile custodian services from whiteboards and sticky notes to a mobile app.

We want to give a big thanks to the winners and finalists, and to all of our remarkable customers who have started innovation journeys with Docker.

We’ve opened the nomination process for 2019 since we will be announcing winners at DockerCon 2019 on April 29-May 2. If you’re interested in submitting or want to nominate someone else, you can learn how here.


Source

Introducing Docker Desktop Enterprise

Nearly 1.4 million developers use Docker Desktop every single day because it is the simplest and easiest way to do container-based development. Docker Desktop provides the Docker Engine with Swarm and Kubernetes orchestrators right on the desktop, all from a single install. While this is great for an individual user, in enterprise environments administrators often want to automate the Docker Desktop installation and ensure everyone on the development team has the same configuration, one that follows enterprise requirements and supports creating applications based on architectural standards.

Docker Desktop Enterprise is a new desktop offering that is the easiest, fastest and most secure way to create and deliver production-ready containerized applications. Developers can work with frameworks and languages of their choice, while IT can securely configure, deploy and manage development environments that align to corporate standards and practices. This enables organizations to rapidly deliver containerized applications from development to production.

Enterprise Manageability That Helps Accelerate Time-to-Production

Docker Desktop Enterprise provides a secure way to configure, deploy and manage developer environments while enforcing safe development standards that align to corporate policies and practices. IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production.

Key new features for IT:

  • Packaged as standard MSI (Win) and PKG (Mac) distribution files that work with existing endpoint management tools with lockable settings via policy files
  • Present developers with customized and approved application templates, ready for coding

Enterprise Deployment & Configuration Packaging

Docker Desktop Enterprise enables IT desktop admins to deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools using standard MSI and PKG files. No manual intervention or extra configuration from developers is required and desktop administrators can enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience.
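
As a sketch of what that deployment looks like with standard OS tooling (the installer file names here are hypothetical):

# Windows, from an elevated prompt: silent MSI install (file name hypothetical)
msiexec /i "DockerDesktopEnterprise.msi" /quiet /norestart

# macOS: silent PKG install (file name hypothetical)
sudo installer -pkg DockerDesktopEnterprise.pkg -target /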

Application Templates

Docker Application templates

Application architects can provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs. Together, application teams and IT can implement consistent security and development practices across the entire software supply chain, from the developers’ desktops all the way to production.

Increase Developer Productivity and Ship Production-ready Containerized Applications

For developers, Docker Desktop Enterprise is the easiest and fastest way to build production-ready containerized applications working with frameworks and languages of choice and targeting every platform. Developers can rapidly innovate by leveraging company-provided application templates that instantly replicate production-approved application configurations on the local desktop.

Key new features for developers:

  • Configurable version packs instantly replicate production environment configurations on the local desktop
  • Application Designer interface allows for template-based workflows for creating containerized applications – no Docker CLI commands are required to get started

Configurable Version Packs

Desktop Enterprise Version Packs

Docker Desktop has Docker Engine and Kubernetes built-in and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Application Designer

Choice of GUI or CLI

The Application Designer is a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards. And even if you’ve never launched a container before, the Application Designer interface provides the foundational container artifacts and your organization’s skeleton code, getting you started with containers in minutes. Plus, Docker Desktop Enterprise integrates with your choice of development tools, whether you prefer an IDE or a text editor and command line interfaces.

The Docker Desktop Products

Docker Desktop Enterprise is a new addition to our desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise:

Desktop Comparison Table

To learn more about Docker Desktop Enterprise:

  • Sign up to learn more about Docker Desktop Enterprise as we approach general availability
  • Watch the livestreams of the DockerCon EU keynotes, Tuesday from 9:00-11:00 CET and Wednesday from 9:30-11:00 CET (replays will also be available)
  • Download Docker Desktop Community and build your first containerized application in minutes [Windows | macOS]


Source

First impressions with DigitalOcean’s Kubernetes Engine

In this guide I’ll set up a Kubernetes cluster with DigitalOcean’s new Kubernetes Engine using CLI tooling and then work out the cost of the cluster running a Cloud Native workload – OpenFaaS. OpenFaaS brings portable Serverless Functions to Kubernetes for any programming language.

Kubernetes is everywhere

Since James Governor from RedMonk declared that Kubernetes had won the container orchestration battle we’ve seen cloud and service providers scramble to ship their own managed Kubernetes services – to win mindshare and to get their share of the pie.

Kubernetes won – so now what? https://t.co/JeZwBEWNHy

— RedMonk (@redmonk) May 25, 2018

One of the earliest and most complete Kubernetes services is probably Google Kubernetes Engine (GKE), which has since been followed by a number of newer offerings from other cloud and service providers.

Kubernetes engines

The point of a managed Kubernetes service or engine as I see it is to abstract away the management of servers and the day to day running of the cluster such as requesting, joining and configuring nodes. In some senses – Kubernetes delivered as a Service is a type of serverless and each product gives a varying level of control and visibility over the nodes provisioned.

For instance, with VMware’s Cloud PKS you have no way to even see the server inventory, and the cluster is sized dynamically based upon usage. Spotinst released a product called Ocean last week which also focuses on hiding the servers backing your workloads. The Azure team at Microsoft is combining their Azure Container Instances (ACI) with Virtual Kubelet to provide a “node-less” experience.

The areas I would score a Kubernetes Engine on are:

  • Ease of use – what is the installation experience, bootstrap time and tooling like?
  • Reliability – does it work and stand up to testing?
  • Documentation – is it clear what to do when things go wrong?
  • Management interface – are a CLI and UI available, can I automate them?
  • Effective cost – is this cost effective and good value for money?

Anything else I count as a bonus such as node auto-scaling and native LoadBalancer support within Kubernetes.

Certification is also important for running in production, but if the above points are covered some divergence may be acceptable.

Get Kubernetes

At the time of writing, Kubernetes support is in beta and by invitation only. We saw a similar pattern with other Kubernetes services, so this seems to be normal.

Note: please bear in mind that this post is looking at pre-release beta product. Some details may change between now and GA including the CLI which is an early version.

Use the UI

To provision the cluster we can pick between the CLI or the familiar UI dashboard which gains a Kubernetes tab.

DigitalOcean Console

Here you can see three clusters I provisioned for testing OpenFaaS with my team at VMware and for a demo at Serverless Computing London.

From here we can create a new cluster using the UI.

Region

Initially you must pick a Region and a name for the cluster.

Node pool

Then pick the price you want to pay per month by configuring one or more Kubernetes Node pools. The suggestion is 3 nodes at 5 USD/month, working out at 15 USD per month. This may be suitable for a simple workload, but anyone who has used Kubernetes for real work will know that 1GB of RAM is not enough to be productive.

2GB RAM with 1 vCPU costs 10 USD/month (3×10 = 30 USD) and 4GB RAM with 2 vCPU comes in at 20 USD/month (3×20 = 60 USD). This is probably the minimum you want to go with to run a serious application.

Each node gets a public IPv4 IP address, so an IngressController could be run on each node then load-balanced via DNS for free, or we could opt to use a DO load-balancer (not a Kubernetes-native one) at an additional fee.

Effective cost: 3/5

It is possible to create multiple node pools, so if you have lighter workloads you could assign them to cheaper machines.

As well as Standard Nodes we can pick from Optimized Nodes (best in class) and Flexible Nodes. An Optimized Node with best-in-class 2x vCPU and 4GB RAM will set you back 40 USD each (2×40 = 80 USD). This seems to compare favourably with the other services mentioned, but could be off-putting to newcomers. There also doesn’t seem to be any charge for the master node.

Within a minute or two of hitting the blue button we can already download the .kube/config file from the UI and connect to our cluster to deploy code.

The Kubernetes service from @digitalocean is looking to be one of the easiest and quickest I’ve used so far. Looking forward to writing some guides on this and seeing what you all make of it too. pic.twitter.com/0FsMIzEzwF

— Alex Ellis (@alexellisuk) November 16, 2018

Ease of use for the UI: 5/5

Use the CLI

Up to now the doctl CLI could be used to do most of what you can do in the console – apart from provisioning Kubernetes clusters.

The @DigitalOcean CLI (doctl) now supports their managed Kubernetes Service -> https://t.co/r2muCj8EHz 🎉👨‍💻💻👩‍💻

— Alex Ellis (@alexellisuk) December 3, 2018

Well that all changed today and now in v1.12.0 you can turn on an experimental flag and manage those clusters.

The typical flow for using the doctl CLI involves downloading a static binary, unpacking it yourself, placing it in the right place, logging into the UI and generating an access token and then running doctl auth init. Google Cloud does this better by opening a web-browser to get an access token over OAuth.
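
That flow looks roughly like this (the token is generated in the API section of the UI):

$ doctl auth init
# paste the access token generated in the UI when prompted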

Create the cluster with the following command:

Usage:
  doctl kubernetes create [flags]

Aliases:
  create, c

Flags:
      --name string         cluster name (required)
      --node-pools value    cluster node pools in the form "name=your-name;size=droplet_size;count=5;tag=tag1;tag=tag2" (required) (default [])
      --region string       cluster region location, example value: nyc1 (required)
      --tag-names value     cluster tags (default [])
      --version string      cluster version (required)

Global Flags:
  -t, --access-token string   API V2 Access Token
  -u, --api-url string        Override default API V2 endpoint
  -c, --config string         config file (default is $HOME/.config/doctl/config.yaml)
      --context string        authentication context name
  -o, --output string         output format [text|json] (default "text")
      --trace                 trace api access
  -v, --verbose               verbose output

The --node-pools flag may be better split out into multiple individual flags rather than separated with ;. I found picking the slug size a bit cryptic too – the only ways I could find the sizes listed were through the API and a release announcement, which may or may not be the most recent. This could be improved for GA.

I also had to enter --version but didn’t know what format the string should be in, so I copied the exact value from the UI. This is early days for the team, so I would expect this to improve before GA.

$ doctl kubernetes create --name ae-openfaas \
  --node-pools="name=main;size=s-2vcpu-4gb;count=3;tag=tutorial" \
  --region="lon1" \
  --tag-names="tutorial" \
  --version="1.11.1-do.2"

ID Name Region Version Status Endpoint IPv4 Cluster Subnet Service Subnet Tags Created At Updated At Node Pools
82f33a60-02af-4e94-a550-dd5afd06cf0e ae-openfaas lon1 1.11.1-do.2 provisioning 10.244.0.0/16 10.245.0.0/16 tutorial,k8s,k8s:82f33a60-02af-4e94-a550-dd5afd06cf0e 2018-12-03 19:38:27 +0000 UTC 2018-12-03 19:38:27 +0000 UTC main

The command is asynchronous and does not block, so now we need to poll the CLI for completion.

We can type in doctl kubernetes list or doctl kubernetes get ae-openfaas.
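
Since provisioning takes a couple of minutes, a small shell loop (a sketch) saves re-typing:

# poll until the cluster reports a running state
until doctl kubernetes get ae-openfaas | grep -qi running; do sleep 15; done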

Once we see the Running state, type in doctl kubernetes kubeconfig ae-openfaas and save the contents into a file.

In the future I’d like to see this config merged optionally into .kube/config like we see with Minikube or VMware’s Cloud PKS CLI.

$ doctl kubernetes kubeconfig ae-openfaas > config
$ export KUBECONFIG=config
$ kubectl get node
NAME STATUS ROLES AGE VERSION
kind-knuth-3o9d Ready <none> 32s v1.11.1
kind-knuth-3o9i Ready <none> 1m v1.11.1
kind-knuth-3o9v Ready <none> 1m v1.11.1

We now have a config on the local computer and we’re good to go!

Ease of use for the CLI: 3/5

Total ease of use: 4/5

I would give GKE’s Management interface 5/5 – DO would have got 2/5 without the CLI. Now that the CLI is available and we can use it to automate clusters I think this goes up to 3/5 leaving room to grow.

Deploying a workload

In order to figure out a Reliability score we need to deploy an application and run it for some time. I deployed OpenFaaS live during a demo at Serverless Computing London and ran the Colorizer from the Function Store – this was on one of the cheaper nodes so ran slower than I would have expected, but was feature complete on a budget.

I then set up OpenFaaS Cloud with GitLab – this is a more demanding task as it means running a container builder and responding to events from GitLab to clone, build and deploy new OpenFaaS functions to the cluster. Again this held up really well using 3x4Gb 2vCPU nodes with no noticeable slow-down.

Reliability 4/5

You can deploy OpenFaaS with helm here: OpenFaaS helm chart

In this Tweet from my talk at goto Copenhagen you can see the conceptual architecture for OpenFaaS on Kubernetes which bundles a Serverless Function CRD & Operator along with NATS Streaming for asynchronous execution and Prometheus for built-in metrics and auto-scaling.

Serverless beyond the hype by @alexellisuk. Donating to @Bornecancerfond in the live demo 💰💸 #serverless pic.twitter.com/n1rzcqRByd

— Martin Jensen (@mrjensens) November 19, 2018

Steps:

  • Install helm and tiller.
  • Create the OpenFaaS namespaces
  • Generate a password for the API gateway and install the OpenFaaS helm chart, passing the option --set serviceType=LoadBalancer (see the sketch after this list)
  • Install OpenFaaS CLI and log into the API gateway
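
A sketch of those steps, following the OpenFaaS helm chart documentation of the time (chart values may have changed since):

# create the openfaas and openfaas-fn namespaces
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

# generate a password and store it as the gateway's basic-auth secret
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth --from-literal=basic-auth-user=admin --from-literal=basic-auth-password="$PASSWORD"

# install the chart with a LoadBalancer in front of the gateway
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm upgrade openfaas --install openfaas/openfaas --namespace openfaas --set basic_auth=true --set functionNamespace=openfaas-fn --set serviceType=LoadBalancer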

This is the point at which a “normal” Kubernetes engine would give us a LoadBalancer in the openfaas namespace. We would then query its state to find a public IP address. DigitalOcean gets full marks here because it will respond to this event and provision a load balancer which is around 10 USD / month – cheaper than a traditional cloud provider.

Type in the following and look for a public IP for gateway-external in the EXTERNAL-IP field.

$ kubectl get svc -n openfaas
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager ClusterIP 10.245.45.200 <none> 9093/TCP 1m
gateway ClusterIP 10.245.83.10 <none> 8080/TCP 1m
gateway-external LoadBalancer 10.245.16.17 188.166.136.202 8080:32468/TCP 1m
nats ClusterIP 10.245.208.70 <none> 4222/TCP 1m
prometheus ClusterIP 10.245.105.120 <none> 9090/TCP 1m

Now set your OPENFAAS_URL as follows: export OPENFAAS_URL=http://188.166.136.202:8080 and do the login step:

$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Note you can also add the export command to your bash profile ~/.bash_profile file to have it set automatically on every session.

You’ll now be able to open the UI using the address above or deploy a function using the CLI.

Deploy certinfo which can check a TLS certificate for a domain:

$ faas-cli store deploy certinfo

Check the status:

$ faas-cli list -v
$ faas-cli describe certinfo

Invoke the function and check a TLS certificate:

$ echo -n www.openfaas.com | faas-cli invoke certinfo

For me this seemed to take less than 10 seconds from deployment to getting a successful response from the function.

We can even scale the function to zero and see it come back to life:

$ kubectl scale deploy/certinfo -n openfaas-fn --replicas=0

Give it a few seconds for the function to be torn-down, then invoke it again. You’ll see it block – scale up and then serve the request:

$ echo -n cli.openfaas.com | faas-cli invoke certinfo

The function itself will take 1-2 seconds to execute since it works with a remote website. You can try it out with one of your own functions or find out how to enable the OpenFaaS idler component by reading the docs.

Earlier in the year, in August, Richard Gee wrote up a guide and supporting automation with Ansible to create a single-node development cluster with Kubernetes and OpenFaaS. What I like about the new service is that we can now get the same result, but better, with just a few CLI commands.

TLS with LetsEncrypt

This is the point at which I’d usually tell you to follow the instructions for LetsEncrypt and cert-manager to setup HTTPS/TLS for your OpenFaaS gateway. I’m not going to need to go there because DigitalOcean can automate all of this for us if we let them take control of our domain.

tls-domains

The user experience for cert-manager is, in my opinion, firmly set at expert level, so this kind of automation will be welcomed by developers. It does come at the cost of handing control of your domain over to DigitalOcean, however.

Tear down the cluster

A common use-case for Kubernetes services is running CI and other automation testing. We can now tear down the whole cluster in the UI or CLI:

$ doctl kubernetes delete ae-openfaas --force

That’s it – we’ve now removed the cluster completely.

Wrapping up

As a maintainer, developer, architect and operator – it’s great to see strong Kubernetes offerings appearing. Your team or company may not be able to pick the best fit if you already take all your services from a single vendor, but I hope that the current level of choice and quality will drive down price and drive up usability wherever you call home in the cloud.

If you have the choice to run your workloads wherever you like, or are an aspiring developer then the Kubernetes service from DigitalOcean provides a strong option with additional value adds and some high scores against my rating system. I hope to see the score go up even more with some minor refinements around the CLI ready for the GA.

If taken in isolation my overall rating for this new Kubernetes service from DigitalOcean would be 4/5, but when comparing it to the much more mature, feature-rich Kubernetes engines we have to take this rating in context.

Source

Announcing Cloud Native Application Bundle (CNAB)

As more organizations pursue cloud-native applications and infrastructures for creating modern software environments, it has become clear that there is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications. Real-world applications can now span on-premises infrastructure and cloud-based services, requiring multiple tools like Terraform for the infrastructure, Helm charts and Docker Compose files for the applications, and CloudFormation or ARM templates for the cloud-services. Each of these need to be managed separately.

To address this problem, Microsoft, in collaboration with Docker, is announcing the Cloud Native Application Bundle (CNAB) – an open source, cloud-agnostic specification for packaging and running distributed applications. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format. The CNAB specification lets you define resources that can be deployed to any combination of runtime environments and tooling including Docker Engine, Kubernetes, Helm, automation tools and cloud services.

Docker is the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially CNAB support will be released as part of our docker-app experimental tool for building, packaging and managing cloud-native applications. Docker lets you package CNAB bundles as Docker images, so you can distribute and share through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

The draft specification is available at cnab.io and we’re actively looking for contributors to the spec itself and people interested in building tools around the specification. Docker will be contributing to the CNAB specification.

Source