Bringing CoreOS technology to Red Hat OpenShift to deliver a next-generation automated Kubernetes platform

In the months since CoreOS was acquired by Red Hat, we’ve been building on our vision of helping companies achieve greater operational efficiency through automation. Today at Red Hat Summit we’ve outlined our roadmap for how we plan to integrate the projects and technologies started at CoreOS with Red Hat’s, bringing software automation expertise to customers and the community.

Enterprise Kubernetes users can greatly benefit from the planned addition of many popular Tectonic features to Red Hat OpenShift Container Platform, the industry’s most comprehensive enterprise Kubernetes platform. Quay, the leading container registry, is now backed by Red Hat as Red Hat Quay. Container Linux will continue to provide a free, fast-moving, and automated container host, and is expected to provide the basis for new operating system projects and offerings from Red Hat. And open source projects including etcd, Ignition, dex, Clair, Operators and more will continue to thrive as part of Red Hat’s commitment to driving community innovation around containers and Kubernetes.

Essentially, CoreOS technologies are being woven into the very fabric of Red Hat’s container-native products and projects and we are excited to continue delivering on the vision to make automated operations a reality.

The original container-native Linux

Since Red Hat’s acquisition of CoreOS was announced, we have received questions about the fate of Container Linux. CoreOS’s first project, and initially its namesake, pioneered the lightweight, automatically updated, “over-the-air” container-native operating system that quickly rose in popularity for running the world’s containers.

With the acquisition, Container Linux will be reborn as Red Hat CoreOS, a new entry into the Red Hat ecosystem. Red Hat CoreOS will be based on Fedora and Red Hat Enterprise Linux sources and is expected to ultimately supersede Atomic Host as Red Hat’s immutable, container-centric operating system.

Red Hat CoreOS will provide the foundation for Red Hat OpenShift Container Platform, Red Hat OpenShift Online, and Red Hat OpenShift Dedicated. Red Hat OpenShift Container Platform will also, of course, continue to support Red Hat Enterprise Linux for those who prefer its lifecycle and packaging as the foundation for their Kubernetes deployments.

Current Container Linux users can rest easy that Red Hat plans to continue investing in the operating system and its community. The project remains an important base for container-based environments, delivering automated updates with strong security capabilities, and as part of our commitment and vision we plan to support Container Linux as you know it today for the community and Tectonic users alike.

Integrating Tectonic Automated Operations Into OpenShift

CoreOS Tectonic was created with a vision of a fully automated container platform that would relieve many of the burdens of day-to-day IT operations. This vision will now help craft the next generation of Red Hat OpenShift Container Platform, providing an advanced container experience for operators and developers alike.

With automated operations coming to OpenShift, IT teams will be able to use the automated upgrades of Tectonic paired with the reliability, support, and extensive application development capabilities of Red Hat OpenShift Container Platform. This makes managing large Kubernetes deployments easier without sacrificing other enterprise needs, including platform stability or continued support for existing IT assets.

We believe this future integrated platform will help to truly change the way IT teams deliver applications by providing speed to market through consistent deployment methods and automated operations throughout the stack.

In the meantime, current Tectonic customers will continue to receive support and updates for the platform. They can also have confidence that they will be able to transition to Red Hat OpenShift Container Platform in the future with little to no disruption, as almost all Tectonic features will be retained in Red Hat OpenShift Container Platform.

Automated Applications via the Operator Framework

We are also focusing on automating the application layer of the stack. At KubeCon we introduced and open sourced the Operator Framework. Today we are showing how we plan to put Operators into practice. Red Hat is working on a future enhancement that will enable software partners to test and validate their Operators for Red Hat OpenShift Container Platform. More than 60 software partners have committed to supporting the Kubernetes Operator Framework initiative introduced by Red Hat, including Couchbase, Dynatrace, Black Duck Software and Crunchy Data, among others.

Our aim is to make it easier for ISVs to bring cloud services, including messaging, big data, analytics, and more, to the hybrid cloud and to address a broader set of enterprise deployment models while avoiding cloud lock-in. Eventually, Red Hat plans to extend the Red Hat Container Certification with support for Operators as tested and validated Kubernetes applications on Red Hat OpenShift. With the Operator Framework in place, software partners have a more consistent, common experience for delivering services on Red Hat OpenShift, enabling ISVs to bring their offerings to market more quickly on any cloud infrastructure where Red Hat OpenShift runs.

The Quay container registry becomes Red Hat Quay

Quay, the container registry, will also continue to live on in the Red Hat container portfolio.

While OpenShift provides an integrated container registry, customers who require more comprehensive enterprise-grade registry capabilities now have the option to consume Quay Enterprise and Quay.io from Red Hat. Quay includes automated geographic replication, integrated security scanning with Clair, an image time machine for viewing history, rollbacks, automated pruning, and more. Red Hat Quay is available both as an enterprise software solution and as a hosted service at Red Hat Quay.io, with plans for future enhancements and continued integration with Red Hat OpenShift in future releases.

With CoreOS now part of the Red Hat family, we’ve been busy working together to bring more capabilities to enterprise customers, and more muscle to community open source projects. We’re excited to work alongside you with our Red Hat fedoras on to help automate your infrastructure, all the way from the stack to the application layer.

Learn more at Red Hat Summit

Join us at Red Hat Summit in San Francisco or view the Red Hat Summit livestream to learn more. Red Hat is also hosting a press conference live from Red Hat Summit at 11 a.m. PT today to talk about this integration and other news from the event. The press conference is open to all – join or listen to a replay here.

Source

Getting Started with Amazon EKS – Provisioning and Adding Clusters

This is a simple tutorial on how to launch a new Amazon EKS cluster from scratch and attach to Codefresh.

Have an existing Kubernetes cluster you want to add? Please see the docs.

The source code for this tutorial can be found here:
https://github.com/codefresh-io/eks-installer

Overview

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is the latest product release from AWS, offering fully-hosted Kubernetes clusters.

This is great news for AWS users; however, it is not immediately obvious how EKS fits in with the various other AWS services.

To help others get started with Amazon EKS, I’ve put together a Codefresh pipeline setup.yml that does the following:

  1. Bootstraps an EKS cluster and VPC in your AWS account using Terraform (sketched just after this list)
  2. Saves the Terraform statefile in a Codefresh context
  3. Creates some base Kubernetes resources
  4. Initializes Helm in the cluster
  5. Adds the cluster to your Codefresh account
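For orientation, a pipeline step in this style looks roughly like the following. This is a simplified sketch, not the actual contents of setup.yml; the real steps and variable wiring live in the repository above:

version: '1.0'
steps:
  provision_cluster:
    title: Bootstrap EKS cluster and VPC with Terraform
    image: hashicorp/terraform:0.11.7   # version pinned for illustration only
    commands:
      # step name and variable wiring are illustrative; see .codefresh/setup.yml
      - terraform init
      - terraform apply -auto-approve -var cluster_name=${{CLUSTER_NAME}}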

There is also a corresponding teardown.yml that:

  1. Loads the Terraform statefile from Codefresh context
  2. Destroys the EKS cluster from your AWS account using Terraform
  3. Removes the cluster from your Codefresh account

Follow the instructions below to set up these pipelines in your account. After clicking the “Build” button, your cluster should be ready to use in 10-20 minutes!

Setting up the Pipelines

Add Repository and setup.yml pipeline

In your Codefresh account, at the top right of your screen click the “Add Repository” button. Turn on “Add by URL”. Enter the following repository URL (or create and use a fork):
https://github.com/codefresh-io/eks-installer

Click “Next”. Click the “Select” button under “I have a Codefresh.yml file”. For the path to codefresh.yml, enter the following:
.codefresh/setup.yml

Click through the rest of the dialogue to create the setup.yml pipeline.

Configure Triggers

Before going forward, make sure to delete any unwanted trigger configuration that may result in an unexpected EKS cluster launch.

Add teardown.yml pipeline

In the same repository view, click the “Add Pipeline” link. Name this pipeline something like “eks-uninstaller”.

At the bottom of the page, in the “Workflow” section, select “YAML”. Click “Use YAML from Repository”. Enter the following:
.codefresh/teardown.yml

Click “Save”.

Setup Environment Variables

Under the “General” tab, add the following global variables to be used by both of the pipelines:

AWS_ACCESS_KEY_ID encrypted – AWS access key ID
AWS_SECRET_ACCESS_KEY encrypted – AWS secret access key
CLUSTER_NAME – unique EKS cluster name

Additionally, you can add the following optional variables for fine-tuned setup:

CLUSTER_SIZE – number of nodes in ASG (default: 1)
CLUSTER_REGION – AWS region to deploy to (default: us-west-2)
CLUSTER_INSTANCE_TYPE – EC2 instance type (default: m4.large)

Note that at the time of writing, EKS is only available in regions us-east-1 and us-west-2 (and seems to have reached capacity in us-east-1). Your best bet is to stick with us-west-2 for now.

Click “Save”.

Create new EKS Cluster

At this point, all you need to do is click “Build” on the setup.yml pipeline (eks-installer)

and wait…

Once the build is complete, navigate to the Kubernetes services page to view your newly-created EKS cluster in Codefresh:

You can then deploy to this cluster from your pipelines, etc.

Teardown EKS Cluster

Similar to steps above, all you need to do to teardown your EKS cluster is to click “Build” on the teardown.yml pipeline (eks-uninstaller)

and wait…

Once the build is complete, the EKS cluster and all associated AWS resources will be destroyed, and the cluster will be removed from your Codefresh account.

Source

Introducing the Non-Code Contributor’s Guide

Authors: Noah Abrahams (InfoSiftr), Jonas Rosland (VMware), Ihor Dvoretskyi (CNCF)

It was May 2018 in Copenhagen, and the Kubernetes community was enjoying the contributor summit at KubeCon/CloudNativeCon, complete with the first run of the New Contributor Workshop. It was a time of tremendous collaboration between contributors, with topics ranging from signing the CLA to deep technical conversations. Along with the vast exchange of information and ideas, however, came continued scrutiny of the topics at hand to ensure that the community was being as inclusive and accommodating as possible. Over that spring week, the pieces under the microscope included not only the many themes being covered and how they were being presented, but also the overarching characteristics of the people contributing and the skill sets involved. From the discussions and analysis that followed grew the idea that the community was not benefiting as much as it could from the many people who wanted to contribute, but whose strengths were in areas other than writing code.

This all led to an effort called the Non-Code Contributor’s Guide.

Now, it’s important to note that Kubernetes is rare, if not unique, in the open source world, in that it was defined very early on as both a project and a community. While the project itself is focused on the codebase, it is the community of people driving it forward that makes the project successful. The community works together with an explicit set of community values, guiding the day-to-day behavior of contributors whether on GitHub, Slack, Discourse, or sitting together over tea or coffee.

By having a community that values people first, and explicitly values a diversity of people, the Kubernetes project is building a product to serve people with diverse needs. The different backgrounds of the contributors bring different approaches to the problem solving, with different methods of collaboration, and all those different viewpoints ultimately create a better project.

The Non-Code Contributor’s Guide aims to make it easy for anyone to contribute to the Kubernetes project in a way that makes sense for them. This can be in many forms, technical and non-technical, based on the person’s knowledge of the project and their available time. Most individuals are not developers, and most of the world’s developers are not paid to fully work on open source projects. Based on this we have started an ever-growing list of possible ways to contribute to the Kubernetes project in a Non-Code way!

Get Involved

There are many ways to contribute to the Kubernetes community without writing a single line of code.

The guide to getting started with Kubernetes project contribution is documented on GitHub, and as the Non-Code Contributor’s Guide is a part of that Kubernetes Contributor Guide, it can be found here. As stated earlier, this list is not exhaustive and will continue to be a work in progress.

To date, the typical Non-Code contributions fall into the following categories:

  • Roles that are based on skill sets other than “software developer”
  • Non-Code contributions in primarily code-based roles
  • “Post-Code” roles, that are not code-based, but require knowledge of either the code base or management of the code base

If you, dear reader, have any additional ideas for a Non-Code way to contribute, whether or not it fits in an existing category, the team would always appreciate your help in expanding the list.

If a contribution of the Non-Code nature appeals to you, please read the Non-Code Contributions document, and then check the Contributor Role Board to see if there are any open positions where your expertise could be best used! If there are no listed open positions that match your skill set, drop on by the #sig-contribex channel on Slack, and we’ll point you in the right direction.

We hope to see you contributing to the Kubernetes community soon!

Source

Adventures of the Kubernetes Vacuum Robots // Jetstack Blog

18/Jun 2018

By Hannah Morris

Have you ever wondered how to run kubelet on a vacuum robot?

Our guess is, you haven’t – and nor have many other people. However, this didn’t stop Christian’s talk from attracting a large following at KubeCon Europe 2018, nor did it deter some curious conference goers from attempting to win a robot of their own!

You’ll be happy to hear that the robots will be back on stage in Hamburg, where Christian will be talking at Containerdays 2018.

This blog post recounts Christian’s journey with his 3 vacuum robots.

One of the Team

Words of Wisdom from a Domestic God

Christian’s talk starred the Xiaomi Mi Vacuum Robot, an affordable piece of kit (in case you were interested in investing). Inspired by a talk at 34C3 in 2017 – which revealed how to gain root access to the Ubuntu Linux operating system of the vacuum – Christian first explained how the vacuum can be provisioned as a node in a Kubernetes cluster, and then how Kubernetes primitives can be used to control it:

  • CronJobs periodically schedule drives (see the sketch after this list)
  • a custom Prometheus exporter is used to track metrics of a vacuum’s life
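
To make the CronJob idea concrete, a nightly drive could be scheduled with something like the sketch below. The image name and command are hypothetical, not Christian’s actual setup:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-clean
spec:
  schedule: "0 3 * * *"                # drive every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: start-clean
            image: vacuum-ctl:latest   # hypothetical controller image
            args: ["start-cleaning"]   # hypothetical command
          restartPolicy: OnFailure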

Using custom controllers and CRDs, extended features of the vacuum can be utilised to:

  • request raw sensor readings
  • dump a map of your home
  • allow the vacuum to drive custom paths

Robots in Training

Along with 14 Jetstackers, 3 vacuum robots flew to Copenhagen in early May for the conference. They stayed with Christian in a nice Danish houseboat, which became the designated robot training ground. Christian had them running circles around the living room, as well as fetching him the necessary fuel to keep his strength up ahead of the talk…

K8s Beer Run

Christian trained his robots up well

Why running kubelet on your vacuum robot is (not) a good idea!

Those who attended Christian’s talk learnt all about running kubelet on a vacuum robot to make their household chores more interesting, if not easier.

Another thing we all took away from the talk is that conference WiFi should never be trusted: the robots were disobedient in the live demo, and – alas! – the stage at KubeCon was left dusty.

Robot Relocation

Following the talk, we had a surprise in store: It was revealed that vacuum robot #3 was to be rehomed, and that one lucky conference goer would have the privilege of taking it away with them!

We decided to pick names from a hat to find our winner. In the moments leading up to the draw, the Jetstack stand was surrounded by a crowd of budding domestic gods and goddesses, all eager to be in with the chance to vacuum their homes with the aid of Kubernetes.

Christian drew the name of the lucky winner from Richard’s cap: Congratulations were in order for Carolina Londoño, who took her new vacuum robot home with her – all the way to Colombia!

Christian with Carolina in Copenhagen; vacuum robot #3 en route to Colombia

Containerdays, Hamburg 2018

Catch Christian and his vacuum robots at Containerdays 2018 in Hamburg on Tuesday 19th June at 17.20. Here’s hoping they clean up this time (literally!)

Source

Community Contribution: Heptio Ark Plugin for DigitalOcean

A central tenet of Heptio’s culture is building honest technology. To hold ourselves accountable, we measure the impact of our projects in the open source community and the number of contributions from partners.

Today, we’re excited to announce that StackPointCloud expanded the Heptio Ark ecosystem through their development of a Heptio Ark Block Storage plugin for DigitalOcean. You can read their blog on the plugin, including a simple how-to description, right here.

Out of the box, Heptio Ark enables disaster recovery by backing up Kubernetes resources and snapshotting Persistent Volumes for clusters running in Google Cloud, Amazon Web Services, and Microsoft Azure. Disaster recovery for other Persistent Volumes is delivered through a Restic integration that enables filesystem-based snapshotting.

Now with the DigitalOcean Heptio Ark plugin, users can take native snapshots of their DigitalOcean Block Storage Volumes. Additionally, DigitalOcean Spaces provides an S3-compatible API, so users can store their Heptio Ark backups in local object storage. These features improve the speed of backups and offer a consistent user experience across cloud providers.

It’s remarkable how quickly Kubernetes has grown as evidenced by the recently announced DigitalOcean Kubernetes offering. DigitalOcean joins the increasing number of cloud service providers that offer managed Kubernetes clusters. Developers and operators can have confidence that Heptio Ark provides consistent disaster recovery regardless of where they run Kubernetes.

Learn more

Want to learn more about managing Kubernetes disaster recovery using Heptio Ark or need to develop a custom plugin? We recommend joining our Google group and Slack channel. Or, if you’re interested in contributing to Heptio Ark, you’ll find several GitHub issues labeled as Good First Issue and Help Wanted. Take a look — we would welcome your participation!

Source

Giant Swarm vs OpenShift – Giant Swarm

Giant Swarm vs OpenShift

At Giant Swarm, we’ve often been asked to compare our infrastructure with that of Red Hat OpenShift. We’d like to shed some light on this subject and give you a rundown of the differences between Giant Swarm and OpenShift.

No doubt Red Hat OpenShift is a leading container platform, or as they put it themselves “The Kubernetes platform for big ideas”. Red Hat is one of the major contributors to the open-source Kubernetes project and announced they would use Kubernetes for container orchestration with OpenShift Enterprise 3 in summer 2015. Many enterprises decided early on to use OpenShift as a platform for their containerized applications.

As OpenShift is widely used today, this raises the question of why the world needs another offering such as Giant Swarm, and how it’s different. So, let’s take a deeper look at how Giant Swarm compares to OpenShift.

At Giant Swarm we’re driven by customer obsession. This means we always start with the “WHY”, challenging what drives our customers, their problems, and the pains they face, again and again, to understand what they really need and want.

From working with our enterprise customers, and the talks we have with many others, we’ve learned that they want to increase their business agility and developer productivity, above all to gain a competitive edge in the digital era. They want their applications to run resiliently and to be scalable across their own data-centers and the public clouds. Additionally, most want to stay flexible and avoid lock-in in this ever-changing world. Obviously, cost savings often play a role too.

To gain business agility and increase developer productivity they want their development teams to be empowered, have the freedom to use the right tools for the job, and easily run Cloud Native projects on-demand at scale, reliably, and without having to manage the underlying infrastructure. This is so they can focus on what they do best – working on their applications.

Key motivations to become Cloud Native

Business Agility
Developer Productivity
Resiliency
Scalability
Cost Savings

These customer motivations have strong implications on our product, our architecture, and even our business model. Next, we’ll cover the key differentiators that explain how Giant Swarm is different, how we add value during our customer’s Cloud Native journeys, and why recently more and more enterprises are choosing Giant Swarm over traditional PaaS platforms like OpenShift.

1. Multitenancy: Hard Isolation vs. Soft Isolation

Giant Swarm believes that soft in-cluster separation by namespaces doesn’t provide the high level of security often required by enterprises or the freedom that development teams need to ship features quickly. We’re not alone with this opinion as Kelsey Hightower, Staff Developer Advocate at Google Cloud Platform, recommends in his Keynote at KubeCon Austin 2017 “Run Kubernetes clusters by org-chart”.

You need to put a lot of effort into securing in-cluster environments and it gets harder the larger the company and the more tenants you want to run in a cluster. This is especially true for teams that are just getting started with Kubernetes, as they’re sometimes running wild and breaking things. It’s a lot better to provide each team with a separate cluster to start with and then re-evaluate further down the line towards consolidating some workloads onto larger clusters.

At Giant Swarm we saw this early on, and decided to build an API driven platform enabling customers to easily provision and scale fully isolated Kubernetes clusters. These tenant clusters have both network and resource isolation from all other tenant clusters. In on-premises data-centers, we run the tenant clusters on top of a host cluster which also runs Giant Swarm’s control plane. In public clouds, such as AWS and Azure, the tenant clusters are managed by the same control plane, but run independently, each in their own virtual network.

The control plane consists of many micro-services and operators. These operators are custom controllers that automate managing the tenant clusters and related components. This allows our customers to have as many fully isolated Kubernetes clusters as they require, meaning they enjoy a higher security standard due to the additional isolation. But that’s not all – it empowers development teams as they can easily create their own clusters using their preferred config and tooling. They can also decide for themselves when it’s time for an upgrade – this truly allows for multi-project operations and prevents the team from becoming blockers for others when upgrading, something that we’ve seen happening often at large organizations using OpenShift.

Furthermore, Giant Swarm’s solution allows you to easily provision a new cluster with the latest Kubernetes version, test it, and then move applications. Most of Giant Swarm’s customers are using more than 10 tenant clusters within a few weeks. As well as per-team clusters, they separate per environment for dev, staging, pre-prod, and especially production. We even have customers that have integrated cluster provisioning into their CI/CD pipelines, so their tests are executed in a fresh cluster and cleaned up afterward.

Key benefits as a result of Giant Swarm’s Architecture with Hard Isolation

Efficient Multi-Project Operations – you can easily start as many fully isolated clusters as you need.
Empowered Development Teams – each team can get their own clusters with their preferred config and tooling.
Enhanced Security – due to the hard isolation of each tenant cluster.
Faster Update Cycles – teams can upgrade independently of each other, instead of becoming a blocker for each other.
Increased Flexibility – you can easily provision and test clusters with different Kubernetes versions on different infrastructures.

2. Continuous Updates vs. 1-2 Major Upgrades per Year

Giant Swarm believes that the fast eat the slow. That’s why Giant Swarm has a CI/CD pipeline into every installation, enabling us to upgrade all the components of the whole stack of open source products at any time, and to keep our customers at the leading edge of the fast-evolving Cloud Native ecosystem. This is especially important at a time when most projects, such as Kubernetes, have a major release every quarter. Additionally, having a CI/CD pipeline allows for daily zero-touch updates to continuously improve the container platform and customer experience, as well as to roll out hot-fixes immediately to prevent customers from running into more serious issues. Of course, our system asks our customers for permission, and they need to actively accept major releases with the possibility of breaking changes.

With smaller continuous updates fewer changes are made at a time, so it’s easier to identify problems. The toolchain and automation become more robust the more frequently teams perform upgrades. As they say: “If it hurts, do it more often”. Upgrades usually do hurt, and people tend to avoid doing them regularly. This gets worse with increasing complexity – at some point you will run into interlocking between components. For example, you can’t upgrade Prometheus because it requires a newer version of Kubernetes, but you want to, because the latest version of Prometheus fixes a bug that you’re experiencing in production.

Across the hundreds of Kubernetes clusters we’re managing in production for our customers we’re experiencing more than 80% of all issues on clusters that have not been updated for 90+ days. This clearly shows that 1 or 2 major upgrades per year are simply not enough to stay secure and run reliably.

This also brings us back to point 1. We’ve encountered large enterprises that still haven’t been able to upgrade from OpenShift 3.4. This means they’re still running on Kubernetes 1.4, which is a long way behind, whereas Giant Swarm guarantees to provide the latest version of Kubernetes within 30 days of its release. At the time of writing, Giant Swarm customers can already use Kubernetes 1.11, which is 18 months ahead of version 1.4.

Key benefits as a result of Continuous Updates

Staying Always Ahead – Giant Swarm ships improvements every day and guarantees to provide updates of the many open source components within 30 days of the latest release.
Increased Reliability – Giant Swarm keeps your cluster always up-to-date and prevents you from running into many possible issues: suffering from bugs that have already been fixed or, even worse, interlocking between components.
Enhanced Security – Giant Swarm fixes any security issue and rolls out the update immediately via our CI/CD pipeline into your installation.

3. Managed Solution vs. Managing a Third-Party Platform

Giant Swarm believes in the DevOps concept: “You build it, you run it”. This approach has been shown to empower companies to build better software faster, gaining business agility and developer productivity as development cycles become much shorter. You might have experienced this already from building your applications the DevOps way. The same is now true for infrastructure, as infrastructure has become code.

That’s why Giant Swarm is not only providing you with a container platform but also managing it 24/7 for you. This means we’re not just selling you a product as a traditional PaaS player will do but taking full responsibility that it is up-to-date and operational at all times – and it also allows us to make our product better every day.

Today, we’re already managing hundreds of Kubernetes clusters in production for world-class companies across on-premises data-centers and public clouds. This gives us unique insights, as we run into more issues than anyone else in the marketplace. We see issues early on and at scale, and can respond to them accordingly. Whenever we discover an issue in one of the many clusters of our customers, we create a postmortem and fix it with code. Every release and change is tested automatically in our CI/CD pipeline and then rolled out immediately into all installations, ensuring that other customers do not run into the same issue. This creates a positive network effect for all our customers, as they simply run into fewer issues as more customers join Giant Swarm. It makes Giant Swarm’s container platform and all of our customers’ Kubernetes clusters more secure, robust, and reliable.

Additionally, our approach comes with economies of scale. It’s simply more efficient to manage hundreds – if not thousands of clusters – on a platform you control and can update anytime instead of managing a third-party platform where you have little or no influence on changes.

Hiring a third-party provider to manage another company’s platform is even worse – think how long it would take if one of your factories in China were to experience a problem with a cluster from an external service provider. They would have to report this back to your HQ, who would then forward it to the provider, who may then even need to open a support ticket with the vendor. It would take ages until you get a response – and even longer until you get a solution that resolves the problem. Now imagine this with a critical problem, where you’re experiencing downtime.

To make our customers’ lives better and to prevent the frustration and long response times of a long support chain, we give our customers direct access to our engineers via a private Slack channel that allows us to provide an immediate, qualified response. We’re basically becoming part of our customer’s internal platform team, taking full responsibility that their container platform and all their clusters are up-to-date and operational at all times. As one of our Fortune 500 customers says: “We break it. Giant Swarm fixes it”.

Key benefits as a result of a Fully Managed Solution

Faster Development – development teams can focus on their applications instead of spending time and energy taking care of the complex underlying infrastructure.
Increased Reliability – Giant Swarm manages hundreds of Kubernetes clusters meaning problems are often found and fixed for another customer before they can affect your clusters.
Enhanced Security – Giant Swarm takes care of keeping all your tenant clusters secure and running well at all times.

4. Open Source with Freedom of Choice vs. Distribution with Limitations and Lock-In

Giant Swarm believes in providing customers with freedom of choice, allowing their development teams to choose the right tool for the job instead of having to use the pre-configured, opinionated tools provided by traditional PaaS platforms such as OpenShift, which can limit them.

That’s why Giant Swarm provides vanilla Kubernetes anywhere. You’re not only getting a conformant Kubernetes, which you can get from many vendors, but the same plain vanilla Kubernetes in your on-premises data-centers and at leading cloud providers, preventing lock-in. This allows you to move your workloads easily from a Kubernetes cluster managed by Giant Swarm to any other vanilla Kubernetes cluster.

Betting on plain vanilla Kubernetes and working closely with the community has allowed Giant Swarm to use alpha features early on and some large enterprises to get into production using RBAC and Pod Security Policies while they were still in alpha.

At Giant Swarm we keep our open source code in public GitHub repositories where it is free to use, instead of trying to lock customers into our solution. Obviously, this has some implications for our business model, as we will explain in the next section.

Key benefits as a result of a Pure Open Source Solution

Faster Development – Giant Swarm allows your development teams to use the right tool for the job – instead of requiring them to use pre-configured, opinionated tools.
Increased Scalability + Flexibility – Giant Swarm provides pure vanilla Kubernetes on any infrastructure and prevents lock-in as you can easily move your workloads to another infrastructure provider.
Staying Always Ahead – Giant Swarm always provides you with the latest Kubernetes version – supporting you to even run Kubernetes alpha features in production.

5. Managed Service Subscription vs. Licenses + Enterprise Support

Giant Swarm believes that infrastructure software is becoming open source as the Cloud Native community is building better solutions than closed source vendors could do alone. The development and improvements of all these open source components are happening so fast that providing a platform with 1 or 2 major upgrades per year is simply not enough anymore. The added value is in providing a fully managed solution, taking responsibility that your business is operational at all times.

That’s why Giant Swarm doesn’t charge a license fee plus additional enterprise support as traditional PaaS providers do. Instead, Giant Swarm charges only a usage-based subscription fee for its Fully Managed Cloud Native Platform.

When it comes to Total Cost of Ownership (TCO), Giant Swarm’s offering will always be more cost-efficient. Compare this to building and managing a container platform yourself, paying a license fee for a traditional PaaS and managing it yourself, or even hiring an external service provider to manage it for you. This is simply because Giant Swarm owns the closed loop of building and managing the platform, with economies of scale from managing hundreds, if not thousands, of Kubernetes clusters on top of it in the near future. Several of our customers have confirmed that when considering TCO, Giant Swarm’s offering has clear cost-savings advantages in comparison to managing another vendor’s platform (such as OpenShift) themselves, or hiring a third-party service provider.

Key Benefits as a result of a Managed Service Model

Cost Savings – as Giant Swarm has lower total costs of ownership and passes these savings on to our customers.

Giant Swarm has rethought the traditional PaaS model of vendors such as Red Hat OpenShift and has come up with a solution that is in many ways fundamentally different. These differences allow Giant Swarm to add a lot of extra value, especially for enterprises’ key objectives:

Business Agility – Giant Swarm’s customers can efficiently run multiple Cloud Native projects on-demand, at scale, and reliably. Giant Swarm takes the complexity away from our customers, making sure that their Cloud Native infrastructure is up-to-date and operational at all times, as well as providing excellent hands-on support. This further accelerates our customers’ Cloud Native journeys, so they can thrive in the digital era.
Developer Productivity – Giant Swarm provides development teams with freedom of choice instead of making them use pre-configured tools. Developers can easily get their own clusters with their preferred config and tooling, upgrade when they are ready without blocking others, and have the freedom to break clusters while Giant Swarm fixes them.
Resiliency – Giant Swarm keeps the Cloud Native platform and all tenant clusters up-to-date and its customers’ workloads operational at all times. Giant Swarm proactively prevents more than 80% of all potential issues thanks to the positive network effect of managing clusters for many enterprises and daily zero-touch updates via our CI/CD pipeline into every installation. This only gets better as more companies join Giant Swarm.
Security – Giant Swarm provides a higher security standard thanks to the hard isolation of every cluster, updating all the open source components at all times, and the immediate rollouts of hot-fixes for potential security issues via the CI/CD pipeline into every installation.
Scalability – Giant Swarm provides and manages plain vanilla Kubernetes in your data-centers and preferred cloud providers. Customers can efficiently start and scale 100+ clusters across private and public clouds around the globe.
Cost Savings – Customers have stated the total cost of ownership of Giant Swarm’s solution is clearly lower than doing it yourself, managing a traditional packaged PaaS such as OpenShift, or hiring a third-party service provider.

These huge benefits have convinced world-class companies including several Fortune 500 companies to choose Giant Swarm over OpenShift – and even some to move away from OpenShift to Giant Swarm.

While Red Hat OpenShift is an established product trusted by many, with plenty of companies having close relationships with the market leader, we are convinced that if you’re optimizing for the key objectives mentioned in this article, Giant Swarm is the better solution. We will keep working hard to support you to win in the digital era. We will let you focus on what you do best, taking complexity away from you, allowing you to break clusters, have a coffee during the day, and rest well at night while we fix them for you.

Want to learn more? Please, get in touch via our website or schedule a discovery call with me.

Source

Image Management & Mutability in Docker and Kubernetes

May 15, 2018

by Adrian Mouat

Kubernetes is a fantastic tool for building large containerised software systems in a manner that is both resilient and scalable. But the architecture and design of Kubernetes has evolved over time, and there are some areas that could do with tweaking or rethinking. This post digs into some issues related to how image tags are handled in Kubernetes and how they are treated differently in plain Docker.

First, let’s take a look at one of the first issues that people can face. I have the following demo video that shows a developer trying to deploy a new version of a Rust webapp to a Kubernetes cluster.

The video starts by building and pushing a version of the pizza webapp that serves quattro formaggi pizza. The developer then tries to deploy the webapp to Kubernetes and ends up in a confusing situation where it’s running, yet not serving the kind of pizza we expect. We can see what’s going on by doing some more inspection.

It turns out 3 different versions of our webapp are running inside a single Kubernetes Replica Set, as evidenced by the 3 different digests.

The reason this can happen comes down to the Kubernetes imagePullPolicy. The default is IfNotPresent, which means nodes will use an existing image rather than pull a new one. In our demo, each node happened to have a different version of the image left over from previous runs. Personally, I’m disappointed that this is the default behaviour, as it’s unexpected and confusing for new users. I understand that it evolved over time and in some cases is the desired behaviour, but we should be able to change this default for the sake of usability.

The simplest mitigation for the problem is to set the pull policy to Always. For example, in the pod spec (a minimal illustration reusing the demo image name):
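
spec:
  containers:
  - name: pizza
    image: amouat/pizza:today
    imagePullPolicy: Always   # always check the registry instead of trusting a cached image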

This can even be made the default for all deployments by using the AlwaysPullImages Admission Controller.

However, there is still a rather large hole in this solution. Imagine a new deployment occurs concurrently with the image being updated in the registry. It’s quite likely that different nodes will pull different versions of the image even with Always set. We can see a better solution in the way Docker Swarm Mode works – the Swarm Mode control plane will resolve images to a digest prior to asking nodes to run the image; that way all containers are guaranteed to run the same version of the image. There’s no reason we can’t do something similar in Kubernetes using an Admission Controller, and my understanding is that Docker EE does exactly this when running Kubernetes pods. I haven’t been able to find an existing open source Admission Controller that does this, but we’re working on one at Container Solutions and I’ll update this post when I have something.
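
In the meantime, the same guarantee can be had by hand, since Kubernetes accepts digest references directly in manifests. A minimal sketch with a placeholder digest (substitute the value your registry reports):

spec:
  containers:
  - name: pizza
    # placeholder digest; use the one printed by docker push or shown by your registry
    image: amouat/pizza@sha256:<digest>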

Going a little deeper, the real reason behind this trouble is a difference between the way image tags are viewed in Kubernetes and Docker. Kubernetes assumes image tags are immutable. That is to say, if I call my image amouat/pizza:today, Kubernetes assumes it will only ever refer to that unique image; the tag won’t get reused for new versions in the future. This may sound like a pain at first, but immutable images solve a lot of problems; any potential confusion about which version of an image a tag refers to simply evaporates. It does require using an appropriate naming convention; in the case of amouat/pizza:today a better version would be to use the date e.g. amouat/pizza:2018-05-12, in other cases SemVer or git hashes can work well.

In contrast, Docker treats tags as mutable and even trains us to think this way. For example, when building an application that runs in a container, I will repeatedly run docker build -t test . or similar, constantly reusing the tag so that the rest of my workflow doesn’t need to change. Also, the official images on the Docker Hub typically have tags for major and minor versions of images that get updated over time e.g. redis:3 is the same image as redis:3.2.11 at the time of writing, but in the past would have pointed at redis:3.2.10 etc.

This split is a real practical problem faced by new users. Solving it seems reasonably straightforward; can’t we have both immutable and mutable tags? This would require support from registries and (preferably) the Docker client, but the advantages seem worth it. I am hopeful that the new OCI distribution specification will tackle this issue.

To sum up: be careful when deploying images to Kubernetes and make sure you understand how images actually get deployed to your cluster. And if you happen to have any influence on the direction of Kubernetes or the Distribution spec, can we please try to make the world a bit nicer?

Because of these and some other issues, Container Solutions have started work on Trow; an image management solution for Kubernetes that includes a registry component that runs inside the cluster. Trow will support immutable tags and include admission controllers that pin images to digests. If this sounds useful to you, please head over to trow.io and let us know!

Further Viewing

This blog was based on my talk Establishing Image Provenance and Security in Kubernetes given at KubeCon EU 2018, which goes deeper into some of the issues surrounding images.

Looking for a new challenge? We’re hiring!

Source

What’s new in Kubernetes 1.12


Kubernetes 1.12 will be released this week, on Thursday, September 27, 2018. Version 1.12 ships just three months after Kubernetes 1.11 and marks the third major release of this year. The short cycle is in line with the quarterly release cadence the project has followed since its GA in 2015.

Kubernetes releases 2018

| Kubernetes Release | Date               |
|--------------------|--------------------|
| 1.10               | March 26, 2018     |
| 1.11               | June 27, 2018      |
| 1.12               | September 27, 2018 |

Whether you are a developer using Kubernetes or an admin operating clusters, it’s worth getting an idea about the new features and fixes that you can expect in Kubernetes 1.12.

A total of 38 features are included in this milestone. Let’s have a look at some of the highlights.

Kubelet certificate rotation

Kubelet certificate rotation was promoted to beta status. This functionality allows for automated renewal of the key and certificate for the kubelet’s API server as the current certificate approaches expiration. Until the official 1.12 docs have been published, you can read the beta documentation on this feature here.
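
For reference, here is a minimal sketch of the relevant kubelet configuration, assuming the kubelet is started with a config file (field names as of the 1.12 beta and subject to change):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true    # renew the client certificate as it approaches expiration
serverTLSBootstrap: true    # request serving certificates from the cluster so they can rotate too
featureGates:
  RotateKubeletServerCertificate: true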

Network Policies: CIDR selector and egress rules

Two formerly beta features have now reached stable status: One of them is the ipBlock selector, which allows specifying ingress/egress rules based on network addresses in CIDR notation. The second one adds support for filtering the traffic that is leaving the pods by specifying egress rules. The below example illustrates the use of both features:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
  (…)

As previously beta features, both egress and ipBlock are already described in the official network policies documentation.

Mount namespace propagation

Mount namespace propagation, i.e. the ability to mount a volume rshared so that any mounts from inside the container are reflected in the root (= host) mount namespace, has been promoted to stable. You can read more about this feature in the Kubernetes volumes docs.
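
As a quick illustration, propagation is requested per volume mount; a minimal sketch (Bidirectional, i.e. rshared, requires a privileged container):

apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo
spec:
  containers:
  - name: mounter
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true                   # Bidirectional propagation requires privileged mode
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt
      mountPropagation: Bidirectional    # mounts created in the container appear on the host
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt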

Taint nodes by condition

This feature, introduced in 1.8 as an early alpha, has been promoted to beta. Enabling its feature flag causes the node controller to create taints based on node conditions, and the scheduler to filter nodes based on taints instead of conditions. The official documentation is available here.
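
In practice, pods then opt back in to such nodes with tolerations; for example, a sketch of a toleration for the memory-pressure taint that the node controller creates:

tolerations:
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule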

Horizontal pod autoscaler with custom metrics

While support for custom metrics in HPA continues to be in beta status, version 1.12 adds various enhancements, like the ability to select metrics based on the labels available in your monitoring pipeline. If you are interested in autoscaling pods based on application-level metrics provided by monitoring systems such as Prometheus, Sysdig or Datadog, I recommend checking out the design proposal for external metrics in HPA.
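
To make the label-selector enhancement concrete, here is a sketch using the autoscaling/v2beta2 API introduced in 1.12; the metric name and labels are hypothetical and depend entirely on your metrics pipeline:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical metric from your monitoring pipeline
        selector:
          matchLabels:
            verb: GET                    # hypothetical label narrowing the metric series
      target:
        type: AverageValue
        averageValue: "100"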

RuntimeClass

RuntimeClass is a new cluster-scoped resource “that surfaces container runtime properties to the control plane”. In other words: This early alpha feature will enable users to select and configure (per pod) a specific container runtime (such as Docker, Rkt or Virtlet) by providing the runtimeClass field in the PodSpec. You can read more about it in these docs.
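
A sketch of what this looks like with the v1alpha1 API; the handler name is hypothetical and must match what the node’s CRI runtime is configured with:

apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: kata          # hypothetical handler configured in the CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed   # select the runtime per pod
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]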

Resource Quota by priority

Resource quotas allow administrators to limit the resource consumption in namespaces. This is especially practical in scenarios where the available compute and storage resources in a cluster are shared by several tenants (users, teams). The beta feature Resource quota by priority allows admins to fine-tune resource allocation within the namespace by scoping quotas based on the PriorityClass of pods. You can find more details here.
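
A sketch of a quota scoped to pods of a given PriorityClass (the class name “high” is hypothetical and must exist in the cluster):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
  namespace: default
spec:
  hard:
    pods: "10"
    requests.cpu: "8"
  scopeSelector:
    matchExpressions:
    - scopeName: PriorityClass
      operator: In
      values: ["high"]   # hypothetical PriorityClass name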

Volume Snapshots

One of the most exciting new 1.12 features for storage is the early alpha implementation of persistent volume snapshots. This feature allows users to create and restore snapshots at a particular point in time backed by any CSI storage provider. As part of this implementation three new API resources have been added:
VolumeSnapshotClass defines how snapshots for existing volumes are provisioned. VolumeSnapshotContent represents existing snapshots, and VolumeSnapshot allows users to request a new snapshot of a persistent volume like so:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-test
spec:
  snapshotClassName: csi-hostpath-snapclass
  source:
    name: pvc-test
    kind: PersistentVolumeClaim

For the nitty-gritty details take a look at the 1.12 documentation branch on GitHub.
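
Restoring works the other way around: a new PersistentVolumeClaim references the snapshot as its data source. A minimal sketch, assuming a CSI storage class that supports snapshots (the class name here is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored
spec:
  storageClassName: csi-hostpath-sc    # hypothetical CSI storage class
  dataSource:
    name: new-snapshot-test            # the VolumeSnapshot requested above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi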

Topology aware dynamic provisioning

Another storage-related feature, topology aware dynamic provisioning, was introduced in v1.11 and has been promoted to beta in 1.12. It addresses some limitations with dynamic provisioning of volumes in clusters spread across multiple zones, where single-zone storage backends are not globally accessible from all nodes.
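
The main lever here is the StorageClass volumeBindingMode: WaitForFirstConsumer delays provisioning until a pod using the claim is scheduled, so the volume lands in the right zone. A sketch (provisioner and zone names are examples only):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/gce-pd          # example in-tree provisioner
volumeBindingMode: WaitForFirstConsumer    # provision only once a consuming pod is scheduled
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a                        # example zones
    - us-central1-b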

Enhancements for Azure Cloud provider

These two improvements regarding running Kubernetes in Azure are shipping in 1.12:

Cluster autoscaler support

The cluster autoscaler support for Azure was promoted to stable. This will allow for automatic scaling of the number of Azure nodes in Kubernetes clusters based on global resource usage.

Azure availability zone support

Kubernetes v1.12 adds alpha support for Azure availability zones (AZ). Nodes in an availability zone will be added with the label failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and topology-aware provisioning is added for the Azure managed disks storage class.

Anything else?

Kubernetes 1.12 contains many bug fixes and improvements to internal components, clearly focusing on stabilising the core, maturing existing beta features, and improving release velocity by adding more automated tests to the project’s CI pipeline. A noteworthy example of the latter is the addition of CI e2e conformance tests for the arm, arm64, ppc64, s390x and Windows platforms to the project’s test harness.

For a full list of changes in 1.12 see the release notes.

Rancher will support Kubernetes 1.12 on hosted clusters as soon as it becomes available on the particular provider. For RKE provisioned clusters it will be supported starting with Rancher 2.2.

Source

The Operator Metering project is now available

We recently open sourced the Operator Framework and today we’re happy to share the next milestone: Operator Metering. Operator Metering is designed to help you gain more knowledge about the usage and costs of running and managing Kubernetes native applications (Operators). It joins the other components of the Operator Framework family – the SDK and Lifecycle Management – an open source toolkit designed to manage Operators in a more effective, automated, and scalable way.

Now available as an open source version, Operator Metering enables usage reporting for Operators that provide specialized lifecycle services for applications running on Kubernetes. The project is designed to tie into the cluster’s CPU and memory reporting, as well as calculate Infrastructure-as-a-Service (IaaS) costs and customized metrics such as licensing. Examples of such services could be metering products running in Kubernetes for use in on-demand billing or to derive DevOps insights, such as tracking heal operations across Gluster storage clusters.

We believe Operator Metering will enable Kubernetes operations teams to associate the cost of their underlying infrastructure with the applications running on their Kubernetes clusters in a consistent way, across any environment, be it on public cloud infrastructure or on premises.

Metering is an important tool for organizations that use capacity from a Kubernetes cluster to run their own services, as well as for IT departments that manage these clusters. In the past, many departments tended to overestimate their resource needs. This could easily result in wasted capacity and wasted capital.

Today, management teams want to understand more concretely where budget is spent and by whom, and for which service. Metering provides that information, providing an understanding of how much it costs to run specific services, while also providing usage information that can lead to improved budgeting and capacity planning. With this information, IT can also internally bill departments to reflect the costs directly associated with their actual infrastructure usage, driving accountability for service costs. This helps to eliminate some of the more manual IT “plumbing” work in tallying costs and usage by hand or managing spreadsheets – instead, by using metering, IT teams can free up their time to tackle bigger problems and even drive business-wide innovation.

Here are some examples of how metering could be applied in the real world:

  • Cloud budgeting: Teams can gain insight into how cloud resources are being used, especially in autoscaled clusters or hybrid cloud deployments.
  • Cloud billing: Resource usage can be tracked by billing codes or labels that reflect your internal hierarchy.
  • Telemetry/aggregation: Service usage and metrics can be viewed across many namespaces or teams, such as a Postgres Operator running hundreds of databases.

We are extremely excited to share Operator Metering with the community as part of our commitment to making Kubernetes more extensible and widely usable. Operator Metering is currently in alpha, so we are looking forward to your feedback and comments on bugs, tweaks, and future updates. Our current plan is to incorporate feedback, stabilize the code base, and fix any critical problems before moving on to adding more features.

Learn more about the Operator Metering project at https://github.com/operator-framework/operator-metering.

Source

Deploying to Azure Kubernetes with Helm, Draft, and Codefresh

In this tutorial, we will use Azure Kubernetes Service (AKS) to deploy an application in the form of a Helm package. We will use Codefresh for CI/CD and also take a brief look at Draft for creating Helm charts.

What you will learn

  • How to create a Kubernetes cluster in Azure
  • How to install Helm – the Kubernetes package manager – on the cluster
  • How to add/view/edit the cluster in Codefresh
  • How to use Draft to create a starting Helm chart for the application
  • How to push Helm charts with Codefresh
  • How to deploy Helm charts with Codefresh

Prerequisites

If you want to follow along as you read the tutorial, you will need:

  • A free Codefresh Account
  • An Azure subscription (trial should work fine)
  • A Github repository with your application (Codefresh also supports Gitlab and Bitbucket)

The programming language of your application does not really matter, as long as it comes with a Dockerfile. Even if you don’t have a Dockerfile at hand we will see how you can create one easily with Draft.

Creating a Kubernetes cluster in Azure

The documentation for AKS explains how you can create a cluster using the command line or the GUI of Azure Portal. Both ways are valid but since we are also going to use Helm from the command line, it makes sense to dive into the terminal right away. For the sake of convenience, however, we will use the terminal offered in the Azure portal (Azure cloud shell) which offers some nice features such as automatic authentication as well as preinstalled kubectl and helm executables.

Login into Azure portal with your account and launch the cloud shell from the top right of the GUI.

Azure Cloud shell

After a few moments, you should have access to the command line right from your browser!

Before you create your first Kubernetes cluster in Azure you need to understand the concept of Resource Groups. Everything in Azure is created under a Resource Group. If you don’t have a Resource Group yet you need to create one first following the documentation.

A Resource Group was already available for the purposes of this tutorial and therefore creating an Azure cluster was a single line in the terminal:

az aks create --resource-group kostis-rg --name myDemoAKSCluster --node-count 1 --generate-ssh-keys

Adjust the node count according to your preference but be aware that this affects billing. In this tutorial we’ll use one, but adding more won’t change how you interact with Kubernetes.

After a few minutes, the cluster will be created and you can even visit it from the GUI and see its details.

Azure Kubernetes cluster

The next step is to set up kubectl access to the cluster. Kubectl is already available in the command line, but you still need to point its config at the cluster we just created. The command that does this uses the Azure CLI:

az aks get-credentials --resource-group kostis-rg --name myDemoAKSCluster

Once that is done, we have full access to the cluster with kubectl. We can now enter any of the familiar commands and see the resources of the cluster (pods, deployments, services, etc.).
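For example, the following commands list the nodes of the cluster and the system pods that come preinstalled with it:

kubectl get nodes
kubectl get pods --all-namespaces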

Connecting the Azure Kubernetes cluster to Codefresh

Codefresh has native support for Azure Kubernetes clusters. This includes a dashboard that can be used both to view existing resources and to modify them, as well as built-in deploy steps for pipelines.

To connect the Azure Kubernetes cluster, log in to your Codefresh account and click Kubernetes on the left sidebar. Then click “Add Cluster”.

Codefresh integrations

From the drop-down menu, select Microsoft Azure and enter your cluster credentials (URL, certificate, and token). To obtain these values, the Codefresh documentation explains the kubectl commands you need to execute. You can run them directly in the Azure cloud shell, since, as we saw in the previous section, kubectl already points at the newly created cluster.
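For reference, the commands look roughly like the sketch below. This assumes you are reading the credentials of the default service account in the default namespace; the Codefresh documentation remains the authoritative source for the exact commands it expects:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # cluster URL
kubectl get secret $(kubectl get sa default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.ca\.crt}'    # certificate (base64-encoded)
kubectl get secret $(kubectl get sa default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode    # bearer token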

Once all the details are filled in, test your settings and save the cluster configuration. If everything goes well, you will be able to click on Kubernetes on the left sidebar and view all the cluster details within Codefresh.

Kubernetes Dashboard

The last step is to create a registry pull secret so that the cluster can pull Docker images from the Codefresh Registry. Each Codefresh account comes with a built-in free Docker registry that can be used for image storage. Of course, you can also use your own external registry instead. (Azure offers a Docker Registry as well.)
First, you need to create a token for the Codefresh registry, then execute the following kubectl command in the Azure cloud shell:

kubectl create secret docker-registry my-codefresh-registry --docker-server=r.cfcr.io --docker-username=<codefresh_username> --docker-password=<codefresh_token> --docker-email=<email>

Of course, you need to enter your own values here. Note also that this secret is created in the default namespace; if you want to use another namespace, add it as a parameter to the command above.

Tip: If you don’t want to use the command line to create the secret, you can go to the Kubernetes dashboard and click “Add Service”, then click “Image Pull Secret” to add a secret for any of your connected repositories.

That’s it! The Kubernetes cluster is now ready to deploy Docker applications. At this point, we could use plain Kubernetes manifests to deploy our application. In this tutorial, however, we will see how we can use Helm.

Installing Helm on the Azure Kubernetes cluster

Helm is a package manager for Kubernetes. It offers several extra features on top of vanilla Kubernetes deployments, some of which are:

  • The ability to group multiple Kubernetes manifests together and treat them as a single entity (for deployments, rollbacks, and storage).
  • Built-in templating for Kubernetes manifests (see the fragment after this list), putting an end to the custom template systems you might otherwise use for replacing things such as the Docker tag inside a manifest.
  • The ability to package collections of manifests as Charts, which contain the templates as well as default values.
  • The ability to create catalogs of applications (Helm repositories) that function similarly to traditional package repositories (think deb, rpm, nuget, brew, npm, etc.).
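As a quick illustration of the templating point above, here is a hypothetical fragment of a chart template; the image.repository and image.tag value names follow the usual Helm chart convention and are not taken from this tutorial’s chart:

# templates/deployment.yaml (illustrative fragment)
spec:
  containers:
    - name: {{ .Chart.Name }}
      # repository and tag are resolved from values.yaml or from --set overrides at install time
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"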

Helm comes in two parts: the client (helm) and a server component called Tiller. The Azure cloud shell comes with the client already preinstalled, and the client can be used to install the server part as well. Therefore, just execute the following in the Azure cloud shell:

helm init

The helm executable uses the same configuration as kubectl. Because kubectl was already configured to point at our Kubernetes cluster at the beginning of this tutorial, the helm init command works out of the box without any extra configuration.
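Once the Tiller pod is ready, you can verify the installation; helm version prints both the client and the server version:

helm version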

Codefresh also comes with a Helm dashboard that you can now visit. It should be empty because we haven’t deployed anything yet.

Empty Helm Dashboard

Creating a starter Helm chart using Draft

Creating a Helm template and its associated default values is a well-documented process. If you are already familiar with Helm templates and your application already has its own chart, you can skip this section.

In this tutorial, however, we will take the easy route and auto-generate a chart that will serve as a starting point for the Helm package. We will use Draft for this purpose.

Draft is a tool geared towards local Kubernetes deployments (i.e., iterating on your application before you actually commit your code), such as with Minikube. It is developed by the same team that develops Helm, so the two play along very well together.

We will explore the deployment capabilities of Draft in a future article. For now, we will use its ability to create a Helm chart for any of the supported languages.

Download Draft for your OS and then execute it at the root directory of your project:

draft create

This will create a default Helm chart under the charts folder. Notice that Draft will even create a Dockerfile for your application if it doesn’t contain one already.

Edit the values.yaml file to your preference. At the very least you should change:

  • The ports for the Kubernetes service
  • The type of Service from ClusterIP to LoadBalancer

You can also add the Docker repository/tag values, although this is not strictly necessary as Codefresh will do this for us during the actual deployment.
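As a rough sketch, the edited part of values.yaml might look like the excerpt below. The exact keys depend on the chart Draft generated for you (older Draft packs used externalPort/internalPort; other charts use a single port), so adapt accordingly; the repository value assumes the Codefresh registry:

# values.yaml (illustrative excerpt)
image:
  repository: r.cfcr.io/kostis-codefresh/python-flask-sampleapp
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: LoadBalancer    # changed from ClusterIP so the app gets an external IP
  externalPort: 5000    # port exposed by the Kubernetes service
  internalPort: 5000    # port the container listens on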

Once you are finished, commit the charts folder to your Git repository.

Preparing a Helm Chart with Codefresh

Apart from a built-in Docker registry, all Codefresh accounts also include an integrated Helm Repository. A Helm Repository can be used to catalog your own Helm Charts in addition to the public charts already included with Codefresh from KubeApps.

To push the Helm Chart, create a new Codefresh Pipeline with the following yml build file:

version: '1.0'
steps:
  BuildingDockerImage:
    title: Building Docker Image
    type: build
    image_name: kostis-codefresh/python-flask-sampleapp
    working_directory: ./
    tag: ${}
    dockerfile: Dockerfile
  StoreChart:
    title: Storing Helm Chart
    image: 'codefresh/cfstep-helm:2.9.1'
    environment:
      - ACTION=push
      - CHART_REF=charts/python

This yml file contains two steps. The first step, BuildingDockerImage, creates a Docker image from a Dockerfile that exists at the root of the project.

The second step uses the premade Codefresh Helm Docker image and pushes the Helm chart located at charts/python under the root folder. The names of the steps are arbitrary.

To give this pipeline access to the internal Helm Repository, you also need to import its configuration. Select “import from shared configuration” in your Codefresh pipeline and choose the CF_HELM_DEFAULT config.

Codefresh Helm configuration

After you execute the pipeline, you will see the chart being pushed to the Codefresh Helm Repository. To browse this repository, select Kubernetes->Helm Releases from the left sidebar and expand the CF_HELM_DEFAULT repo.

Codefresh Helm repository

You can also manually deploy this chart to your Azure cluster by clicking the Install icon as shown on the right-hand side. Even though this is a convenient way to deploy applications via the Codefresh GUI, it is best if we fully automate this process as you will see in the next section.

Automatically deploying a Helm Chart with Codefresh

In the previous section, we saw how you can store the Helm chart in the integrated Codefresh Helm repository. The final step is to actually deploy the application and get a full CI/CD pipeline. Here are the respective pipeline steps:

  DeployChart:
    image: 'codefresh/cfstep-helm:2.9.1'
    environment:
      - CHART_REF=charts/python
      - RELEASE_NAME=mypython-chart-prod
      - KUBE_CONTEXT=myDemoAKSCluster
      - VALUE_image_pullPolicy=Always
      - VALUE_image_tag=${}

This is a Helm step as before, but with the following parameters:

  • We define again which chart we will deploy
  • We select a release name. (This is optional; if not provided, Helm will autogenerate one for us using a funky naming pattern.)
  • The Kube context selects the Azure Cluster as the target of the deployment (Codefresh supports linking multiple Kubernetes clusters from multiple cloud providers).

The last two lines show the power of Helm. Instead of the custom replacement scripts or external templating systems that plain Kubernetes manifests require, Helm has built-in templating. These two lines override the default values of the chart.

We change the pull policy to Always (by default it was IfNotPresent) and we also make sure to use the Docker image from the branch we are building. This is just a suggestion; you can use any other combination of tags on your Docker images. You can find all built-in Codefresh variables in the documentation.
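Under the hood, the VALUE_ variables are passed to Helm as --set overrides, so the step is roughly equivalent to the following command (a sketch using the names from this tutorial; the actual tag comes from the pipeline variable):

helm upgrade --install mypython-chart-prod charts/python --set image.pullPolicy=Always --set image.tag=<docker-tag>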

Once the application is deployed, you should be able to access it from its external endpoint. If you visit the Codefresh Helm dashboard again, you should now see the release along with its templates and values.

Codefresh Helm release

Congratulations! You have now deployed a Helm chart in the Azure Kubernetes cluster and the application is ready to be used.

You can also see the release from the Azure Cloud shell if you execute:

helm list

This will print out the release details in the terminal.

Rolling back deployments without re-running the pipelines

Another big advantage of Helm is the way it gives you easy rollbacks for free. If you make some commits in your project, Helm keeps the same release and adds new revisions to it.

You can easily rollback to any previous version without actually re-running the pipeline.

Helm Rollback

The server part of Helm keeps a history of all releases and knows the exact contents of each respective Helm package.

Codefresh allows you to do this right from the GUI. Select the History tab in the Helm release, and from the list of revisions choose any of them as the rollback target. Notice that rolling back will actually create a new revision, so you can go backward and forward in time to any revision.
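If you prefer the terminal, the same operations are available from the Helm client itself; a sketch using the release name from this tutorial:

helm history mypython-chart-prod     # list all revisions of the release
helm rollback mypython-chart-prod 1  # roll back to revision 1 (this creates a new revision)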

Conclusion

We have reached the end of this tutorial. You have now seen:

  • How to create an Azure Kubernetes cluster
  • How to install Helm
  • How to give the Azure cluster access to the Codefresh Registry
  • How to connect the Azure cluster to Codefresh
  • How to inspect Kubernetes resources and Helm charts/releases from the Codefresh GUI
  • How to create a Helm chart using Draft and how to edit default values
  • How to use Codefresh to store the chart in the integrated Helm repository
  • How to use Codefresh to deploy applications to the Azure cluster from a Helm chart
  • How to easily rollback Helm releases

Now you know how to create a complete CI/CD pipeline that deploys applications to Kubernetes using Helm packaging.

New to Codefresh? Create Your Free Account Today!

Source