How Giant Swarm Enables a New Workflow

By now we all know that Amazon AWS changed computing forever, and that it actually started as an internal service. The reason for the existence of AWS is pretty easy to understand once you understand Jeff Bezos and Amazon. Sit tight.

Jeff and his team deeply believe in the two-pizza team rule, meaning that you need to be able to feed a team with two pizzas or it is too big. This is due to the math behind communication, namely the fact that the number of communication links in a group of n team members is n(n-1)/2:

In a team of 10, there are 45 possible links, that is, 45 possible communication paths. At 20 there are 190, and at 100 people there are 4,950. You get the idea. You need to allow a small team to be in full control, and that is really where DevOps comes from: you build it, you run it. And if you want to make your corporate overlords truly tremble in fear, add a third point: you decide it.

The problem Amazon had, though, was that their teams were losing a lot of time because they had to take care of the servers running their applications, and that part of the equation was just not integrated into their workflow yet. “Taking care of servers” was a totally separate thing from the rest of their work, where one (micro-)service simply talked to another service when needed. The answer, in the end, was simple: make infrastructure code and give those teams APIs to control compute resources, creating an abstraction layer over the servers. There should be no difference between talking to a service built by another team, calling the API of a message queue, charging a credit card, or starting a few servers.

This allows for a lot of efficiency on both sides and is great. Developers have a nice API and the Server Operations people can do whatever needs to be done as long as they keep the API stable.

Everything becomes part of the workflow. And once you have it internally as a service, there is no reason not to make it public and thereby get better utilization of your servers.

Kubernetes Appears on the Scene

Now think about how Kubernetes has started to gain traction within bigger companies. It actually normally starts out with a team somewhere that installs Kubernetes however they want, sometimes as a strategic DevOps decision. Of course, these teams would never think about buying their own servers and building their own datacenter, but as K8s is code, it is seen as being more on the developer side. This means you end up with a disparate set of K8s installations until the infrastructure team gets interested and wants to provide it centrally.

The corporation might think that by providing a centralized K8s they are doing what Amazon did and being API-driven, but that is not the Amazon way. The Amazon way, the right way, is to provide an API to start a K8s cluster and abstract all other things, like security and storage provisioning, away as far as possible. For efficiency, you might want to provide a bigger production cluster at some point, but first and foremost, this is about development speed.

Giant Swarm – Your Kubernetes Provisioning API

This is where the Giant Swarm Platform comes in, soon including more managed services around it. Be it in the cloud or on-premises, we offer you an API that allows teams to start as many of their own K8s clusters, in a specific and clear-cut version, as they see fit, integrating the provisioning of K8s right into their workflows. The infrastructure team, or cluster operations team as we tend to call them, makes sure that all security requirements are met, provides some tooling around the clusters like CI/CD, possibly supplies versioned Helm chart templates, and so on. This is probably worth a totally separate post.

At the same time, Giant Swarm provides you with fully managed Kubernetes clusters, keeping them always up to date with the latest security fixes and in-place upgrades, so you are not left with a myriad of different versions run by different teams in different locations. Giant Swarm clusters of one version always look the same. “Cloud Native projects on demand at scale in consistently high quality”, as one of our partners said.

Through Giant Swarm, customers can put their teams back in full control, going as far as allowing them to integrate Giant Swarm into their CI/CD pipelines and quickly launch and tear down test clusters on demand. They can give those teams the freedom they planned for by letting them launch their own K8s clusters themselves, without having to request them somewhere, while keeping full control of how these clusters are secured, versioned, and managed. That way they know that applications can move easily through their entire K8s ecosystem, across different countries and locations.

Giant Swarm is the Amazon EC2 for Kubernetes in any location: API-driven Kubernetes, where teams stay in control and can really live DevOps and the API-driven mindset. Request your free trial of the Giant Swarm Infrastructure here.

Source

Docker vs. Kubernetes: Is Infrastructure Still At War?

Jul 31, 2018

by Pini Reznik

Docker and Kubernetes are undoubtedly the biggest names in Cloud Native infrastructure right now. But: are they competing technologies, or complementary ones? Do we need to worry about joining the right side? Is there risk here for enterprises, or is the war already won? If so, who won it?

Some context for this query is in order. Three years ago, war was raging inside the data centre. By introducing a whole new way of packaging and running applications called “containers,” Docker created the Cloud Native movement. Of course other companies quickly began producing rival container engines to compete with Docker. Who was going to win?

As if the tournament of container systems was not enough, Docker had to fight another, simultaneous, battle. Once containers and container packaging arrived, the obvious next killer app had to be scheduling and orchestrating them. Docker introduced its own Swarm scheduler, which jousted with rival tools from Google (Kubernetes), Mesosphere (Mesos), and HashiCorp (Nomad), as well as a hosted solution from AWS (ECS).

These wars were a risky time for an enterprise wanting to go Cloud Native. First, which platform and tooling to choose? And, once chosen, what if it ultimately lost out to the competition and disappeared? The switching costs for all of these products were very high, so the cost of making the wrong choice would be significant. Unfortunately, while so many contenders vied for primacy in such an emergent sector, the likelihood of making that wrong choice was dangerously high.

So what happened? Who won? Well, both wars got completely and soundly won — and by different combatants.

Docker won a decisive victory in the container engine war. There is one container engine and they are it.

Kubernetes, which was finally released by Google as its own open source entity, has won the scheduler wars. There may yet be a few skirmishers about, but Kubernetes is basically THE orchestrator.

The final result is that any Cloud Native system is likely to utilize Docker as the container engine and Kubernetes as the orchestrator. The two have effectively become complementary: Shoes and socks. Bows and arrows. Impossible to think of one without the other.

Does this mean we have no decisions left to make?

Unfortunately things are not yet completely simple and decided. Yes, there are still decisions to make, with new factors which must now be considered. Looking over the current container landscape we still see all kinds of competitors.

First, there are a large number of container platform products designed to abstract the orchestrator and container engine away from you. Among these are Docker EE, DC/OS from Mesosphere, and OpenShift from Red Hat.

Second, there are now managed container services from many of the original players, like EKS from Amazon, Google’s GKE, and Rancher Labs’ offerings. In addition we’ve also got newcomer offerings like AKS from Microsoft and Cisco’s Container Platform.

So, that sounds like we have even more options, and even more work to do when choosing our Cloud Native platform and services.

No, The Situation Has Changed

While it is true that there are even more platforms to choose between now than three years ago, the situation is actually very different now. All of the current platforms are based on Kubernetes. This alone makes them far more similar than the three pioneering orchestration options — Kubernetes, Mesos and Swarm — were in the past. Today’s choices may vary somewhat on small details, like slightly different costs, approaches to security, operational complexity, etc. Fundamentally, though, they are all pretty similar under the hood. This means one big important change: the switching cost is vastly less.

So, for example, imagine you pick EKS to start out on. One year down the line you decide you’d like to manage things yourself and want to move to OpenShift. No problem. The transition costs are not zero, but they are also not severe. You can afford to change your mind. You are not held prisoner with your platform decision by fear of retooling costs.

That said, it is important to recognize that there can be additional costs in changing between platforms when that means moving away from nonstandard or proprietary tools bundled with the original choice. Services that are simple to use on the public cloud service you started with may not be so simple, or even available at all, on others. A significant example of this is services like DynamoDB, Amazon’s proprietary database service; while easily consumable on AWS, it can only be used there. Same with OpenShift’s Source to Image tool for creating Docker images.

Basically, to reduce the cost (risk) of future migration, it is advisable to use, from the start, standard tools with standard formats, such as the native Kubernetes API, or tools like Istio (which sits on top of K8s and seems to be leading the service mesh market) that work on all the platforms.

Other than that you should be fine. You can move. The cost of being wrong has been dramatically reduced.

This is such great news for enterprises that we shall say it again: You can move! In the early days, there used to be a very difficult decision to make. Should you move quickly and risk getting locked into what could become a dead end technology…Or wait until it’s safe, but suffer opportunity costs?

No more tradeoffs. Now that the risks are reduced, there is no need to wait any more. Pick your platform!

 

Source

Introduction to Container Security | Rancher Labs

 


Containers are still a relatively new technology, but they have already had a massive impact on software development and delivery. Companies all around the world are starting to migrate towards microservices, and containers enable developers to quickly spin up services with minimal effort. In the past it took a fair amount of time to download, install, and configure software, but now you can take advantage of solutions that have already been packaged up for you and only require a single command to run. These packages, known as images, are easily extensible and configurable so that teams can customize them to suit their needs and reuse them for many projects. Companies like Docker and Google helped to make containers easier to use and deploy, introducing orchestration tools like Docker Compose, Docker Swarm, and Kubernetes.

While the usefulness and power of containers continues to grow, security is still something that prevents containers from receiving wider adoption. In this article, we’re going to take a look at the security mechanisms of containers, some of the big issues with container security, and some of the methods that you can use to address those issues.

Cgroups and Namespaces

We will begin by talking about two of the Linux kernel features that helped make containers as we know them today possible. The first of these features is called cgroups. It was developed as a way to group processes and provide more control over the resources available to the group, such as CPU, memory, and I/O. It also allows for better accounting of usage, such as when teams need to report usage for billing purposes. The cgroups feature allows containers to scale in a controllable way and have a predictable capacity. It is also a good security feature, because processes running in containers cannot easily consume all the resources on a system – for example, by mounting a denial-of-service attack that starves other processes of required resources.
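In Kubernetes terms, cgroup limits surface as resource requests and limits on a container. A minimal sketch, with an illustrative pod name and image:

apiVersion: v1
kind: Pod
metadata:
  name: limited-app            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.15          # illustrative image
    resources:
      requests:                # what the scheduler reserves for the container
        cpu: "250m"
        memory: "128Mi"
      limits:                  # cgroup ceilings enforced by the runtime
        cpu: "500m"
        memory: "256Mi"

A container that exceeds its memory limit is killed, and CPU usage beyond the limit is throttled, which is exactly the resource-starvation protection described above.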

The other feature is called namespaces, which essentially allows a process to have its own dedicated set of resources, such as files, users, process ids, and hostnames (there are different namespace types for each of the resource types). In a lot of ways, this can make a container process seem like a virtual machine, but the process still executes system calls on the host kernel.

Namespaces limit what the process running inside a container can see and do. Container processes can only see processes running in the same namespace (in general, containers run only a single process, but it is possible to have more than one). These processes see a filesystem which is a small subset of the real filesystem. User ids inside the container can be mapped from different ids outside the container (you could make the root user have user id 0 inside the container while it actually has user id 1099 outside the container, thus appearing to grant administrative control without actually doing so). This feature allows containers to isolate processes, making them more secure than they would normally be.

Privileged Containers

When you are running processes in containers, there is sometimes a need to do things that do require elevated privileges. A good example is running a web server that needs to listen on a privileged port, such as 80. Ports under 1024 are privileged and usually assigned to more sensitive network processes such as mail, secure shell access, HTTP, and network time synchronization. Opening these ports requires elevated access as a security feature so that rogue processes can’t just open them up and masquerade as legitimate ones. If you wanted to run an Apache server (which is often used as a secure entry point to an application) in a container and listen on port 80, you would need to give that container privileged access.

The problem with giving a container elevated rights is that it makes it less secure in a lot of different ways. Your intent was to give the process the ability to open a privileged port, but now the process has the ability to do other things that require privileged access. The limitations imposed by the cgroups controller have been lifted, and the process can do almost anything that is possible to do running outside the container. To avoid this issue, it is possible to map a non-privileged port outside the container to a privileged port inside the container. For example, you map port 8080 on the host to port 80 inside the container. This will allow you to run processes that normally require privileged ports without actually giving them privileged access.
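The Kubernetes analogue of that host-to-container port mapping is to keep the container listening on an unprivileged port and let a Service expose port 80. A sketch, assuming the web server pods carry an illustrative app: web label:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # matches the web server pods
  ports:
  - port: 80                 # port clients connect to
    targetPort: 8080         # unprivileged port the container listens on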

Seccomp Profiles

Seccomp and seccomp-bpf are Linux kernel features that allow you to restrict the system calls that a process can make. Docker allows you to define seccomp security profiles to do the same for processes running inside a container. The default seccomp profile for Docker disables around 40 system calls to provide a baseline level of security. Profiles are defined in JSON and whitelist the allowed calls (any call not listed is prohibited). This whitelisting approach is safer because newly added system calls do not become available until they are explicitly added to the whitelist.

The issue with these seccomp profiles is that they must be specified at the start of the container and are difficult to manage. Detailed knowledge of the available Linux system calls is required to create effective profiles, and it can be difficult to find the balance between a policy too restrictive (preventing some applications from running) and a policy too flexible (possibly creating an unnecessary security risk).
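For reference, current Kubernetes releases (newer than the tooling this article was written against) let you attach a seccomp profile through the pod securityContext rather than through Docker flags; a minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo           # illustrative
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault     # use the container runtime's default profile
      # or reference a custom JSON profile stored on the node:
      # type: Localhost
      # localhostProfile: profiles/my-app.json
  containers:
  - name: app
    image: nginx:1.15          # illustrative image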

Capabilities

Capabilities are another way of specifying privileges that need to be available to a process running in a container. The advantage of capabilities is that groups of permissions are bundled together into meaningful groups which makes it easier to collect the privileges required for doing common tasks.

In Docker, a large number of capabilities are enabled by default and can be dropped, such as the ability to change owners of files, open up raw sockets, kill processes, or run processes as other users using setuid or setgid.

More advanced capabilities can be added, such as the ability to load and unload kernel modules, override resource limits, set the system clock, make socket broadcasts and listen to multicasts, and perform various system administration operations.
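The Kubernetes equivalent of Docker's --cap-add/--cap-drop flags lives in the container securityContext; a sketch that drops everything and adds back only the ability to bind a privileged port (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: capabilities-demo
spec:
  containers:
  - name: web
    image: nginx:1.15
    securityContext:
      capabilities:
        drop: ["ALL"]                 # start with no capabilities
        add: ["NET_BIND_SERVICE"]     # only what the workload needs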

Using capabilities is much more secure than simply running a container as privileged, and a lot easier to manage than using seccomp profiles. Next we’ll talk about some system wide security controls that can also apply to containers.

SELinux and AppArmor

A lot of the security concerns for processes running in containers apply to processes on a host in general, and a couple of different security tools have been written to address the issue of better controlling what processes can do.

SELinux is a Linux kernel security module that provides a mandatory access control (MAC) mechanism for providing stricter security enforcement. SELinux defines a set of users, roles, and domains that can be mapped to the actual system users and groups. Processes are mapped to a combination of user, role, and domain, and policies define exactly what can be done based on the combinations used.

AppArmor is a similar MAC mechanism that aims to confine programs to a limited set of resources. AppArmor is more focused on binding access controls to programs rather than users. It also combines capabilities with the ability to define access to resources by path.

These solutions allow for fine-grained control over what processes are allowed to do and make it possible to lock processes down to the bare minimum set of privileges required to run. The issue with solutions like these is that the policies can take a long time to develop and tune properly.

Policies that are too strict will block a lot of applications that may expect to have more privileges than they really need. Policies that are too loose are effectively lowering the overall level of security on the system. A lot of companies would like to use these solutions, but they are simply too difficult to maintain.

Some Final Thoughts

Container security controls are an interesting subject that goes back to the beginning of containers with cgroups and namespaces. Because of a lot of the things we want to do with containers, extending privileges is often something we must do. The easiest and least secure approach is simply using privileged containers, but we can do a lot better by using capabilities. More advanced techniques like seccomp profiles, SELinux, or AppArmor allow more fine-grained control but require more effort to manage. The key is to find a balance between giving the process the least amount of privileges possible and the ease with which that can be done.

Containers are, however, a quickly evolving technology, and with security becoming more and more important as a focus in software engineering, we should see better controls continue to emerge, especially for large organizations which may have hundreds or thousands of containers to manage. The platforms that make managing so many containers possible are likely to guide the way in building the next generation of security controls. Some of those controls will likely be new Linux kernel features, and we may even see a hybrid approach where containers use a virtual kernel instead of the real one to provide even more security. The future of container security is looking promising.

Jeffrey Poore

Senior App Architect and Manager

Source

Happy birthday, Kubernetes: Here’s to three years of collaborative innovation

Three years ago, at the 1.0 launch day at OSCON, the community celebrated the first production-ready release of Kubernetes, now a de facto standard system for container orchestration. Today we celebrate Kubernetes not only to acknowledge the project’s birthday but also to thank the community for the extensive work and collaboration that drives the project forward.

Let’s look back at what has made this one of the fastest moving modern open source projects, how we arrived at production maturity, and look forward to what’s to come.

Kubernetes: A look back

You’ve probably heard by now that Kubernetes was a project born at Google and based on the company’s internal infrastructure known as Borg. Early on, Google introduced the project to Red Hat and asked us to participate in it and help build out the community. In 2014, Kubernetes saw its first alpha release by a team of engineers who sought to simplify systems orchestration and management by decoupling applications and infrastructure and by also decoupling applications from their own components.

At the same time, enterprises around the world were increasingly faced with the pressure to innovate more quickly and bring new, differentiated applications to bear in crowded marketplaces. Industry interest began to consolidate around Kubernetes, thanks to its capacity for supporting rapid, iterative software development and the development of applications that could enable change across environments, from on-premise to the public cloud.

Before Kubernetes, the IT world attempted to address these enterprise needs with traditional Platform-as-a-Service (PaaS) offerings, but frequently these solutions were too opinionated in terms of the types of applications that you could run on them and how those applications were built and deployed. Kubernetes provided a much more unopinionated, open platform that enabled customers to deploy a broader range of applications with greater flexibility; as a result, Kubernetes has been used as a core building block for both Containers-as-a-Service (CaaS) and PaaS-based platforms.

In July 2015, Kubernetes 1.0 was released and the Cloud Native Computing Foundation (CNCF) was born, a vendor-neutral governing body intended to host Kubernetes and related ecosystem projects. Red Hat was a founding member at the CNCF’s launch and we are pleased to see its growth. We also continue to support and contribute to the Kubernetes upstream, much as we did even pre-CNCF, and are excited to be a part of these critical milestones.

Dive in more with this explainer from Brendan Burns, a creator of Kubernetes, from CoreOS Fest in 2015 for a brief technical picture of Kubernetes.

Kubernetes as the new Linux of the Cloud

So what makes Kubernetes so popular?

It is the demand for organizations to move to hybrid cloud and multi-cloud infrastructure. It is the demand for applications paired with the need to support cloud-native and traditional applications on the same platform. It is the desire to manage distributed systems with containerized software and a microservices infrastructure. It is the need for developers and administrators to focus on innovation rather than just keeping the lights on.

Kubernetes has many of the answers to address these demands. The project now provides the ability to self-heal when there is a problem, separate developer and operational concerns, and update itself in near real-time.

And, it’s all open source, which makes it available to all and enables contributors all around the world to better solve this next era of computing challenges together, in the open, unbeholden to siloed environments or proprietary platforms.

Pushing the project forward is the Kubernetes community, a diverse community with innovative ideas, discipline, maintenance, and a consensus-driven decision model. In more than 25 years of contributing to open source projects, ranging from the Linux kernel to OpenStack, we’ve seen few projects that can claim the velocity of Kubernetes. It is a testament to the project’s contributors’ ability to work collaboratively to solve a broad enterprise need that Kubernetes has moved so quickly from 1.0 to broad industry support in three years.

Kubernetes has won over the support of hundreds of individuals and companies, both large and small, including major cloud providers. Red Hat has been contributing to the project since it was open sourced in 2014, and today is the second leading corporate contributor (behind only Google) working on and contributing code to the project. Not to mention, we are the experts behind Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform.

We’ve observed that Kubernetes is becoming nearly as ubiquitous as Linux in the enterprise IT landscape, with the potential to serve as the basis for nearly any type of IT initiative.

Kubernetes Major Milestones

Some major milestones over the years to note include contributions making it more extensible:

  • September 2015: Kubernetes’s use of the Container Network Interface (CNI) has enabled a rich ecosystem of networking options since the early days of Kubernetes.
  • December 2016: The addition of the Container Runtime Interface (CRI), the way containers start and stop, was a major step forward in Kubernetes 1.5 and on, and helped move towards OCI-compliant containers and tooling.
  • January 2017: etcd v3, a backbone of large-scale distributed systems created by CoreOS and maintained at Red Hat, came into production in Kubernetes 1.6.
  • June 2017: Custom resource definitions (CRD) were introduced to enable API extension by those outside the core platform.
  • October 2017: The stability of role-based access control (RBAC), which lets admins control access to the Kubernetes API, made Kubernetes even more dependable with this security feature enterprises care about. It reached stable in Kubernetes 1.8 but had been widely used in the platform since 1.3.
  • March 2018: How storage is provided and consumed has moved along well in three years with the availability of local persistent volumes (PVs) and dynamically provisioned PVs. Notably, at this time, the Container Storage Interface (CSI), which makes the Kubernetes volume plugin layer more extensible, moved to beta this year in Kubernetes 1.10.
  • June 2018: Custom Resource Definition (CRD) versioning was introduced as beta in Kubernetes 1.11 and is a move toward lowering the barrier and making it easier for you to start building Operators.

Check out some other notable mentions from last year.

Honorable Mentions

But what about the heroic parts of Kubernetes that may not get enough applause? Here are some honorable mentions from our team.

Kubernetes is built for workload diversity

“The scope of the workloads that can/could/will potentially be tackled needs some appreciation,” said Scott McCarty, principal technology product manager, Linux containers at Red Hat. “It’s not just web workloads; it’s much more than that. Kubernetes today solves the 80/20 rule. Imagine what other workloads could come to the project.”

Kubernetes is focused

“The fact that it is focused and is a low-level tool, similar to docker containers or the Linux kernel, is what makes it so broadly exciting. It’s also what makes it not a solution by itself,” said Brian Gracely, director of OpenShift product strategy at Red Hat. “The fact that it’s not a PaaS, and is built to be multi-cloud driven makes it widely usable.”

Kubernetes is extensible

“As Kubernetes matures, the project has shifted its attention to support a broad set of extension points that enable a vibrant ecosystem of solutions to build on top of the platform. Developers are able to extend the Kubernetes API, prove out the pattern, and contribute it back to the broader community,” said Derek Carr, senior principal software engineer and OpenShift architect at Red Hat, and Kubernetes Steering Committee member and SIG-Node co-lead.

Kubernetes is all about collaboration

“At three years old, Kubernetes is now proving itself as one of the most successful open source collaboration efforts since Linux. Having learned lessons from prior large scale, cross-community collaboration initiatives, such as OpenStack, the Kubernetes community has managed to leapfrog to a new level of effective governance that embraces diversity and an ethos of openness – all of which has driven incredible amounts of innovation into all aspects of the project,” said Diane Mueller, director, community development, Red Hat Cloud Platform.

The Next Frontier

Kubernetes is being used in production by many companies globally, with Red Hat OpenShift Container Platform providing a powerful choice for organizations looking to embrace Kubernetes for mission-critical roles, but we expect still more innovation to come.

A major innovation on the rise is the Operator Framework that helps manage Kubernetes native applications in an effective, automated, and scalable way. Follow the project here: https://github.com/operator-framework.

If you want to learn more about Kubernetes in general, Brian Gracely discussed what’s next for Kubernetes, and you can learn more by listening to a recent webinar about what to look forward to.

Source

Codefresh adds native integration for Azure Kubernetes Service

Deploying an application to Kubernetes is a very easy process when you use Codefresh as your CI/CD solution. Codefresh comes with its own integrated Kubernetes dashboard that allows you to view pods, deployments, and services in a unified manner regardless of the cloud provider behind the cluster.

This makes Codefresh the perfect solution for multi-cloud installations, as you can gather all cluster information in a single view even when the clusters come from multiple providers. At the most basic level, you can add any Kubernetes cluster in Codefresh using its credentials (token, certificate, URL). This integration process is perfectly valid, but it involves some time-consuming manual steps to gather these credentials.

Today we are happy to announce that you can now add your Azure Kubernetes Service (AKS) cluster in a quicker way using native integration. The process is much simpler than the “generic” Kubernetes integration.

First, navigate to the integration screen from the left sidebar and select “Kubernetes”. Click the drop-down menu “add provider”. You will see the new option for “Azure AKS”.

Adding Azure cluster

Click the “Authenticate” button and enter your login information for Azure. You should also accept the permissions that Codefresh asks for.
At the time of writing, you will need a company/organizational account with Azure.

Once Codefresh gets the required permissions, you will see your Azure subscriptions and available clusters.

Selecting your cluster

Click “ADD” and Codefresh will show you the basic details of your cluster:

Basic cluster details

That’s it! The process of adding an Azure Kubernetes cluster is much easier now. There is no need to manually enter token and certificate information anymore.

The cluster is now available in the Kubernetes dashboard along with all your other clusters.

Kubernetes dashboard

You are now ready to start deploying your applications using Codefresh pipelines.

New to Codefresh? Create Your Free Account Today!

Source

Kubernetes v1.12: Introducing RuntimeClass

Author: Tim Allclair (Google)
Kubernetes originally launched with support for Docker containers running native applications on a Linux host. Starting with rkt in Kubernetes 1.3, more runtimes were coming, which led to the development of the Container Runtime Interface (CRI). Since then, the set of alternative runtimes has only expanded: projects like Kata Containers and gVisor were announced for stronger workload isolation, and Kubernetes’ Windows support has been steadily progressing.
With runtimes targeting so many different use cases, a clear need for mixed runtimes in a cluster arose. But all these different ways of running containers have brought a new set of problems to deal with:

  • How do users know which runtimes are available, and select the runtime for their workloads?
  • How do we ensure pods are scheduled to the nodes that support the desired runtime?
  • Which runtimes support which features, and how can we surface incompatibilities to the user?
  • How do we account for the varying resource overheads of the runtimes?

RuntimeClass aims to solve these issues.

RuntimeClass in Kubernetes 1.12

RuntimeClass was recently introduced as an alpha feature in Kubernetes 1.12. The initial implementation focuses on providing a runtime selection API, and paves the way to address the other open problems.
The RuntimeClass resource represents a container runtime supported in a Kubernetes cluster. The cluster provisioner sets up, configures, and defines the concrete runtimes backing the RuntimeClass. In its current form, a RuntimeClassSpec holds a single field, the RuntimeHandler. The RuntimeHandler is interpreted by the CRI implementation running on a node, and mapped to the actual runtime configuration. Meanwhile the PodSpec has been expanded with a new field, RuntimeClassName, which names the RuntimeClass that should be used to run the pod.
Why is RuntimeClass a pod level concept? The Kubernetes resource model expects certain resources to be shareable between containers in the pod. If the pod is made up of different containers with potentially different resource models, supporting the necessary level of resource sharing becomes very challenging. For example, it is extremely difficult to support a loopback (localhost) interface across a VM boundary, but this is a common model for communication between two containers in a pod.
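As a sketch of how the pieces fit together (shown here with the later stable node.k8s.io/v1 schema for brevity; the 1.12 alpha used a slightly different shape, and the handler and pod names are illustrative, depending on how the node's CRI implementation is configured):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor               # cluster-level name that users refer to
handler: runsc               # handler the CRI implementation maps to a runtime

---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod        # illustrative
spec:
  runtimeClassName: gvisor   # run this pod with the gvisor RuntimeClass
  containers:
  - name: app
    image: nginx:1.15        # illustrative image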

What’s next?

The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add NodeAffinity terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The Pod Overhead proposal was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
Many other RuntimeClass extensions have also been proposed, and will be revisited as the feature continues to develop and mature. A few more extensions that are being considered include:

  • Surfacing optional features supported by runtimes, and better visibility into errors caused by incompatible features.
  • Automatic runtime or feature discovery, to support scheduling decisions without manual configuration.
  • Standardized or conformant RuntimeClass names that define a set of properties that should be supported across clusters with RuntimeClasses of the same name.
  • Dynamic registration of additional runtimes, so users can install new runtimes on existing clusters with no downtime.
  • “Fitting” a RuntimeClass to a pod’s requirements. For instance, specifying runtime properties and letting the system match an appropriate RuntimeClass, rather than explicitly assigning a RuntimeClass by name.

RuntimeClass will be under active development at least through 2019, and we’re excited to see the feature take shape, starting with the RuntimeClass alpha in Kubernetes 1.12.

Learn More

Source

Introducing Jetstack’s Kubernetes for Application Developers Course // Jetstack Blog

9/Oct 2018

By Charlie Egan

Introduction

Our Kubernetes training programme forms a considerable part of our services at Jetstack. In 2017 alone we trained more than 1,000 engineers from over 50 different companies, and so far in 2018 we have already delivered over 60 courses. We are constantly making an effort to ensure that our course content is refined and up-to-date, and that it reflects both the real-world experience of our engineers and also the evolving Kubernetes ecosystem. 2018 has seen us dedicate a lot of time to our training programme: As well as maintaining our current courses, we have developed online resources available through Flight Deck, the Jetstack Subscription customer portal, for self-paced learning.

However, we had a recurring theme in feedback from attendees and customers. As more and more of them deployed Kubernetes, they wanted to learn how to make best use of the features it has to offer. For this reason, we decided to build the Application Developer’s course, to be announced at Google Cloud Next London ‘18. This blog post details our motivation for building the course, as well as some of the main topics.

Why Application Development?

Jetstack’s current course offering is largely aimed at those in an operational role – deploying and managing clusters. After the beginner level course, we immediately get stuck into more ‘cluster admin’ type tasks as well as the internals of Kubernetes itself.

With this new course, we’re introducing a new ‘track’ for a developer audience. This course is for developers building and architecting applications to be deployed on a Kubernetes-based platform.

Agenda

The course is very hands-on and entirely based around workshops where attendees extend a simple application to explore Kubernetes from an application developer’s perspective.

We start the day with a number of workshops on better integrating your applications with Kubernetes, covering topics such as probes and managing the pod environment. The course then progresses to cover a number of features that are specifically useful to application developers, including CronJobs, Lifecycle Hooks and Init Containers.
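As an idea of the kind of configuration the probes topic covers, here is a sketch with illustrative image, paths, ports, and timings:

apiVersion: v1
kind: Pod
metadata:
  name: probed-app              # illustrative
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image
    readinessProbe:             # gate traffic until the app reports ready
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:              # restart the container if it stops responding
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 30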

Further workshops are designed to be selected based on the interests of attendees with more in-depth workshops available on the following topics:

  • Wrapping an application in a Helm chart
  • Developing in-cluster with skaffold
  • Debugging with kubectl
  • Connecting to a database deployed with Helm
  • Logging and custom metrics

Kubernetes features for Application Developers

One highlight is the set of workshops covering features that can directly reduce complexity compared to a more traditional deployment.

As an example, CronJobs greatly simplify the common problem of running cron for recurring jobs on a fleet of autoscaling application instances. Solutions to handle duplicate jobs and the other issues that arise from this architecture are often handmade and a bit of a hack. In this workshop we see how much simpler this becomes when leveraging Kubernetes to run new containers for jobs on a schedule.

Recurring jobs are a feature that a huge number of applications need. Kubernetes offers a language-agnostic standard for running this type of work – it’s really valuable.
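A sketch of what such a job looks like (batch/v1 in current clusters, batch/v1beta1 at the time this course was written; the schedule and image are illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report            # illustrative
spec:
  schedule: "0 3 * * *"           # every day at 03:00
  concurrencyPolicy: Forbid       # skip a run if the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: registry.example.com/report-job:1.0   # illustrative image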

Conclusion

If you think that you and your team would benefit from this course, contact us to take part in the pilot scheme. We will be running one-day courses from November 2018.

Source

I am so proud of this company… – Heptio

It has been a truly amazing quarter for Heptio. Kubernetes has emerged as the de facto standard for container orchestration and Heptio has stepped up as a partner to many Fortune 500 companies on their cloud native journey. It is hard to believe that we started the company less than two years ago given the size and strength of the team — we recently passed our 100 employee threshold.

Today we were listed as a winner of the CNBC Upstart 100 award, following on the heels of being recognized with the Seattle Tech Impact Gold Award (in the cloud category). These accolades are greatly appreciated and are a credit to the amazing folks that joined our company. Our collective mission is to make enterprise developers more intrinsically productive and to empower operators. And we believe that the best way to deliver that impact is through a highly consistent multi-cloud and on-premises computing environment without fear of vendor lock-in.

Since announcing HKS (Heptio Kubernetes Subscription) earlier this year, Heptio has been selected as a partner to organizations large and small to help them understand the technology, align it with their internal operating practices and provide the mission critical support needed to bring it into production. We are humbled that a company of our age and size has a customer base that includes 3 of the 4 largest retailers in the world, 4 of the 5 largest telecommunication companies in the United States, and several global financial services institutions. Working alongside these customers has been immensely rewarding and their experiences fuel our future direction.

Looking forward we will continue to drive innovation in the open source ecosystem, and will continue to invest heavily in the OSS projects we have already created (collectively they have garnered more than 4700 stars on Github). We have established a strong release cadence with our existing technologies (with recent community excitement around the release of Ark 0.9, and Contour 0.6). You can expect to see some interesting new capabilities being shipped in the not too distant future.

A big thank you to the whole team at Heptio for another successful quarter. I could not be more excited about the coming months and years!

Source

Why You Should Not Neglect Your Developer’s Kubernetes Clusters

Image attribution below.

So you’ve finally succeeded in convincing your organization to use Kubernetes and you’ve even gotten first services in production. Congratulations!

You know uptime of your production workloads is of utmost importance so you set up your production cluster(s) to be as reliable as possible. You add all kinds of monitoring and alerting, so that if something breaks your SREs get notified and can fix it with the highest priority.

But this is expensive and you want to have staging and development clusters, too – maybe even some playgrounds. And as budgets are always tight, you start thinking…

What’s with DEV? Certainly can’t be as important as PROD, right? Wrong!

The main goal with all of these nice new buzzwordy technologies and processes was Developer Productivity. We want to empower developers and enable them to ship better software faster.

But if you put less importance on the reliability of your DEV clusters, you are basically saying “It’s ok to block my developers”, which indirectly translates to “It’s ok to pay good money for developers (internal and external) and let them sit around half a day without being able to work productively”.

Ah yes, the SAP DEV Cluster is also sooo important because of that many external and expensive consultants. Fix DEV first, than PROD which is earning all the money.

— Andreas Lehr (@shakalandy) September 13, 2018

Furthermore, no developer likes to hear that they are less important than your customers.

We consider our dev cluster a production environment, just for a different set of users (internal vs external).

— Greg Taylor (@gctaylor) September 18, 2018

What could go wrong?

Let’s look at some of the issues you could run into, when putting less importance on DEV, and the impact they might have.

I did not come up with these; we’ve witnessed all of them happen over the last 2+ years.

Scenario 1: K8s API of the DEV cluster is down

Your nicely built CI/CD pipeline is now spitting a mountain of errors. Almost all your developers are now blocked, as they can’t deploy and test anything they are building.

This is actually much more impactful in DEV than in production clusters as in PROD your most important assets are your workloads, and those should still be running when the Kubernetes API is down. That is, if you did not build any strong dependencies on the API. You might not be able to deploy a new version, but your workloads are fine.

Scenario 2: Cluster is full / Resource pressure

Some developers are now blocked from deploying their apps. And if they try (or the pipeline just pushes new versions), they might increase the resource pressure.

Pods start to get killed. Now your priority and QoS classes kick in – you did remember to set those, right? Or was that something that was not important in DEV? Hopefully, you have at least protected your Kubernetes components and critical addons. If not, you’ll see nodes going down, which again increases resource pressure. Thought DEV clusters could do with less buffer? Think again.

This sadly happens much more in DEV because of two things:

  1. Heavy CI running in DEV
  2. Less emphasis on clean definition of resources, priorities, and QoS classes.
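Protecting the components you care about under pressure is largely a matter of priority classes, so they are evicted last. A sketch, using the current scheduling.k8s.io/v1 API and illustrative names and values:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: dev-cluster-critical
value: 1000000                  # higher value = evicted later under pressure
globalDefault: false
description: "Critical addons in the DEV cluster"

---
apiVersion: v1
kind: Pod
metadata:
  name: ingress-controller      # illustrative
spec:
  priorityClassName: dev-cluster-critical
  containers:
  - name: controller
    image: nginx:1.15           # placeholder image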

Scenario 3: Critical addons failing

In most clusters, CNI and DNS are critical to your workloads. If you use an Ingress Controller to access them, then that counts also as critical. You’re really cutting edge and are already running a service mesh? Congratulations, you added another critical component (or rather a whole bunch of them – looking at you Istio).

Now if any of the above starts having issues (and they do partly depend on each other), you’ll start seeing workloads breaking left and right, or, in the case of the Ingress Controller, them not being reachable outside the cluster anymore. This might sound small on the impact scale, but just looking at our past postmortems, I must say that the Ingress Controller (we run the community NGINX variant) has the biggest share of them.

What happened?

A multitude of thinkable and unthinkable things can happen and lead to one of the scenarios above.

Most often we’ve seen issues arising because of misconfiguration of workloads. Maybe you’ve seen one of the below (the list is not exhaustive):

  • CI is running wild and filling up your cluster with Pods without any limits set
  • CI “DoSing” your API
  • Faulty TLS certs messing up your Ingress Controller
  • Java containers taking over whole nodes and killing them

Sharing DEV with a lot of teams? Gave each team cluster-admin rights? You’re in for some fun. We’ve seen pretty much anything, from “small” edits to the Ingress Controller template file, to someone accidentally deleting the whole cluster.

Conclusion

If it wasn’t clear from the above: DEV clusters are important!

Just consider this: if you use a cluster to work productively, then it should be considered as important in terms of reliability as PROD.

DEV clusters usually need to be reliable at all times. Having them reliable only at business hours is tricky. First, you might have distributed teams and externals working at odd hours. Second, an issue that happens at off-hours might just get bigger and then take longer to fix once business hours start. The latter is one of the reasons why we always do 24/7 support, even if we could offer only business hours support for a cheaper price.

Some things you should consider (not only for DEV):

  • Be aware of issues with resource pressure when sizing your clusters. Include buffers.
  • Separate teams with namespaces (with access controls!) or even different clusters to decrease the blast radius of misuse.
  • Configure your workloads with the right requests and limits (especially for CI jobs!) – a sketch of enforcing this per team namespace follows below.
  • Harden your Kubernetes and Addon components against resource pressure.
  • Restrict access to critical components and do not give out cluster-admin credentials.
  • Have your SREs on standby. That means people will get paged for DEV.
  • If possible enable your developers to easily rebuild DEV or spin up clusters for development by themselves.

Why don’t devs have the capability to rebuild dev 🤷‍♂️

— Chris Love (@chrislovecnm) September 14, 2018
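As noted in the list above, requests, limits, and per-team namespaces can be enforced from the cluster side with a ResourceQuota and a LimitRange in each team's namespace. A sketch, assuming illustrative team names and sizes:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a-dev              # one namespace per team, illustrative name

---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a-dev
spec:
  hard:
    requests.cpu: "8"           # total the team can request
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi

---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a-dev
spec:
  limits:
  - type: Container
    default:                    # applied when a pod omits limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:             # applied when a pod omits requests
      cpu: 100m
      memory: 128Mi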

If you really need to save money, you can experiment with downscaling in off-hours. If you’re really good at spinning up or rebuilding DEV, i.e. have it all automated from cluster creation to app deployments, then you could experiment with “throw-away clusters”, i.e. clusters that get thrown away at the end of the day, with a new one started shortly before business hours.

Whatever you decide, please, please, please, do not block your developers. They will be much happier, and you will get better software, believe me.

P.S. Thanks to everyone responding and giving feedback on Twitter!

Image attribution:
Image created using https://xkcd-excuse.com/ by Mislav Cimperšak.
Original image created by Randall Munroe from XKCD. Released under Creative Commons Attribution-NonCommercial 2.5 License.

Source

Deploying configurable frontend web application containers

Sep 19, 2018

By José Moreira

The approach for deploying a containerised application typically involves building a Docker container image from a Dockerfile once and deploying that same image across several deployment environments (development, staging, testing, production).

If following security best practices, each deployment environment will require a different configuration (data storage authentication credentials, external API URLs), and the configuration is injected into the application inside the container through environment variables or configuration files. Our Hamish Hutchings takes a deeper look at the 12-factor app in this blog post. Also, the possible configuration profiles might not be predetermined: for example, the web application might need to be ready for deployment on both public and private cloud (client premises). It is also common for several configuration profiles to be added to source code and the required profile to be loaded at build time.

The structure of a web application project typically contains a ‘src’ directory with the source code, and executing npm run-script build triggers the Webpack asset build pipeline. Final asset bundles (HTML, JS, CSS, graphics, and fonts) are written to a dist directory, and the contents are either uploaded to a CDN or served with a web server (NGINX, Apache, Caddy, etc.).

For context in this article, let’s assume the web application is a single-page frontend application which connects to a backend REST API to fetch data, and that the API endpoint will change across deployment environments. The backend API endpoint should therefore be fully configurable, the configuration approach should support both server deployment and local development, and the assets are served by NGINX.

Deploying client-side web application containers requires a different configuration strategy compared to server-side application containers. Given the nature of client-side web applications, there is no native executable that can read environment variables or configuration files at runtime; the runtime is the client-side web browser, and configuration has to be hard-coded in the Javascript source code, either by hard-coding values during the asset build phase or by hard-coding rules (a rule might be to deduce the current environment based on the domain name, e.g. ‘staging.app.com’).

There is one OS process for which reading values from environment variables is relevant to configuration: the asset-build Node.js process. This is helpful for configuring the app for local development, with auto reload.

For the configuration of the webapp across several different environments, there are a few solutions:

  1. Rebuild the Webpack assets on container start during each deployment with the proper configuration on the destination server node(s):
    • Adds to deployment time. Depending on deployment rate and size of the project, the deployment time overhead might be considerable.
    • Is prone to build failures at end of deployment pipeline even if the image build has been tested before.
    • Build phase can fail for example due to network conditions although this can probably be minimised by building on top of a docker image that already has all the dependencies installed.
    • Might affect rollback speed
  2. Build one image per environment (again with hardcoded configuration):
    • Similar to solution #1 (with the same downsides), except that it also adds clutter to the Docker registry/daemon.
  3. Build image once and rewrite configuration bits only during each deployment to target environment:
    • The image is built once and run everywhere. This aligns with the configuration pattern of other types of applications, which is good for normalisation.
    • Scripts that rewrite configuration inside the container can be prone to failure too but they are testable.

I believe solution #1 is viable and, in some cases, simpler. It is probably required if the root path where the web application is hosted needs to change dynamically, e.g. from ‘/’ to ‘/app’, as build pipelines can hardcode the base path of fonts and other graphics in CSS files with the root path, which is a lot harder to change post-build.

Solution #3 is the approach I have been implementing for the projects where I have been responsible for containerising web applications (both in my current and previous roles). It is also the solution implemented by my friend Israel, who helped me implement it the first time around, and it is the approach described in this article.

Application-level configuration

Although it has a few moving parts, the plan for solution #3 is rather straightforward.

For code samples, I will utilise my fork of Adam Sandor’s micro-service Doom web client, a Vue.js application which I have been refactoring to follow this technique. The web client communicates with two micro-services through HTTP APIs, the state and the engine, whose endpoints I would like to be configurable without rebuilding the assets.

Single Page Applications (SPA) have a single “index.html” as the entry point to the app. During deployment, meta tags with optional configuration defaults are added to the markup, from which the application can read configuration values. Script tags would also work, but I found meta tags simple enough for key-value pairs.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <meta property="DOOM_STATE_SERVICE_URL" content="http://localhost:8081/" />
    <meta property="DOOM_ENGINE_SERVICE_URL" content="http://localhost:8082/" />
    <link rel="icon" href="./favicon.ico">
    <title>frontend</title>
  </head>
  <body>
    <noscript>
      <strong>We're sorry but frontend doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>

For reading configuration values from meta tags (and other sources), I wrote a simple Javascript module (“/src/config.loader.js”):

/**
 * Get config value with precedence:
 * - check `process.env`
 * - check current web page meta tags
 * @param key Configuration key name
 */
function getConfigValue (key) {
  let value = null
  if (process.env && process.env[`${key}`] !== undefined) {
    // get env var value
    value = process.env[`${key}`]
  } else {
    // get value from meta tag
    return getMetaValue(key)
  }
  return value
}

/**
 * Get value from HTML meta tag
 */
function getMetaValue (key) {
  let value = null
  const node = document.querySelector(`meta[property=${key}]`)
  if (node !== null) {
    value = node.content
  }
  return value
}

export default { getConfigValue, getMetaValue }

This module will read configuration “keys” by first looking them up in the available environment variables (“process.env”), so that configuration can be overridden with environment variables when developing locally (with the Webpack dev server), and then in the current document’s meta tags.

I also abstracted the configuration layer by adding a “src/config/index.js” that exports an object with the proper values:

import loader from './loader'

export default {
  DOOM_STATE_SERVICE_URL: loader.getConfigValue('DOOM_STATE_SERVICE_URL'),
  DOOM_ENGINE_SERVICE_URL: loader.getConfigValue('DOOM_ENGINE_SERVICE_URL')
}

which can then be utilised in the main application by importing the “src/config” module and accessing the configuration keys transparently:










import config from './config'

console.log(config.DOOM_ENGINE_SERVICE_URL)
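
To make the idea concrete, any HTTP call in the client can build its URL from these values. A minimal sketch, assuming a fetch-based helper (the function name and request path are illustrative, not taken from the actual client):

import config from './config'

// Illustrative helper: request game state from the runtime-configured endpoint
function fetchGameState (gameId) {
  return fetch(`${config.DOOM_STATE_SERVICE_URL}state/${gameId}`)
    .then(response => response.json())
}

export { fetchGameState }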



There is some room for improvement in the current code, as it is not DRY (the list of required configuration variables is duplicated in several places in the project), and I've considered writing a small JavaScript package to simplify this approach, as I'm not aware of one that already exists.

Writing the Docker & Docker Compose files

The Dockerfile for the SPA adds the source code to the container's "/app" directory, installs dependencies and runs a production webpack build ("NODE_ENV=production"). Asset bundles are written to the "/app/dist" directory of the image:










FROM node:8.11.4-jessie

# install dependencies first so they are cached independently of source changes
RUN mkdir /app
WORKDIR /app
COPY package.json .
RUN npm install

# add source code and produce a production asset build in /app/dist
COPY . .
ENV NODE_ENV production
RUN npm run build

# default command runs the webpack dev server; deployments override it
CMD npm run dev



The Docker image also contains a Node.js script ("/app/bin/rewrite-config.js") which copies the "/app/dist" assets to a target directory before rewriting the configuration. The assets will be served by NGINX and are therefore copied to a directory that NGINX can serve, in this case a shared volume. Source and destination directories can be defined through container environment variables:










#!/usr/bin/env node

const cheerio = require('cheerio')
const copy = require('recursive-copy')
const fs = require('fs')
const rimraf = require('rimraf')

const DIST_DIR = process.env.DIST_DIR
const WWW_DIR = process.env.WWW_DIR
const DOOM_STATE_SERVICE_URL = process.env.DOOM_STATE_SERVICE_URL
const DOOM_ENGINE_SERVICE_URL = process.env.DOOM_ENGINE_SERVICE_URL

// - Delete existing files from public directory
// - Copy `dist` assets to public directory
// - Rewrite config meta tags on public directory `index.html`
rimraf(WWW_DIR + '/*', {}, function () {
  copy(`${DIST_DIR}`, `${WWW_DIR}`, { overwrite: true }, function (error, results) {
    if (error) {
      console.error('Copy failed: ' + error)
    } else {
      console.info('Copied ' + results.length + ' files')
      rewriteIndexHTML(`${WWW_DIR}/index.html`, {
        DOOM_STATE_SERVICE_URL: DOOM_STATE_SERVICE_URL,
        DOOM_ENGINE_SERVICE_URL: DOOM_ENGINE_SERVICE_URL
      })
    }
  })
})

/**
 * Rewrite meta tag config values in "index.html".
 * @param file
 * @param values
 */
function rewriteIndexHTML (file, values) {
  console.info(`Reading '${file}'`)
  fs.readFile(file, 'utf8', function (error, data) {
    if (!error) {
      const $ = cheerio.load(data)
      console.info(`Rewriting values '${JSON.stringify(values)}'`)
      for (let [key, value] of Object.entries(values)) {
        console.log(key, value)
        $(`[property=${key}]`).attr('content', value)
      }
      fs.writeFile(file, $.html(), function (error) {
        if (!error) {
          console.info(`Wrote '${file}'`)
        } else {
          console.error(error)
        }
      })
    } else {
      console.error(error)
    }
  })
}



The script uses CheerioJS to read the "index.html" into memory, replaces the values of the meta tags according to the environment variables, and overwrites "index.html". Although "sed" would have been sufficient for the search & replace, I chose CheerioJS as a more reliable approach that also allows expanding into more complex transformations, like script injection.
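
For example, instead of (or in addition to) rewriting individual meta tags, the same script could inject a single serialized configuration object. A rough sketch of what that could look like with the Cheerio API (the window.__CONFIG__ name is an assumption for illustration, not part of the current implementation):

// inside rewriteIndexHTML, after loading the document with cheerio.load(data):
// inject a script tag exposing all configuration values as one global object
$('head').append(`<script>window.__CONFIG__ = ${JSON.stringify(values)}</script>`)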

Deployment with Kubernetes

Let’s jump into the Kubernetes Deployment manifest:










# Source: doom-client/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: doom-client
  labels:
    name: doom-client
spec:
  replicas: 1
  selector:
    matchLabels:
      name: doom-client
  template:
    metadata:
      labels:
        name: doom-client
    spec:
      initContainers:
      - name: doom-client
        image: "doom-client:latest"
        command: ["/app/bin/rewrite-config.js"]
        imagePullPolicy: IfNotPresent
        env:
        - name: DIST_DIR
          value: "/app/dist"
        - name: WWW_DIR
          value: "/tmp/www"
        - name: DOOM_ENGINE_SERVICE_URL
          value: "http://localhost:8081/"
        - name: DOOM_STATE_SERVICE_URL
          value: "http://localhost:8082/"
        volumeMounts:
        - name: www-data
          mountPath: /tmp/www
      containers:
      - name: nginx
        image: nginx:1.14
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: www-data
          mountPath: /usr/share/nginx/html
        - name: doom-client-nginx-vol
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: www-data
        emptyDir: {}
      - name: doom-client-nginx-vol
        configMap:
          name: doom-client-nginx



The Deployment manifest defines an "initContainer", which executes the "rewrite-config.js" Node.js script to prepare and update the shared storage volume with the asset bundles, and an NGINX container for serving the static assets. Finally, it creates a shared volume (an "emptyDir") that is mounted in both of the above containers. In the NGINX container the mount point is "/usr/share/nginx/html", the default NGINX document root, while the init container mounts it at "/tmp/www" to avoid creating extra directories. "/tmp/www" is where the Node.js script copies the asset bundles to and rewrites the "index.html".
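
The manifest also references a "doom-client-nginx" ConfigMap for the NGINX configuration mounted at "/etc/nginx/conf.d", which is not shown in this post. As a rough sketch of what it could contain, assuming a standard SPA setup (the actual configuration may differ):

# Hypothetical ConfigMap backing the doom-client-nginx-vol volume
apiVersion: v1
kind: ConfigMap
metadata:
  name: doom-client-nginx
data:
  default.conf: |
    server {
      listen 80;
      root /usr/share/nginx/html;
      location / {
        # fall back to the SPA entry point for client-side routes
        try_files $uri $uri/ /index.html;
      }
    }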

Local development with Docker Compose

The final piece of our puzzle is the local Docker Compose development environment. I've included several services that allow both developing the web application with the development server and testing it when the production static assets are served through NGINX. It is perfectly possible to separate these services into several YAML files ("docker-compose.yaml", "docker-compose.dev.yaml" and "docker-compose.prod.yaml") and do some composition, but I've kept a single file for the sake of simplicity.

Apart from the "doom-state" and "doom-engine" services, which are our backend APIs, the "ui" service starts the webpack development server with "npm run dev", and the "ui-deployment" service, which runs a container based on the same Dockerfile, executes the configuration deployment script. The "nginx" service serves the static assets from a shared volume ("www-data"), which is also mounted on the "ui-deployment" container.










# docker-compose.yaml
version: '3'
services:
  ui:
    build: .
    command: ["npm", "run", "dev"]
    ports:
      - "8080:8080"
    environment:
      - HOST=0.0.0.0
      - PORT=8080
      - NODE_ENV=development
      - DOOM_ENGINE_SERVICE_URL=http://localhost:8081/
      - DOOM_STATE_SERVICE_URL=http://localhost:8082/
    volumes:
      - .:/app
      # anonymous volumes so the source bind mount does not shadow image dirs
      - /app/node_modules
      - /app/dist
  doom-engine:
    image: microservice-doom/doom-engine:latest
    environment:
      - DOOM_STATE_SERVICE_URL=http://doom-state:8080/
      - DOOM_STATE_SERVICE_PASSWORD=enginepwd
    ports:
      - "8081:8080"
  doom-state:
    image: microservice-doom/doom-state:latest
    ports:
      - "8082:8080"
  # run production deployment script
  ui-deployment:
    build: .
    command: ["/app/bin/rewrite-config.js"]
    environment:
      - NODE_ENV=production
      - DIST_DIR=/app/dist
      - WWW_DIR=/tmp/www
      - DOOM_ENGINE_SERVICE_URL=http://localhost:8081/
      - DOOM_STATE_SERVICE_URL=http://localhost:8082/
    volumes:
      - .:/app
      # anonymous volumes so the source bind mount does not shadow image dirs
      - /app/node_modules
      - /app/dist
      # shared NGINX static files dir
      - www-data:/tmp/www
    depends_on:
      - nginx
  # serve docker image production build with nginx
  nginx:
    image: nginx:1.14
    ports:
      - "8090:80"
    volumes:
      - www-data:/usr/share/nginx/html
volumes:
  www-data:



Since the webpack dev server is a long-running process that also hot-reloads the app on source code changes, the Node.js config module picks up configuration from environment variables, following the precedence described above. Also, although source code changes trigger client-side updates without restarts (hot reload), they do not update the production build; that has to be redone manually, but it is straightforward with "docker-compose build && docker-compose up".

Summarizing, although there are a few points for improvement, including in the source code I wrote for this implementation, this setup has been working pretty well for the last few projects, and it is flexible enough to also support deployments to CDNs, which is as simple as adding a step that pushes the assets to the cloud instead of to a shared volume served by NGINX.

If you have any comments, feel free to get in touch on Twitter or comment under the article.
