Do Microservices Make SOA Irrelevant?

Is service-oriented architecture, or SOA, dead? You may be tempted to
think so. But that’s not really true. Yes, SOA itself may have receded
into the shadows as newer ideas have come forth, yet the remnants of SOA
are still providing the fuel that is propelling the microservices market
forward. That’s because incorporating SOA principles into the design and
build-out of microservices is the best way to ensure that your product
or service offering is well positioned for the long term. In this sense,
understanding SOA is crucial for succeeding in the microservices world.
In this article, I’ll explain which SOA principles you should adopt when
designing a microservices app.

Introduction

In today’s mobile-first development environment, where code is king, it
is easier than ever to build a service that has a RESTful interface,
connect it to a datastore and call it a day. If you want to go the extra
mile, piece together a few public software services (free or paid), and
you can have yourself a proper continuous delivery pipeline. Welcome to
the modern Web and your fully buzzworthy-compliant application
development process. In many ways, microservices are a direct descendant
of SOA, and a bit like the punk rock of the services world. No strict
rules, just some basic principles that loosely keep everyone on the same
page. And just like punk rock, microservices initially embraced a
do-it-yourself ethic, but they have been evolving and picking up some
structure, which has moved microservices into the mainstream. It’s not just
the dot com or Web companies that use microservices anymore—all
companies are interested.

Definitions

For the purposes of this discussion, the following are the definitions I
will be using.

Microservices: The implementation of a specific business function,
delivered as a separate deployable artifact, using queuing or a RESTful
(JSON) interface, which can be written in any language, and that
leverages a continuous delivery pipeline.

SOA: A component-based architecture with the goal of driving reuse
across the technology portfolio within an organization. These
components need to be loosely coupled; they can be services or
libraries, are centrally governed, and require the organization to use a
single technology stack to maximize reusability.

Positive things about microservices-based development

As you can tell, microservices possess several distinct features
that SOA lacked, and they are good:

Allowing smaller, self-sufficient teams to own a product/service
that supports a specific business function has drastically improved
business agility and IT responsiveness to whatever direction the
business units they support want to take.

Automated builds and testing, while possible under SOA, are now
serious table stakes.

Allowing teams to use the tools they want, primarily around which
language and IDE to use.

Using agile-based development with direct access to the business.
Microservices and mobile development teams have successfully shown
businesses how technologists can adapt to and accept constant feedback.
Waterfall software delivery methods suffered from unnecessary overhead
and extended delivery dates as the business changed while the
development team was off creating products that often didn’t meet the
business’ needs by the time they were delivered. Even iterative
development methodologies like the Rational Unified Process (RUP) had
layers of abstraction between the business, product development, and the
developers doing the actual work.

A universal understanding of the minimum granularity of a service.
There are arguments around “Is adding a client a business function, or
is client management a business function?” So it isn’t perfect, but at
least either can be understood by the business side that actually runs
the business. You may not want to believe it, but technology is not the
entire business (for most of the world’s enterprises anyway). Back in
the days when SOA was king of the hill, some services performed nothing
but a single database operation while others added an entire client to
the system, which led to nothing but confusion on the business side when
IT could not give a consistent answer about what a service was.

How can SOA help?

After reading those definitions, you are probably
thinking, “Microservices sounds so much better.” You’re right. It is the
next evolution for a reason, except that it threw away a lot of the
lessons that were hard-learned in the SOA world. It gave up all the good
things SOA tried to accomplish because the IT vendors in the space
morphed everything to push more product. Enterprise integration patterns
(the catalogue of patterns describing how separate systems exchange
messages and data) are a key place where microservices are leveraging
the work done by the SOA world. Everyone involved in the integration space can
benefit from these patterns, as they are concepts, and microservices are
a great technological way to implement them. Below, I’ve listed two
other areas where SOA principles are being applied inside the
microservices ecosystem to great success.

API Gateways (née ESB)

Microservices encourage point-to-point connections, with each client
taking care of its own translations for dates and other nuanced things.
This is just not sustainable as the number of microservices available
from most companies skyrockets. So in comes the concept of an Enterprise
Service Bus (ESB), which provides a means of communication between
different applications in an SOA environment. SOA originally intended
the ESB to carry messages between service components, not to be the hub
and spoke of the entire enterprise; that hub-and-spoke model is what
vendors pushed, what large companies bought into, and what left such a
bad taste in people’s mouths. The successful products in the ESB space
have evolved into today’s API gateways, which give a single organization
a centralized way to manage the endpoints it presents to the world and
to provide translation for older services (often SOA/SOAP) that haven’t
been touched in years but are vital to the business.

Overarching standards

SOA had WS-* standards. They were heavy-handed, but guaranteed
interoperability (mostly). Having these standards in place, especially
the more common ones like WS-Security and WS-Federation, allowed
enterprises to call services used in their partner systems—in terms
that anyone could understand, though they were just a checklist.
Microservices have begun to formalize a set of standards and the vendors
that provide the services. The OAuth and OpenID Connect frameworks are
two great examples. As microservices mature, teams are learning that
building everything in-house is fun, fulfilling, and great for the ego,
but ultimately frustrating, as it creates a lot of technical debt with
code that constantly needs to be massaged as new features are introduced.
The other area where standards are rapidly consolidating is API design
and description. In the SOA world, there was one way. It was ugly and
barely readable by humans, but the Web Services Description Language
(WSDL), a standardized format for cataloguing network services, was
universal. As of April 2017, all major parties (including Google, IBM,
Microsoft, MuleSoft, and Salesforce.com) involved in providing tools to
build RESTful APIs are members of the OpenAPI Initiative. What was once
a fractured market with multiple standards (JSON API, WADL, RAML, and
Swagger) is now converging on a single way for everything to be
described.
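
To make that contrast concrete, here is a minimal, hypothetical OpenAPI 3.0
description of a single endpoint. The service name and path are invented for
illustration; only the overall structure reflects what the OpenAPI Initiative
standardizes.

openapi: "3.0.0"
info:
  title: Client Service
  version: "1.0.0"
paths:
  /clients/{id}:
    get:
      summary: Fetch a single client record
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        "200":
          description: The client record, returned as JSON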

Conclusion

SOA originated as a set of concepts, which are the same core concepts as
microservices architecture. Where SOA fell down was driving too much
governance and not enough “Just get it done.” For microservices to
continue to survive, the teams leveraging them need to embrace their
ancestry, continue to steal the best of the ideas, and reintroduce them
using agile development methodologies, with a healthy dose of
anti-governance to stop SOA-style governance from reappearing. And then,
there’s the side job of keeping ITIL and friends safely inside the
operational teams where they thrive.

Vince Power is a Solution Architect who has a focus on cloud adoption and
technology implementations using open source-based technologies. He has
extensive experience with core computing and networking (IaaS), identity
and access management (IAM), application platforms (PaaS), and
continuous delivery.

Source

Applying Best Practice Security Controls to a Kubernetes Cluster

This is the penultimate article in a series entitled Securing Kubernetes for Cloud Native Applications, and follows our discussion about securing the important components of a cluster, such as the API server and Kubelet. In this article, we’re going to address the application of best-practice security controls, using some of the cluster’s inherent security mechanisms. If Kubernetes can be likened to a kernel, then we’re about to discuss securing user space – the layer that sits above the kernel – where our workloads run. Let’s start with authentication.

Authentication

We touched on authenticating access to the Kubernetes API server in the last article, mainly in terms of configuring it to disable anonymous authentication. There are a number of different authentication schemes available in Kubernetes, so let’s delve into this a little deeper.

X.509 Certificates

X.509 certificates are a required ingredient for encrypting any client communication with the API server using TLS. X.509 certificates can also be used as one of the methods for authenticating with the API server, where a client’s identity is provided in the attributes of the certificate – the Common Name provides the username, whilst a variable number of Organization attributes provide the groups that the identity belongs to.

X.509 certificates are a tried and tested method for authentication, but there are a couple of limitations that apply in the context of Kubernetes:

  • If an identity is no longer valid (maybe an individual has left your organization), the certificate associated with that identity may need to be revoked. There is currently no way in Kubernetes to query the validity of certificates with a Certificate Revocation List (CRL), or by using an Online Certificate Status Protocol (OCSP) responder. There are a few approaches to get around this (for example, recreate the CA and reissue every client certificate), or it might be considered enough to rely on the authorization step, to deny access for a regular user already authenticated with a revoked certificate. This means we should be careful about the choice of groups in the Organization attribute of certificates. If a certificate we’re not able to revoke contains a group (for example, system:masters) that has an associated default binding that can’t be removed, then we can’t rely on the authorization step to prevent access.
  • If there are a large number of identities to manage, the task of issuing and rotating certificates becomes onerous. In such circumstances – unless there is a degree of automation involved – the overhead may become prohibitive.

OpenID Connect

Another increasingly popular method for client authentication is to make use of the built-in Kubernetes support for OpenID Connect (OIDC), with authentication provided by an external identity provider. OpenID Connect is an authentication layer that sits on top of OAuth 2.0, and uses JSON Web Tokens (JWT) to encode the identity of a user and their claims. The ID token provided by the identity provider – stored as part of the user’s kubeconfig – is provided as a bearer token each time the user attempts an API request. As ID tokens can’t be revoked, they tend to have a short lifespan, which means they can only be used during the period of their validity for authentication. Usually, the user will also be issued a refresh token – which can be saved together with the ID token – and used for obtaining a new ID token on its expiry.

Just as we can embody the username and its associated groups as attributes of an X.509 certificate, we can do exactly the same with the JWT ID token. These attributes are associated with the identity’s claims embodied in the token, and are mapped using config options applied to the kube-apiserver.
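
As a rough sketch only – the issuer URL, client ID, and claim names below are
placeholders, not recommendations – the relevant kube-apiserver flags might
appear in a static Pod manifest excerpt like this:

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ...other flags omitted...
    - --oidc-issuer-url=https://accounts.example.com
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups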

Kubernetes can be configured to use any one of several popular OIDC identity providers, such as the Google Identity Platform and Azure Active Directory. But what happens if your organization uses a directory service, such as LDAP, for holding user identities? One OIDC-based solution that enables authentication against LDAP, is the open source Dex identity service, which acts as an authentication intermediary to numerous types of identity provider via ‘connectors’. In addition to LDAP, Dex also provides connectors for GitHub, GitLab, and Microsoft accounts using OAuth, amongst others.

Authorization

We shouldn’t rely on authentication alone to control access to the API server – ‘one size fits all’ is too coarse when it comes to controlling access to the resources that make up the cluster. For this reason, Kubernetes provides the means to subject authenticated API requests to authorization scrutiny, based on the authorization modes configured on the API server. We discussed configuring API server authorization modes in the previous article.

Whilst it’s possible to defer authorization to an external authorization mechanism, the de-facto standard authorization mode for Kubernetes is the in-built Role-Based Access Control (RBAC) module. As most pre-packaged application manifests come pre-defined with RBAC roles and bindings – unless there is a very good reason for using an alternative method – RBAC should be the preferred method for authorizing API requests.

RBAC is implemented by defining roles, which are then bound to subjects using ‘role bindings’. Let’s provide some clarification on these terms.

Roles – define what actions can be performed on which objects. The role can either be restricted to a specific namespace, in which case it’s defined in a Role object, or it can be a cluster-wide role, which is defined in a ClusterRole object. In the following example cluster-wide role, a subject bound to the role has the ability to perform get and list operations on the ‘pods’ and ‘pods/log’ resource objects – no more, no less:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]

If this were a namespaced role, then the object kind would be Role instead of a ClusterRole, and there would be a namespace key with an associated value, in the metadata section.

Role Bindings – bind a role to a set of subjects. A RoleBinding object binds a Role or ClusterRole to subjects in the scope of a specific namespace, whereas a ClusterRoleBinding binds a ClusterRole to subjects on a cluster-wide basis.

Subjects – are users and groups (as provided by a suitable authentication scheme), and Service Accounts, which are API objects used to provide pods that require access to the Kubernetes API, with an identity.
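
To illustrate how these pieces fit together, here is a hedged example – the
user and namespace are invented – that binds the pod-and-pod-logs-reader
ClusterRole defined above to a single user, but only within one namespace:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods-and-logs
  namespace: development
subjects:
- kind: User
  name: jane@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io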

When thinking about the level of access that should be defined in a role, always be guided by the principle of least privilege. In other words, only provide the role with the access that is absolutely necessary for achieving its purpose. From a practical perspective – when creating the definition of a new role, it’s easier to start with an existing role (for example, the edit role), and remove all that is not required. If you find your configuration too restrictive, and you need to determine which roles need creating for a particular action or set of actions, you could use audit2rbac, which will automatically generate the necessary roles and role bindings based on what it observes from the API server’s audit log.

When it comes to providing API access for applications running in pods through service accounts, it might be tempting to bind a new role to the default service account that gets created for each namespace, which is made available to each pod in the namespace. Instead, create a specific service account and role for the pod that requires API access, and then bind that role to the new service account.
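
A minimal sketch of that pattern might look like the following – the names and
namespace are invented, and the role reused is the example defined earlier.
The pod would then reference the account via serviceAccountName in its spec.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-reader
  namespace: monitoring
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: metrics-reader-binding
  namespace: monitoring
subjects:
- kind: ServiceAccount
  name: metrics-reader
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: pod-and-pod-logs-reader
  apiGroup: rbac.authorization.k8s.io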

Clearly, thinking carefully about who or what needs access to the API server, which parts of the API, and what actions they can perform via the API, is crucial to maintaining a secure Kubernetes cluster. Give it the time and attention it deserves, and if you need some extra help, Giant Swarm has some in-depth documentation that you may find useful!

Pod Security Policy

The containers that get created as constituents of pods are generally configured with sane, practical security defaults, which serve the majority of typical use cases. Often, however, a pod may need additional privileges to perform its intended task – a networking plugin, or an agent for monitoring or logging, for example. In such circumstances, we’d need to enhance the default privileges for those pods, while constraining the pods that don’t need the enhanced privileges to a more restrictive set of defaults. We can, and absolutely should, do this by enabling the PodSecurityPolicy admission controller, and defining policy using the pod security policy API.

Pod security policy defines the security configuration that is required for pods to pass admission, allowing them to be created or updated in the cluster. The controller compares a Pod’s defined security context with any of the policies that the Pod’s creator (be that a Deployment or a user) is allowed to ‘use’, and where the security context exceeds the policy, it will refuse to create or update the pod. The policy can also be used to provide default values, by defining a minimal, restrictive policy, which can be bound to a very general authorization group, such as system:authenticated (applies to all authenticated users), to limit the access those users have to the API server.

Pod Security Fields

There are quite a lot of configurable security options that can be defined in a PodSecurityPolicy (PSP) object, and the policy that you choose to define will be very dependent on the nature of the workload and the security posture of your organization. Here are a few example fields from the API object, with a minimal example policy sketched after the list:

  • privileged – specifies whether a pod can run in privileged mode, allowing it to access the host’s devices, which in normal circumstances it would not be able to do.
  • allowedHostPaths – provides a whitelist of filesystem paths on the host that can be used by the pod as a hostPath volume.
  • runAsUser – allows for controlling the UID which a pod’s containers will be run with.
  • allowedCapabilities – whitelists the capabilities that can be added on top of the default list provided to a pod’s containers.
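
As mentioned above, here is a sketch of a minimal, restrictive policy using
these fields, together with the seLinux, supplementalGroups, and fsGroup
stanzas the API requires. The values are purely illustrative, not a
recommendation for any particular workload.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowedHostPaths:
  - pathPrefix: /var/log
  runAsUser:
    rule: MustRunAsNonRoot
  allowedCapabilities: []
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath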

Making Use of Pod Security Policy

A word of warning when enabling the PodSecurityPolicy admission controller – unless policy has already been defined in a PSP, pods will fail to get created as the admission controller’s default behavior is to deny pod creation where no match is found against policy – no policy, no match. The pod security policy API is enabled independently of the admission controller though, so it’s entirely possible to define policy ahead of enabling it.

It’s worth pointing out that unlike RBAC, pre-packaged applications rarely contain PSPs in their manifests, which means it falls to the users of those applications to create the necessary policy.

Once PSPs have been defined, they can’t be used to validate pods, unless either the user creating the pod, or the service account associated with the pod, has permission to use the policy. Granting permission is usually achieved with RBAC, by defining a role that allows the use of a particular PSP, and a role binding that binds the role to the user and/or service account.
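
For example, a ClusterRole granting use of the hypothetical policy sketched
earlier might look like the following; it would then be attached to a user or
service account with a RoleBinding or ClusterRoleBinding.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: use-restricted-psp
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted-example"]
  verbs: ["use"]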

From a practical perspective – especially in production environments – it’s unlikely that users will create pods directly. Pods are more often than not created as part of a higher level workload abstraction, such as a Deployment, and as a result, it’s the service account associated with the Pod that requires the role for using any given PSP.

Once again, Giant Swarm’s documentation provides some great insights into the use of PSPs for providing applications with privileged access.

Isolating Workloads

In most cases, a Kubernetes cluster is established as a general resource for running multiple, different, and often unrelated application workloads. Co-tenanting workloads in this way brings enormous benefits, but at the same time may increase the risk associated with accidental or intentional exposure of those workloads and their associated data to untrusted sources. Organizational policy – or even regulatory requirement – might dictate that deployed services are isolated from any other unrelated services.

One means of ensuring this, of course, is to separate out a sensitive application into its very own cluster. Running applications in separate clusters ensures the highest possible isolation of application workloads. Sometimes, however, this degree of isolation might be more than is absolutely necessary, and we can instead make use of some of the in-built isolation features available in Kubernetes. Let’s take a look at these.

Namespaces

Namespaces are a mechanism in Kubernetes for providing distinct environments for all of the objects that you might deem to be related, and that need to be separate from other unrelated objects. They provide the means for partitioning the concerns of workloads, teams, environments, customers, and just about anything you deem worthy of segregation.

Usually, a Kubernetes cluster is initially created with three namespaces:

  • kube-system – used for objects created by Kubernetes itself.
  • kube-public – used for publicly available, readable objects.
  • default – used for all objects that are created without an explicit association with a specific namespace.

To make effective use of namespaces – rather than having every object ending up in the default namespace – namespaces should be created and used for isolating objects according to their intended purpose. There is no right or wrong way for namespacing objects, and much will depend on your organization’s particular requirements. Some careful planning will save a lot of re-engineering work later on, so it will pay to give this due consideration up front. Some ideas for consideration might include: different teams and/or areas of the organization; environments such as development, QA, staging, and production; different application workloads; and possibly different customers in a co-tenanted scenario. It can be tempting to plan your namespaces in a hierarchical fashion, but namespaces have a flat structure, so it’s not possible to do this. Instead, you can provide inferred hierarchies with suitable namespace names, teamA-appY and teamB-appZ, for example.

Adopting namespaces for segregating workloads also helps with managing the use of the cluster’s resources. If we view the cluster as a shared compute resource segregated into discrete namespaces, then it’s possible to apply resource quotas on a per-namespace basis. Resource hungry and more critical workloads that are judiciously namespaced can then benefit from a bigger share of the available resources.
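
As a hedged illustration – the namespace and limits are arbitrary, and real
namespace names must be lowercase – a per-namespace quota is just another API
object:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-app-y-quota
  namespace: team-a-app-y
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"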

Network Policies

Out-of-the-box, Kubernetes allows all network traffic originating from any pod in the cluster to be sent to and be received by any other pod in the cluster. This open approach doesn’t help us particularly when we’re trying to isolate workloads, so we need to apply network policies to help us achieve the desired isolation.

The Kubernetes NetworkPolicy API enables us to apply ingress and egress rules to selected pods – for layer 3 and layer 4 traffic – and relies on the deployment of a compliant network plugin that implements the Container Networking Interface (CNI). Not all Kubernetes network plugins provide support for network policy, but popular choices (such as Calico, Weave Net and Romana) do.

Network policy is namespace scoped, and is applied to pods based on selection, courtesy of a matched label (for example, tier: backend). When the pod selector for a NetworkPolicy object matches a pod, traffic to and from the pod is governed according to the ingress and egress rules defined in the policy. All traffic originating from or destined for the pod is then denied – unless there is a rule that allows it.

To properly isolate applications at the network and transport layer of the stack in a Kubernetes cluster, network policies should start with a default premise of ‘deny all’. Rules for each of the application’s components and their required sources and destinations should then be whitelisted one by one, and tested to ensure the traffic pattern works as intended.
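
A common starting point for that ‘deny all’ premise is a policy like the
following sketch (the namespace is illustrative): the empty pod selector
matches every pod in the namespace, and listing both policy types with no
rules denies all ingress and egress traffic, after which individual allow
rules can be whitelisted one by one.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a-app-y
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress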

Service-to-Service Security

Network policies are just what we need for layer 3/4 traffic isolation, but it would serve us well if we could also ensure that our application services can authenticate with one another, that their communication is encrypted, and that we have the option of applying fine-grained access control for intra-service interaction.

Solutions that help us to achieve this rely on policy applied at layers 5-7 of the network stack, and are a developing capability for cloud-native applications. Istio is one such tool, whose purpose involves the management of application workloads as a service mesh, including advanced traffic management and service observability, as well as authentication and authorization based on policy. Istio deploys a sidecar container, based on the Envoy reverse proxy, into each pod. The sidecar containers form a mesh, and proxy traffic between pods from different services, taking account of the defined traffic rules and the security policy.

Istio’s authentication mechanism for service-to-service communication is based on mutual TLS, and the identity of the service entity is embodied in an X.509 certificate. The identities conform to the Secure Production Identity Framework for Everyone (SPIFFE) specification, which aims to provide a standard for issuing identities to workloads. SPIFFE is a project hosted by the Cloud Native Computing Foundation (CNCF).

Istio has far-reaching capabilities, but if its full suite of functions isn’t required, then the benefits it provides might be outweighed by the operational overhead and maintenance it brings on deployment. An alternative solution for providing authenticated service identities based on SPIFFE is SPIRE, a set of open source tools for creating and issuing identities.

Yet another solution for securing the communication between services in a Kubernetes cluster is the open source Cilium project, which uses Berkeley Packet Filters (BPF) within the Linux kernel to enforce defined security policy for layer 7 traffic. Cilium supports other layer 7 protocols such as Kafka and gRPC, in addition to HTTP.

Summary

As with every layer in the Kubernetes stack, from a security perspective, there is also a huge amount to consider in the user space layer. Kubernetes has been built with security as a first-class citizen, and the various inherent security controls, and mechanisms for interfacing with 3rd party security tooling, provide a comprehensive security capability.

It’s not just about defining policy and rules, however. It’s equally important to ensure that, as well as satisfying your organization’s wider security objectives, your security configuration supports the way your teams are organized and the way in which they work. This requires careful, considered planning.

In the next and final article in this series, Managing the Security of Kubernetes Container Workloads, we’ll be discussing the security associated with the content of container workloads, and how security needs to be made a part of the end-to-end workflow.

Source

Kubernetes Federation Evolution – Kubernetes

Deploying applications to a Kubernetes cluster is well defined and can in some cases be as simple as kubectl create -f app.yaml. The user story for deploying apps across multiple clusters has not been that simple. How should an app workload be distributed? Should the app resources be replicated into all clusters, replicated into selected clusters, or partitioned across clusters? How is access to the clusters managed? What happens if some of the resources the user wants to distribute already exist, in some form, in some or all of the clusters?

In SIG Multicluster, our journey has revealed that there are multiple possible models to solve these problems, and there probably is no single solution that best fits every scenario. Federation, however, is the single biggest Kubernetes open source subproject in this problem space, and has seen the most interest and contribution from the community. The project initially reused the Kubernetes API to do away with any added usage complexity for an existing Kubernetes user. This became non-viable because of problems best discussed in this community update.

What has evolved further is a federation specific API architecture and a community effort which now continues as Federation V2.

Because federation attempts to address a complex set of problems, it pays to break the different parts of those problems down. Let’s take a look at the different high-level areas involved:

Kubernetes Federation V2 Concepts

Federating arbitrary resources

One of the main goals of Federation is to be able to define the APIs and API groups which encompass basic tenets needed to federate any given k8s resource. This is crucial due to the popularity of Custom Resource Definitions as a way to extend Kubernetes with new APIs.

The workgroup did arrive at a common definition of the federation API and API groups as ‘a mechanism that distributes “normal” Kubernetes API resources into different clusters’. The distribution in its most simple form could be imagined as simple propagation of this ‘normal Kubernetes API resource’ across the federated clusters. A thoughtful reader can certainly discern more complicated mechanisms, other than this simple propagation of the Kubernetes resources.

During the journey of defining the building blocks of the federation APIs, one of the near-term goals also evolved as ‘to be able to create a simple federation aka simple propagation of any Kubernetes resource or a CRD, writing almost zero code’. What ensued was a core API group defining the building blocks as a Template resource, a Placement resource and an Override resource per given Kubernetes resource, a TypeConfig to specify sync or no sync for the given resource, and associated controller(s) to carry out the sync. More details follow in the next section, Federating resources: the details. Further sections will also talk about being able to follow a layered behaviour, with higher-level Federation APIs consuming the behaviour of these core building blocks, and users being able to consume all or part of the API and associated controllers. Lastly, this architecture also allows users to write additional controllers, or replace the available reference controllers with their own, to carry out the desired behaviour.

The ability to ‘easily federate arbitrary Kubernetes resources’, and a decoupled API – divided into building-block APIs, higher-level APIs and possible user-intended types, presented so that different users can consume parts of it and write controllers composing solutions specific to them – makes a compelling case for Federation V2.

Federating resources: the details

Fundamentally, federation must be configured with two types of information:

  • Which API types federation should handle
  • Which clusters federation should target for distributing those resources

For each API type that federation handles, different parts of the declared state live in different API resources:

  • A template type holds the base specification of the resource – for example, a type called FederatedReplicaSet holds the base specification of a ReplicaSet that should be distributed to the targeted clusters.
  • A placement type holds the specification of the clusters the resource should be distributed to – for example, a type called FederatedReplicaSetPlacement holds information about which clusters FederatedReplicaSets should be distributed to.
  • An optional overrides type holds the specification of how the template resource should be varied in some clusters – for example, a type called FederatedReplicaSetOverrides holds information about how a FederatedReplicaSet should be varied in certain clusters.

These types are all associated by name – meaning that for a particular template resource with name foo, the placement and override information for that resource are contained by the override and placement resources with the same name and namespace as that of the template.
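
As a rough sketch only – the API group, version, and field names shown here
are illustrative and may differ from the actual Federation V2 API – the three
associated resources for a ReplicaSet named foo could look something like
this:

apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSet
metadata:
  name: foo
  namespace: default
spec:
  template:
    # the base ReplicaSet spec to be distributed
    spec:
      replicas: 3
---
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSetPlacement
metadata:
  name: foo            # same name and namespace as the template
  namespace: default
spec:
  clusterNames:
  - cluster-a
  - cluster-b
---
apiVersion: types.federation.k8s.io/v1alpha1
kind: FederatedReplicaSetOverrides
metadata:
  name: foo
  namespace: default
spec:
  overrides:
  - clusterName: cluster-b
    replicas: 5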

Higher level behaviour

The architecture of the federation v2 API allows higher-level APIs to be constructed using the mechanics provided by the core API types (template, placement and override) and the associated controllers for a given resource. In the community we uncovered a few use cases and implemented the higher-level APIs and associated controllers useful for those cases. Some of these types, described in further sections, also provide a useful reference to anybody interested in solving more complex use cases, building on top of the mechanics already available with the federation v2 API.

ReplicaSchedulingPreference

ReplicaSchedulingPreference provides an automated mechanism for distributing and maintaining the total number of replicas for Deployment- or ReplicaSet-based federated workloads across federated clusters. It is driven by high-level preferences given by the user, which include the semantics of weighted distribution and limits (min and max) for distributing the replicas, as well as semantics to allow replicas to be redistributed dynamically if some replica pods remain unscheduled in certain clusters, for example due to insufficient resources in that cluster.
More details can be found in the user guide for ReplicaSchedulingPreferences.
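
A hedged example, adapted from the pattern in the user guide – the names,
weights, and exact API group/version here are illustrative – distributing
nine replicas across two clusters with different weights and limits:

apiVersion: scheduling.federation.k8s.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: my-deployment        # matches the federated workload's name
  namespace: my-namespace
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9
  clusters:
    cluster-a:
      minReplicas: 2
      maxReplicas: 6
      weight: 1
    cluster-b:
      minReplicas: 2
      maxReplicas: 8
      weight: 2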

Federated Services & Cross-cluster service discovery

Kubernetes services are a very useful construct in a microservices architecture. There is a clear desire to deploy these services across cluster, zone, region and cloud boundaries. Services that span clusters provide geographic distribution, enable hybrid and multi-cloud scenarios, and improve the level of high availability beyond single-cluster deployments. Customers who want their services to span one or more (possibly remote) clusters need them to be reachable in a consistent manner from both within and outside their clusters.

A Federated Service at its core contains a template (the definition of a Kubernetes service), a placement (which clusters it should be deployed into), an override (optional variations in particular clusters) and a ServiceDNSRecord (specifying details on how to discover it).

Note: The Federated Service has to be of type LoadBalancer in order for it to be discoverable across clusters.

Discovering a Federated Service from pods inside your Federated Clusters

By default, Kubernetes clusters come preconfigured with a cluster-local DNS server, as well as an intelligently constructed DNS search path which together ensure that DNS queries like myservice, myservice.mynamespace, some-other-service.other-namespace, etc issued by your software running inside Pods are automatically expanded and resolved correctly to the appropriate service IP of services running in the local cluster.

With the introduction of Federated Services and Cross-Cluster Service Discovery, this concept is extended to cover Kubernetes services running in any other cluster across your Cluster Federation, globally. To take advantage of this extended range, you use a slightly different DNS name (e.g. myservice.mynamespace.myfederation) to resolve federated services. Using a different DNS name also avoids having your existing applications accidentally traversing cross-zone or cross-region networks and you incurring perhaps unwanted network charges or latency, without you explicitly opting in to this behavior.

Let’s consider an example (the example uses a service named nginx and the query name format described above).

A Pod in a cluster in the us-central1-a availability zone needs to contact our nginx service. Rather than use the service’s traditional cluster-local DNS name (nginx.mynamespace, which is automatically expanded to nginx.mynamespace.svc.cluster.local) it can now use the service’s Federated DNS name, which is nginx.mynamespace.myfederation. This will be automatically expanded and resolved to the closest healthy shard of my nginx service, wherever in the world that may be. If a healthy shard exists in the local cluster, that service’s cluster-local IP address will be returned (by the cluster-local DNS). This is exactly equivalent to non-federated service resolution.

If the service does not exist in the local cluster (or it exists but has no healthy backend pods), the DNS query is automatically expanded to nginx.mynamespace.myfederation.svc.us-central1-a.us-central1.example.com. Behind the scenes, this is finding the external IP of one of the shards closest to my availability zone. This expansion is performed automatically by the cluster-local DNS server, which returns the associated CNAME record. This results in a traversal of the hierarchy of DNS records, and ends up at one of the external IPs of the Federated Service nearby.

It is also possible to target service shards in availability zones and regions other than the ones local to a Pod by specifying the appropriate DNS names explicitly, and not relying on automatic DNS expansion. For example, nginx.mynamespace.myfederation.svc.europe-west1.example.com will resolve to all of the currently healthy service shards in Europe, even if the Pod issuing the lookup is located in the U.S., and irrespective of whether or not there are healthy shards of the service in the U.S. This is useful for remote monitoring and other similar applications.

Discovering a Federated Service from Other Clients Outside your Federated Clusters

For external clients, the automatic DNS expansion described above is currently not possible. External clients need to specify one of the fully qualified DNS names of the federated service, be that a zonal, regional, or global name. For convenience reasons, it is often a good idea to manually configure additional static CNAME records for your service, for example:

SHORT NAME CNAME
eu.nginx.acme.com nginx.mynamespace.myfederation.svc.europe-west1.example.com
us.nginx.acme.com nginx.mynamespace.myfederation.svc.us-central1.example.com
nginx.acme.com nginx.mynamespace.myfederation.svc.example.com

That way your clients can always use the short form on the left, and always be automatically routed to the closest healthy shard on their home continent. All of the required failover is handled for you automatically by Kubernetes Cluster Federation.

As further reading, a more elaborate guide for users is available in the Multi-Cluster Service DNS with ExternalDNS guide.

To get started with Federation V2, please refer to the user guide hosted on github.
Deployment can be accomplished with a helm chart, and once the control plane is available, the user guide’s example can be used to get some hands-on experience with using Federation V2.

Federation V2 can be deployed in both cluster-scoped and namespace-scoped configurations. A cluster-scoped deployment will require cluster-admin privileges to both host and member clusters, and may be a good fit for evaluating federation on clusters that are not running critical workloads. Namespace-scoped deployment requires access to only a single namespace on host and member clusters, and is a better fit for evaluating federation on clusters running workloads. Most of the user guide refers to cluster-scoped deployment, with the Namespaced Federation section documenting how use of a namespaced deployment differs. In fact, the same cluster can host multiple federations, and/or the same clusters can be part of multiple federations, in the case of Namespaced Federation.

Source

Introducing the New Docker Hub

Today, we’re excited to announce that Docker Store and Docker Cloud are now part of Docker Hub, providing a single experience for finding, storing and sharing container images. This means that:

  • Docker Certified and Verified Publisher Images are now available for discovery and download on Docker Hub
  • Docker Hub has a new user experience

Millions of individual users and more than a hundred thousand organizations use Docker Hub, Store and Cloud for their container content needs. We’ve designed this Docker Hub update to bring together the features that users of each product know and love the most, while addressing known Docker Hub requests around ease of use, repository and team management.

Here’s what’s new:

Repositories

  • View recently pushed tags and automated builds on your repository page
  • Pagination added to repository tags
  • Improved repository filtering when logged in on the Docker Hub home page

Organizations and Teams

  • As an organization Owner, see team permissions across all of your repositories at a glance.
  • Add existing Docker Hub users to a team via their email (if you don’t remember their Docker ID)

New Automated Builds

  • Speed up builds using Build Caching
  • Add environment variables and run tests in your builds
  • Add automated builds to existing repositories

Note: For Organizations, GitHub & BitBucket account credentials will need to be re-linked to your organization to leverage the new automated builds. Existing Automated Builds will be migrated to this new system over the next few months. Learn more

Improved Container Image Search

  • Filter by Official, Verified Publisher and Certified images, guaranteeing a level of quality in the Docker images listed by your search query
  • Filter by categories to quickly drill down to the type of image you’re looking for

Existing URLs will continue to work, and you’ll automatically be redirected where appropriate. No need to update any bookmarks.

Verified Publisher Images and Plugins

Verified Publisher Images are now available on Docker Hub. Similar to Official Images, these images have been vetted by Docker. While Docker maintains the Official Images library, Verified Publisher and Certified Images are provided by our third-party software vendors. Interested vendors can sign up at https://goto.docker.com/Partner-Program-Technology.html.

Certified Images and Plugins

Certified Images are also now available on Docker Hub. Certified Images are a special category of Verified Publisher images that pass additional Docker quality, best practice, and support requirements.

  • Tested and supported on Docker Enterprise platform by verified publishers
  • Adhere to Docker’s container best practices
  • Pass a functional API test suite
  • Complete a vulnerability scanning assessment
  • Provided by partners with a collaborative support relationship
  • Display a unique quality mark “Docker Certified”

Source

Simplifying Kubernetes with Docker Compose and Friends

Today we’re happy to announce we’re open sourcing our support for using Docker Compose on Kubernetes. We’ve had this capability in Docker Enterprise for a little while but as of today you will be able to use this on any Kubernetes cluster you choose.

Compose on Kubernetes

Why do I need Compose if I already have Kubernetes?

The Kubernetes API is really quite large. There are more than 50 first-class objects in the latest release, from Pods and Deployments to ValidatingWebhookConfiguration and ResourceQuota. This can lead to a verbosity in configuration, which then needs to be managed by you, the developer. Let’s look at a concrete example of that.

The Sock Shop is the canonical example of a microservices application. It consists of multiple services using different technologies and backends, all packaged up as Docker images. It also provides example configurations using different tools, including both Compose and raw Kubernetes configuration. Let’s have a look at the relative sizes of those configurations:

$ git clone https://github.com/microservices-demo/microservices-demo.git
$ cd deployment/kubernetes/manifests
$ (Get-ChildItem -Recurse -File | Get-Content | Measure-Object -line).Lines
908
$ cd ../../docker-compose
$ (Get-Content docker-compose.yml | Measure-Object -line).Lines
174

Describing the exact same multi-service application using just the raw Kubernetes objects takes more than 5 times the amount of configuration that Compose requires. That’s not just an upfront cost to author – it’s also an ongoing cost to maintain. The Kubernetes API is amazingly general purpose – it exposes low-level primitives for building the full range of distributed systems. Compose meanwhile isn’t an API but a high-level tool focused on developer productivity. That’s why combining them together makes sense. For the common case of a set of interconnected web services, Compose provides an abstraction that simplifies Kubernetes configuration. For everything else you can still drop down to the raw Kubernetes API primitives. Let’s see all that in action.

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.

To install the controller manually on any Kubernetes cluster, see the full documentation for the current installation instructions.

Next let’s write a simple Compose file:

version: "3.7"
services:
  web:
    image: dockerdemos/lab-web
    ports:
    - "33000:80"
  words:
    image: dockerdemos/lab-words
    deploy:
      replicas: 3
      endpoint_mode: dnsrr
  db:
    image: dockerdemos/lab-db

We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:

$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running…
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running

We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:

$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/db-85849797f6-bhpm8 1/1 Running 0 57s
pod/web-7974f485b7-j7nvt 1/1 Running 0 57s
pod/words-8fd6c974-44r4s 1/1 Running 0 57s
pod/words-8fd6c974-7c59p 1/1 Running 0 57s
pod/words-8fd6c974-zclh5 1/1 Running 0 57s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db ClusterIP None <none> 55555/TCP 57s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d
service/web ClusterIP None <none> 55555/TCP 57s
service/web-published LoadBalancer 10.102.236.49 localhost 33000:31910/TCP 57s
service/words ClusterIP None <none> 55555/TCP 57s

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/db 1 1 1 1 57s
deployment.apps/web 1 1 1 1 57s
deployment.apps/words 3 3 3 3 57s

NAME DESIRED CURRENT READY AGE
replicaset.apps/db-85849797f6 1 1 1 57s
replicaset.apps/web-7974f485b7 1 1 1 57s
replicaset.apps/words-8fd6c974 3 3 3 57s

It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:

$ kubectl get stack
NAME STATUS PUBLISHED PORTS PODS AGE
words Running 33000 5/5 4m

Integration with other Kubernetes tools

Because Stack is now a native Kubernetes object, you can work with it using other Kubernetes tools. As an example, save the following as `stack.yaml`:

kind: Stack
apiVersion: compose.docker.com/v1beta2
metadata:
  name: hello
spec:
  services:
  - name: hello
    image: garethr/skaffold-example
    ports:
    - mode: ingress
      target: 5678
      published: 5678
      protocol: tcp

You can use a tool like Skaffold to have the image automatically rebuild and the Stack automatically redeployed whenever you change any of the details of your application. This makes for a great local inner-loop development experience. The following `skaffold.yaml` configuration file is all you need.

apiVersion: skaffold/v1alpha5
kind: Config
build:
  tagPolicy:
    sha256: {}
  artifacts:
  - image: garethr/skaffold-example
  local:
    useBuildkit: true
deploy:
  kubectl:
    manifests:
    - stack.yaml

The future

We already have some thoughts about a Helm plugin to make describing your application with Compose and deploying with Helm as easy as possible. We have lots of other ideas for helping to simplify the developer experience of working with Kubernetes too, without losing any of the power of the platform. We also want to work with the wider Cloud Native community, so if you have ideas and suggestions please let us know.

Kubernetes is designed to be extended, and we hope you like what we’ve been able to release today. If you’re one of the millions of Compose users you can now more easily move to and manage your applications on Kubernetes. If you’re a Kubernetes user struggling with too much low-level configuration then give Compose a try. Let us know in the comments what you think, and head over to GitHub to try things out and even open your first PR:

Source

Docker App and CNAB – Docker Blog

Docker App is a new tool we spoke briefly about back at DockerCon US 2018. We’ve been working on `docker-app` to make container applications simpler to share and easier to manage across different teams and between different environments, and we open sourced it so you can already download Docker App from GitHub at https://github.com/docker/app.

In talking to others about problems they’ve experienced sharing and collaborating on the broad area we call “applications” we came to a realisation: it’s a more general problem that others have been working on too. That’s why we’re happy to collaborate with Microsoft on the new Cloud Native Application Bundle (CNAB) specification.

Multi-Service Distributed Applications

Today’s cloud native applications typically use different technologies, each with their own toolchain. Maybe you’re using ARM templates and Helm charts, or CloudFormation and Compose, or Terraform and Ansible. There is no single solution in the market for defining and packaging these multi-service, multi-format distributed applications.

CNAB is an open source, cloud-agnostic specification for packaging and running distributed applications that aims to solve some of these problems. CNAB unifies the management of multi-service, distributed applications across different toolchains into a single all-in-one packaging format.

The draft specification is available at cnab.io and we’re actively looking both for folks interested in contributing to the spec itself, and to people interested in building tools around the specification. The latest release of Docker App is one such tool that implements the current CNAB spec. That means it can be used to both build CNAB bundles for Compose (which can then be used with any other CNAB client), and also to install, upgrade and uninstall any other CNAB bundle.

Sharing CNAB bundles on Docker Hub

One of the limitations of standalone Compose files is that they cannot be shared on Docker Hub or Docker Trusted Registry. Docker App solves this issue too. Here’s a simple Docker application which launches a very simple Prometheus stack:

version: 0.1.0
name: monitoring
description: A basic prometheus stack
maintainers:
- name: Gareth Rushgrove
  email: garethr@docker.com

---
version: '3.7'

services:
  prometheus:
    image: prom/prometheus:${versions.prometheus}
    ports:
    - ${ports.prometheus}:9090

  alertmanager:
    image: prom/alertmanager:${versions.alertmanager}
    ports:
    - ${ports.alertmanager}:9093

---
ports:
  prometheus: 9090
  alertmanager: 9093
versions:
  prometheus: latest
  alertmanager: latest

With that saved as `monitoring.dockerapp` we can now build a CNAB and share that on Docker Hub.

$ docker-app push --namespace <your-namespace>

Now on another machine we can still interact with the shared application. For instance let’s use the `inspect` command to get information about our application:

$ docker-app inspect <your-namespace>/monitoring:0.1.0
monitoring 0.1.0

Maintained by: Gareth Rushgrove <garethr@docker.com>

A basic prometheus stack

Services (2) Replicas Ports Image
———— ——– —– —–
prometheus 1 9090 prom/prometheus:latest
alertmanager 1 9093 prom/alertmanager:latest

Parameters (4) Value
————– —–
ports.alertmanager 9093
ports.prometheus 9090
versions.alertmanager latest
versions.prometheus latest

All the information from the Compose file is stored with the CNAB on Docker Hub, and if you notice, it’s also parameterized, so values can be substituted at runtime to fit the deployment requirements. We can install the application directly from Docker Hub as well:

docker-app install <your-namespace>/monitoring:0.1.0 --set ports.alertmanager=9095

Installing a Helm chart using Docker App

One question that has come up in the conversations we’ve had so far is how `docker-app` and now CNAB relates to Helm charts. The good news is that they all work great together! Here is an example using `docker-app` to install a CNAB bundle that packages a Helm chart. The following example uses the `hellohelm` example from the CNAB example bundles.

$ docker-app install -c local bundle.json
Do install for hellohelm
helm install --namespace hellohelm -n hellohelm /cnab/app/charts/alpine
NAME: hellohelm
LAST DEPLOYED: Wed Nov 28 13:58:22 2018
NAMESPACE: hellohelm
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod
NAME AGE
hellohelm-alpine 0s

Next steps

If you’re interested in the technical details of the CNAB specification, either to see how it works under the hood or to maybe get involved in the specification work or building tools against it, you can find the spec at cnab.io.

If you’d like to get started building applications with Docker App you can download the latest release from github.com/docker/app and check out some of the examples provided in the repository.
Source

Heptio Contour and Heptio Gimbal on Stage at KubeCon NA

It’s been an exciting eight months since launching Heptio Gimbal in partnership with Actapio and Yahoo Japan Corporation ahead of KubeCon EU 2018. We created Heptio Contour and Heptio Gimbal as a complementary pair of open source projects to enable organizations to unify and manage internet traffic in hybrid cloud environments.

Actapio and Yahoo Japan Corporation were critical early design partners and we were keen to consult with other Heptio customers as well as the larger Kubernetes community on how ingress could be improved. What we consistently heard was that people are struggling to manage ingress traffic in a multi-team and multi-cluster world. Notably, several of our customers had production outages due to teams creating conflicting routing rules with other teams.

Based on that feedback, we released Heptio Contour 0.6 in September, which introduced the IngressRoute CRD, a novel way of safely managing multi-team ingress. It’s been great to see community interest soar around our design and implementation, which models Kubernetes Ingress on the delegation model of DNS. In particular, the ability to do instantaneous blue-green deployments of Ingress rules is a great feature that has come out of this work.

It’s important to recognize that the success of Heptio Contour and Heptio Gimbal wouldn’t be possible without building on Envoy proxy. We couldn’t be happier with Envoy’s recent graduation from the CNCF incubation process, joining Kubernetes and Prometheus as top-level CNCF projects.

At KubeCon NA next week, we’re excited to tell you more about these projects and Actapio & Yahoo Japan will be presenting on their production use of Heptio Gimbal. Read on for a complete list of related talks!

If you have any questions or are interested in learning more, reach us via the #contour and #gimbal channels on the Kubernetes community Slack or follow us on Twitter.

Source

Setting Up a Docker Registry with JFrog Artifactory and Rancher

For any team using containers – whether in development, test, or production – an enterprise-grade registry is a non-negotiable requirement. JFrog Artifactory is much beloved by Java developers, and it’s easy to use as a Docker registry as well. To make it even easier, we’ve put together a short walkthrough for setting up Artifactory in Rancher.

Before you start

For this article, we’ve assumed that you already have a Rancher installation up and running (if not, check out our Quick Start guide), and will be working with either Artifactory Pro or Artifactory Enterprise. Choosing the right version of Artifactory depends on your development needs. If your main development needs include building with Maven package types, then Artifactory open source may be suitable. However, if you build using Docker, Chef Cookbooks, NuGet, PyPI, RubyGems, and other package formats, then you’ll want to consider Artifactory Pro. Moreover, if you have a globally distributed development team with HA and DR needs, you’ll want to consider Artifactory Enterprise. JFrog provides a detailed matrix with the differences between the versions of Artifactory.

There are several values you’ll need to select in order to set Artifactory up as a Docker registry, such as a public name or public port. In this article, we refer to them as variables; just substitute the values you choose for the variables throughout this post. To deploy Artifactory, you’ll first need to create (or already have) a wildcard certificate imported into Rancher for "*.$public_name". You’ll also need to create DNS entries pointing to the IP address of artifactory-lb, the load balancer for the Artifactory high availability architecture. Artifactory will be reached via $publish_schema://$public_name:$public_port, while the Docker registry will be reachable at $publish_schema://$docker_repo_name.$public_name:$public_port.
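
As a concrete illustration, if $public_name were artifactory.example.com and the artifactory-lb load balancer answered at 203.0.113.10 (both values made up for this sketch), the DNS entries would look roughly like:

artifactory.example.com.    IN  A  203.0.113.10
*.artifactory.example.com.  IN  A  203.0.113.10

The wildcard record is what lets each Docker repository resolve as its own subdomain, matching the "*.$public_name" certificate mentioned above.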

Installing Artifactory

While you can choose to install Artifactory on your own with the documented instructions, you also have the option of using the Rancher catalog. The Rancher community has recently contributed a template for Artifactory, which deploys the whole package: the Artifactory server, its reverse proxy, and a Rancher load balancer.

**A note on reverse proxies:** To use Artifactory as a Docker registry, a reverse proxy is required. This reverse proxy is automatically configured using the Rancher catalog item. However, if you need to apply a custom nginx configuration, you can do so by upgrading the artifactory-rp container in Rancher.

Note that installing Artifactory is a separate task from setting up Artifactory to serve as a Docker registry, and from connecting that Docker registry to Rancher (we’ll cover how to do these things as well). To launch the Artifactory template, navigate to the community catalog in Rancher. Choose “Pro” as the Artifactory version to launch, and set parameters for schema, name, and port.

Once the package is deployed, the service is accessible through $publish_schema://$public_name:$public_port.

Configure Artifactory

At this point, we’ll need to do a bit more configuration with
Artifactory to complete the setup. Access the Artifactory server using
the path above. The next step will be to configure the reverse proxy and
to enable Docker image registry integration. To configure the reverse
proxy, set the following parameters:

  • Internal hostname: artifactory
  • Internal port: 8081
  • Internal context: artifactory
  • Public server name: $public_name
  • Public context path: [leave blank]
  • http port: $public_port
  • Docker reverse proxy settings: Sub Domain

Next, create a local Docker repository, making sure to select Docker as the package type. Verify that the registry name is correct; it should be formatted as $docker_repo_name.$public_name. Test that the registry is working by logging into it:

# docker login $publish_schema://$docker_repo_name.$public_name
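
If the login succeeds, a quick end-to-end check is to tag a small image with the registry name and push it (the alpine image and tag here are just an example):

# docker pull alpine:3.8
# docker tag alpine:3.8 $docker_repo_name.$public_name/alpine:3.8
# docker push $docker_repo_name.$public_name/alpine:3.8

If the push completes, the image should show up in the Docker repository you created in Artifactory.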

Add Artifactory into Rancher

Now that Artifactory is all set up, it’s time to add the registry to Rancher itself, so any application built and managed in Rancher can pull images from it. On the top navigation bar, visit Infrastructure, then select Registries from the drop-down menu. On the resulting screen, choose “Add Registry”, then select the “Custom” option. All you’ll need to do is enter the address for your Artifactory Docker registry, along with the relevant credentials. Once it’s been added, you should see it show up in your list of recognized registries (which appears after visiting Infrastructure -> Registries on the top navigation bar). With that, you should be all set to use Artifactory as a Docker registry within Rancher!

Raul is a DevOps Lead at Rancher Labs.

Source

New Contributor Workshop Shanghai – Kubernetes

Authors: Josh Berkus (Red Hat), Yang Li (The Plant), Puja Abbassi (Giant Swarm), XiangPeng Zhao (ZTE)

Kubecon Shanghai New Contributor Summit attendees. Photo by Jerry Zhang

We recently completed our first New Contributor Summit in China, at the first KubeCon in China. It was very exciting to see all of the Chinese and Asian developers (plus a few folks from around the world) interested in becoming contributors. Over the course of a long day, they learned how, why, and where to contribute to Kubernetes, created pull requests, attended a panel of current contributors, and got their CLAs signed.

This was our second New Contributor Workshop (NCW), building on the one created and led by SIG Contributor Experience members in Copenhagen. Because of the audience, it was held in both Chinese and English, taking advantage of the superb simultaneous interpretation services the CNCF sponsored. Likewise, the NCW team included both English and Chinese-speaking members of the community: Yang Li, XiangPeng Zhao, Puja Abbassi, Noah Abrahams, Tim Pepper, Zach Corleissen, Sen Lu, and Josh Berkus. In addition to presenting and helping students, the bilingual members of the team translated all of the slides into Chinese. Fifty-one students attended.

Noah Abrahams explains Kubernetes communications channels. Photo by Jerry Zhang

The NCW takes participants through the stages of contributing to Kubernetes, starting from deciding where to contribute, followed by an introduction to the SIG system and our repository structure. We also have “guest speakers” from Docs and Test Infrastructure who cover contributing in those areas. We finally wind up with some hands-on exercises in filing issues and creating and approving PRs.

Those hands-on exercises use a repository known as the contributor playground, created by SIG Contributor Experience as a place for new contributors to try out performing various actions on a Kubernetes repo. It has modified Prow and Tide automation and uses OWNERS files just like the real repositories. This lets students learn how the mechanics of contributing to our repositories work without disrupting normal development.
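
For anyone who hasn’t seen one, an OWNERS file is a short YAML file that lists who can review and approve changes in a directory; a minimal sketch (the usernames are placeholders, not actual playground maintainers) looks like:

# OWNERS
reviewers:
  - example-reviewer
approvers:
  - example-approver

Prow reads these files to decide whose /lgtm and /approve comments allow a pull request to merge, which is the mechanic the playground lets students practice safely.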

Yang Li talks about getting your PRs reviewed. Photo by Josh Berkus

Both the “Great Firewall” and the language barrier make contributing to Kubernetes from China less than straightforward. What’s more, because open source business models are not mature in China, the time employees can spend working on open source projects is limited.

Chinese engineers are eager to participate in the development of Kubernetes, but many of them don’t know where to start since Kubernetes is such a large project. With this workshop, we hope to help those who want to contribute, whether they wish to fix some bugs they have encountered, improve or localize documentation, or need to work with Kubernetes as part of their jobs. We are glad to see more and more Chinese contributors joining the community in the past few years, and we hope to see more of them in the future.

“I have been participating in the Kubernetes community for about three years,” said XiangPeng Zhao. “In the community, I notice that more and more Chinese developers are showing their interest in contributing to Kubernetes. However, it’s not easy to start contributing to such a project. I tried my best to help those who I met in the community, but I think there might still be some new contributors leaving the community due to not knowing where to get help when in trouble. Fortunately, the community initiated NCW at KubeCon Copenhagen and held a second one at KubeCon Shanghai. I was so excited to be invited by Josh Berkus to help organize this workshop. During the workshop, I met community friends in person, mentored attendees in the exercises, and so on. All of this was a memorable experience for me. I also learned a lot as a contributor who already has years of contributing experience. I wish I had attended such a workshop when I started contributing to Kubernetes years ago.”

Panel of contributors. Photo by Jerry Zhang

The workshop ended with a panel of current contributors, featuring Lucas Käldström, Janet Kuo, Da Ma, Pengfei Ni, Zefeng Wang, and Chao Xu. The panel aimed to give both new and current contributors a look behind the scenes on the day-to-day of some of the most active contributors and maintainers, both from China and around the world. Panelists talked about where to begin your contributor’s journey, but also how to interact with reviewers and maintainers. They further touched upon the main issues of contributing from China and gave attendees an outlook into exciting features they can look forward to in upcoming releases of Kubernetes.

After the workshop, Xiang Peng Zhao chatted with some attendees on WeChat and Twitter about their experiences. They were very glad to have attended the NCW and had some suggestions on improving the workshop. One attendee, Mohammad, said, “I had a great time at the workshop and learned a lot about the entire process of k8s for a contributor.” Another attendee, Jie Jia, said, “The workshop was wonderful. It systematically explained how to contribute to Kubernetes. The attendee could understand the process even if s/he knew nothing about that before. For those who were already contributors, they could also learn something new. Furthermore, I could make new friends from inside or outside of China in the workshop. It was awesome!”

SIG Contributor Experience will continue to run New Contributor Workshops at each upcoming Kubecon, including Seattle, Barcelona, and the return to Shanghai in June 2019. If you failed to get into one this year, register for one at a future Kubecon. And, when you meet an NCW attendee, make sure to welcome them to the community.

Source

Announcing the Docker Customer Innovation Awards

We are excited to announce the first annual Docker Customer Innovation Award winners at DockerCon Barcelona today! We launched the awards this year to recognize customers who stand out in their adoption of Docker Enterprise platform to drive transformation within IT and their business.

38 companies were nominated, all of whom have spoken publicly about their containerization initiatives recently, or plan to soon. From looking at so many excellent nominees, we realized there were really two different stories — so we created two award categories. In each category, we have a winner and three finalists.

Business Transformation

Customers in this category have developed company-wide initiatives aimed at transforming IT and their business in a significant way, with Docker Enterprise as a key part of it. They typically started their journey two or more years ago and have containerized multiple applications across the organization.

WINNER:

FINALISTS:

  • Bosch built a global platform that enables developers to build and deliver new software solutions and updates at digital speed.
  • MetLife modernized hundreds of traditional applications, driving 66 percent cost savings, creating a self-funding model to fuel change and innovation, and cutting new product time to market by two-thirds.

Rising Stars

Customers in this category are early in their containerization journey and have already leveraged their first project with Docker Enterprise as a catalyst to innovate their business — often creating new applications or services.

WINNER:

  • Desigual built a brand new in-store shopping experience app in less than 5 months to connect customers and associates, creating an outstanding brand and shopping experience.

FINALISTS:

  • Citizens Bank (Franklin American Mortgage) created a dedicated innovation team that sparked cultural change at a traditional mortgage company, allowing it to bring new products to market in weeks or months.
  • The Dutch Ministry of Justice evaluated Docker Enterprise as a way to accelerate application development, which helped spark an effort to modernize juvenile custodian services from whiteboards and sticky notes to a mobile app.

We want to give a big thanks to the winners and finalists, and to all of our remarkable customers who have started innovation journeys with Docker.

We’ve opened the nomination process for 2019 since we will be announcing winners at DockerCon 2019 on April 29-May 2. If you’re interested in submitting or want to nominate someone else, you can learn how here.

Source