Announcing Support for Windows Server 2019 within Docker Enterprise

Docker is pleased to announce support within the Docker Enterprise container platform for the Windows Server 2019 Long Term Servicing Channel (LTSC) release and the Server 1809 Semi-Annual Channel (SAC) release. Windows Server 2019 brings the range of improvements that debuted in the Windows Server 1709 and 1803 SAC releases into an LTSC release preferred by most customers for production use. The addition of Windows Server 1809 brings support for the latest release for customers who prefer to work with the Semi-Annual Channel. As with all supported Windows Server versions, Docker Enterprise enables Windows Server 2019 and Server 1809 to be used in a mixed cluster alongside Linux nodes.

Windows Server 2019 includes the following improvements:

  • Ingress routing
  • VIP service discovery
  • Named pipe mounting
  • Relaxed image compatibility requirements
  • Smaller base image sizes
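
For teams running mixed Windows and Linux clusters, these improvements show up most visibly at deployment time. The sketch below is illustrative only (the service name and image tag are examples, not taken from the announcement): it uses a Swarm placement constraint so that a Windows workload lands only on Windows Server 2019 nodes while Linux services keep running on the Linux nodes in the same cluster.

docker service create \
  --name iis-demo \
  --constraint 'node.platform.os == windows' \
  --publish 8080:80 \
  mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019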

Docker and Microsoft: A Rich History of Advancing Containers

Docker and Microsoft have been working together since 2014 to bring containers to Windows Server applications, along with the benefits of isolation, portability and security. Docker and Microsoft first brought container technology to Windows Server 2016 which ships with a Docker Enterprise Engine, ensuring consistency for the same Docker Compose file and CLI commands across both Linux and Windows Server environments. Recognizing that most enterprise organizations have both Windows Server and Linux applications in their environment, we followed that up in 2017 with the ability to manage mixed Windows Server and Linux clusters in the same Docker Enterprise environment with Docker Swarm, enabling support for hybrid applications and driving higher efficiencies and lower overhead for organizations. In 2018 we extended customer choice by adding support for the Semi-Annual Channel (SAC) Windows Server 1709 and 1803 releases.

Delivering Choice of Container Orchestration

Docker Enterprise 2.1 supports both Swarm and Kubernetes orchestrators interchangeably in the same cluster. Docker and Microsoft are now working together to let you deploy Windows workloads with Kubernetes while leveraging all the advanced application management and security features of Docker Enterprise. While the Kubernetes community’s work to support Windows Server 2019 is still in beta, investments made today using Docker Enterprise to containerize Windows applications with Swarm can translate to Kubernetes when that support becomes available.

Accelerating Your Legacy Windows Server Migration

Docker Enterprise’s support for Windows Server 2019 also provides customers with more options for migrating their legacy Windows Server workloads from Windows Server 2008, which is facing end-of-life, to a modern OS. The Docker Windows Server Application Migration Program represents the best and only way to containerize and secure legacy Windows Server applications while enabling software-driven business transformation. Once legacy applications and their dependencies are containerized with the Docker Enterprise container platform, they can be moved to Windows Server 2019 without code changes, saving millions in development costs. Docker Enterprise is the only container platform to support Windows group Managed Service Accounts (gMSAs) – a crucial component in containerizing applications that require the ability to work with external services via Integrated Windows Authentication.

Next Steps

  • Read more about Getting started with Windows containers
  • Try the new Windows container experience today, using a Windows Server 2019 machine or Windows 10 with Microsoft’s latest 1809 update.
  • All the Docker labs for Windows containers are being updated to use Windows Server 2019 – you can follow along with the labs, and see how Docker containers on Windows continue to advance.


Source

Deploying an Elasticsearch Cluster using Rancher Catalog

Elasticsearch is a Lucene-based search engine developed by the
open-source vendor, elastic. With principal features like scalability,
resiliency, and top-notch performance, it has overtaken Apache Solr,
one of its closest competitors. Nowadays, Elasticsearch is almost
everywhere a search engine is involved: it’s the E of the well-known
ELK stack, which makes it straightforward for your project to process
analytics (the L stands for Logstash, which is used to process data
like logs, streams, and metrics; the K stands for Kibana, a data
visualization platform – both projects also managed by elastic).
Installing Elasticsearch from the Rancher Catalog

Before we get started, let me tell you a bit about the Rancher catalog.
The Rancher catalog uses rancher-compose and docker-compose to ease the
installation of whatever tool you need. Using the Rancher catalog, you
can deploy everything from a simple app like Ghost (a blogging platform)
to a full CI/CD stack like GoCD. I’ll assume here that you have a fully
working Rancher platform (a server and several nodes). If not, head over
to the Rancher documentation before going any further in this article
and set up your environment. My environment looks like this (Figure 1,
built using docker-machine on my laptop):

Figure 1: Elasticsearch Environment

Accessing the Rancher catalog is simple:

  • On the top menu bar of your Rancher UI, click on Catalog, then
    All.
  • Using the search box on the upper right, search for Elasticsearch.
  • You’ll see two versions of Elasticsearch are available (Figure 2).
    Both work fine, but for this article, we’ll stick to the version on
    the left.
  • Click on View Details. You will need to fill in some simple
    information (Figure 3).
  • To fire up the installation, click Launch.


Figure 2: Elasticsearch Options in the Rancher Catalog


Figure 3: Elasticsearch Data Form

You should now see something similar to the image below (Figure 4).
You can find more details about what Rancher is doing by clicking on the
name of your stack (in my case, I’ve installed Elasticsearch, and named
my stack LocalEs).

Figure 4: LocalEs app Naming Convention

Expanding our view of the stack (Figure 5), we can see that deploying
an Elasticsearch cluster using the Rancher catalog template has
included:

  • a Master node
  • a Data node
  • a Client node
  • kopf, an
    Elasticsearch management web app


Figure 5: Elasticsearch Cluster Stack View

Each of these nodes (except for kopf) comes with sidekick containers,
which in this case are configuration and data volume containers. Your
Elasticsearch cluster will be fully functional when all the entries are
“active”. If you want to see how they are all connected to each other,
take a look at the graph view (available from the drop-down menu
on the right hand corner in Figure 6).

Figure 6: Elasticsearch Cluster Graph View

Now, we can visualize how all these containers are mapped
within the Rancher platform (Figure 7).

Figure 7: Elasticsearch Visual Map

That’s it, our Elasticsearch cluster is now up and running. Let’s
see how our cluster behaves!

Cluster Management

Depending on your Rancher setup, kopf is deployed on one of your
Rancher nodes. You can access the application using
http://[your kopf rancher host]. Here’s an example (Figure 8):

Figure 8: kopf node identification

As you can see, everything seems to be fine, as
kopf shows a green
top bar. Indeed, our cluster is running without any data stored, so
there’s no need for resiliency at this point. Let’s see how it goes if
we manually create an index called ‘ranchercatalog’, with three shards
and two replicas. Using curl, your query would be something like this:

curl -XPUT 'http://[your kopf rancher host]/es/ranchercatalog/' -d '{
  "settings" : {
    "index" : {
      "number_of_shards" : 3,
      "number_of_replicas" : 2
    }
  }
}'

Elasticsearch should reply with {"acknowledged":true}. Shards are
related to data storage, and replicas to resiliency. This means our
index will have its data stored using three shards, and each shard
needs two additional replica copies. Now that our index has been
successfully created, let’s take a look at kopf.

Figure 9: kopf Status View

As you can see in Figure 9, the top bar is now yellow, which indicates
there may be something wrong with our Elasticsearch cluster. We can also
see in the middle left of the page a warning sign (red triangle in Fig.
9) saying “six unassigned shards.” Remember when we created the
ranchercatalog index, we specified:

  • Three shards
  • Two replicas
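
Those two settings explain the warning: with only one data node, the
six replica shards have nowhere to be allocated. If you prefer the
command line to kopf, the same information is available from the
cluster health API; the sketch below reuses the kopf proxy path from
the earlier curl command:

curl 'http://[your kopf rancher host]/es/_cluster/health?pretty'

Look for "status" : "yellow" and "unassigned_shards" : 6 in the response.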

By default, the Elasticsearch Rancher catalog item deploys only 1 data
node, so we need two more data nodes. Adding nodes can be easily done
using the Rancher scale option. The results are shown in Figure 10.

Figure 10: Adding Nodes using Rancher Scale Option

To scale your data nodes, let’s go again to Applications, then
to Stack. Click on your stack, and then on
elasticsearch-datanodes. You should have something like what is
shown in Figure 10. Click twice on the + of the scale option and
let Rancher do the work. You should see data nodes popping up one after
another until you finally have something like what you see in Figure 11.

Figure 11: Node View to Verify Additions

Let’s check if this is enough to bring back the beautiful green bar
to kopf. Figure 12 provides the proof.

Figure 12: Corrected Nodes Verification

Voilà! We now have a perfect and fully functional Elasticsearch
cluster. In my next post, we’ll see how to populate this index and do
some cool queries!

Rachid is a former virtualization consultant and instructor. After a
successful experience building and training the ops team of the French
registry AFNIC, he is now the CIO of a worldwide recognized CRM and
ecommerce agency.

Source

Microservices and Containers | Microservices Orchestration

Rancher Labs has been developing open source projects for about two
years now. We have a ton of GitHub repositories under our umbrella, and
their number keeps growing. The number of external contributions to our
projects keeps growing, too; Rancher has become more well-known over the
past year, and structural changes to our code base have made it easier
to contribute. So what are these structural changes? I would highlight
three major ones:

  1. Moving key Rancher features into separate microservices projects
    (Metadata, DNS, Rancher compose, etc.)
  2. Dockerizing microservices orchestration
  3. Cataloging Dockerized application templates, and enabling them for
    deployment through the Rancher catalog

Item 2 acts as a bridge from 1 to 3. In this article, I will go over
each item in more detail.

Moving key Rancher features to microservices

It is well-known that monolithic systems come with certain
disadvantages:

  • Their code bases are not easy to understand and modify
  • Their features are hard to test in isolation
  • They have longer test and release cycles.

But even if your code base is pluggable and well-structured, the last
two challenges noted above persist. Moving code into microservices helps
to overcome these challenges, and creates a lower threshold for external
committers: if you are new to open source development and willing to
start contributing, smaller projects are simply easier to grasp. In
addition, if you look at the pull request history for the Rancher
External DNS project, you might see something interesting: the majority
of commits came from people with knowledge of different service
providers. From a contributor’s point of view, bringing in specific
service provider expertise reduces the pressure associated with making
initial contributions to the project. And of course, the project
benefits from getting all these provider extensions.

Dockerizing microservices

Let’s say as a contributor, you’ve created this new cool
DNSimple provider plug-in. It was released with an external-dns service,
and now you want to try it in Rancher. To adopt the changes, you
don’t have to wait for the next Rancher release, nor do you have to
change the Rancher code base. All you have to do is:

  • fetch the last released image from the external-dns Docker Hub repo
  • create a docker-compose template with your service’s deployment
    details (see the sketch after this list)
  • register your image in the Rancher catalog repo (more on how to
    deploy it from the Rancher catalog in the next section).
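
As a rough illustration of the docker-compose step above, a
catalog-style template might look like the sketch below. Treat it as a
starting point only: the image tag and the environment variable names
are assumptions, so check the external-dns project’s README for the
real configuration keys.

external-dns:
  image: rancher/external-dns:latest   # published external-dns image on Docker Hub
  environment:
    # provider credentials; the variable names below are assumed, not verified
    DNSIMPLE_TOKEN: "<your-dnsimple-api-token>"
    DNSIMPLE_EMAIL: "<your-account-email>"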

Deploying the service through Rancher catalog

At Rancher, we want to provide an easy way for users to describe and
deploy their Docker-based applications. The Rancher catalog makes this
possible. By selecting an entry from the catalog, and answering several
questions, you can launch your service through the Rancher platform.
All the services are grouped by category, so it is easy to search for a
specific functionality.

Pick your newly added DNSimple service, fill in the fields and hit
Launch.
That’s it! Your service gets deployed in Rancher, and can be discovered
and used by any other application. The catalog enables easy upgrades for
microservices. Once the new service image is available and its template
is published to the catalog, Rancher will get a notification, and
your service can be upgraded to the latest version in a rolling fashion.
The beauty of this is that you don’t have to update or upgrade Rancher
when a new version of a microservice gets released. Besides providing a
simple way of defining, deploying and upgrading microservices, the
Rancher Catalog acts as a shared template library. If you are interested
in building an Elasticsearch microservice, using GlusterFS, or
dockerizing DroneCI, check out their corresponding catalog items. And if
you want to share your application, you can submit it to our Community
catalog repo.

How microservices benefit Rancher as an orchestration platform

We’ve seen the single service implementation and deployment flow; let’s
look at the bigger picture now. Any container orchestration platform
should be easily extendable, especially when it comes to implementing a
specific service provider extension. Building and deploying this
extension shouldn’t be tightly coupled to the core platform, either.
Moving the code out to its own microservice repo, dockerizing the
service, and allowing it to be deployed using the catalog makes
everything easier to maintain and support (as pictured below).
We are planning to move the rest of Rancher’s key services to their own
microservices. This will allow users to integrate the system service
plugins of their choice with just a couple of clicks.

Moving our key services – Metadata, Internal DNS – into dockerized
microservices written in Go has helped with the release management, and
driven more external commits. We’ve taken things one step further and
developed an application catalog where users can share their
applications’ templates in docker-compose format. This has taught us
more about DevOps best practices from within our community, made us
more familiar with their use cases, and helped us improve our
microservices implementations. Working on an open source project is
always a two-way street – making your code easier to understand and
manage helps the community contribute to and enhance the project. We
have an awesome community, and appreciate every single contribution.
We will continue improving contributors’ experience and learning from
them.

Source

The History of Kubernetes & the Community Behind It

It is remarkable to me to return to Portland and OSCON to stand on stage with members of the Kubernetes community and accept this award for Most Impactful Open Source Project. It was scarcely three years ago, that on this very same stage we declared Kubernetes 1.0 and the project was added to the newly formed Cloud Native Computing Foundation.

To think about how far we have come in that short period of time and to see the ways in which this project has shaped the cloud computing landscape is nothing short of amazing. The success is a testament to the power and contributions of this amazing open source community. And the daily passion and quality contributions of our endlessly engaged, world-wide community is nothing short of humbling.

Congratulations @kubernetesio for winning the “most impact” award at #OSCON I’m so proud to be a part of this amazing community! @CloudNativeFdn pic.twitter.com/5sRUYyefAK

— Jaice Singer DuMars (@jaydumars) July 19, 2018

👏 congrats @kubernetesio community on winning the #oscon Most Impact Award, we are proud of you! pic.twitter.com/5ezDphi6J6

— CNCF (@CloudNativeFdn) July 19, 2018

At a meetup in Portland this week, I had a chance to tell the story of Kubernetes’ past, its present and some thoughts about its future, so I thought I would write down some pieces of what I said for those of you who couldn’t be there in person.

It all began in the fall of 2013, with three of us: Craig McLuckie, Joe Beda and I were working on public cloud infrastructure. If you cast your mind back to the world of cloud in 2013, it was a vastly different place than it is today. Imperative bash scripts were only just starting to give way to declarative configuration of IaaS. Netflix was popularizing the idea of immutable infrastructure but doing it with heavy-weight full VM images. The notion of orchestration, and certainly container orchestration existed in a few internet scale companies, but not in cloud and certainly not in the enterprise.

Docker changed all of that. By popularizing a lightweight container runtime and providing a simple way to package, distribute and deploy applications onto a machine, the Docker tooling and experience popularized a brand-new cloud native approach to application packaging and maintenance. Were it not for Docker’s shifting of the cloud developer’s perspective, Kubernetes simply would not exist.

I think that it was Joe who first suggested that we look at Docker in the summer of 2013, when Craig, Joe and I were all thinking about how we could bring a cloud native application experience to a broader audience. And for all three of us, the implications of this new tool were immediately obvious. We knew it was a critical component in the development of cloud native infrastructure.

But as we thought about it, it was equally obvious that Docker, with its focus on a single machine, was not the complete solution. While Docker was great at building and packaging individual containers and running them on individual machines, there was a clear need for an orchestrator that could deploy and manage large numbers of containers across a fleet of machines.

As we thought about it some more, it became increasingly obvious to Joe, Craig and me that not only was such an orchestrator necessary, it was also inevitable, and it was equally inevitable that this orchestrator would be open source. This realization crystallized for us in the late fall of 2013, and thus began the rapid development of first a prototype, and then the system that would eventually become known as Kubernetes. As 2013 turned into 2014 we were lucky to be joined by some incredibly talented developers including Ville Aikas, Tim Hockin, Dawn Chen, Brian Grant and Daniel Smith.

Happy to see k8s team members winning the “most impact” award. #oscon pic.twitter.com/D6mSIiDvsU

— Bridget Kromhout (@bridgetkromhout) July 19, 2018

Kubernetes won the O’Reilly Most Impact Award. Thanks to our contributors and users! pic.twitter.com/T6Co1wpsAh

— Brian Grant (@bgrant0607) July 19, 2018

The initial goal of this small team was to develop a “minimally viable orchestrator.” From experience we knew that the basic feature set for such an orchestrator was:

  • Replication to deploy multiple instances of an application
  • Load balancing and service discovery to route traffic to these replicated containers
  • Basic health checking and repair to ensure a self-healing system
  • Scheduling to group many machines into a single pool and distribute work to them

Along the way, we also spent a significant chunk of our time convincing executive leadership that open sourcing this project was a good idea. I’m endlessly grateful to Craig for writing numerous whitepapers and to Eric Brewer, for the early and vocal support that he lent us to ensure that Kubernetes could see the light of day.

In June of 2014 when Kubernetes was released to the world, the list above was the sum total of its basic feature set. As an early stage open source community, we then spent a year building, expanding, polishing and fixing this initial minimally viable orchestrator into the product that we released as a 1.0 in OSCON in 2015. We were very lucky to be joined early on by the very capable OpenShift team which lent significant engineering and real world enterprise expertise to the project. Without their perspective and contributions, I don’t think we would be standing here today.

Three years later, the Kubernetes community has grown exponentially, and Kubernetes has become synonymous with cloud native container orchestration. There are more than 1700 people who have contributed to Kubernetes, there are more than 500 Kubernetes meetups worldwide and more than 42000 users have joined the #kubernetes-dev channel. What’s more, the community that we have built works successfully across geographic, language and corporate boundaries. It is a truly open, engaged and collaborative community, and is, in and of itself, an amazing achievement. Many thanks to everyone who has helped make it what it is today. Kubernetes is a commodity in the public cloud because of you.

But if Kubernetes is a commodity, then what is the future? Certainly, there are an endless array of tweaks, adjustments and improvements to the core codebase that will occupy us for years to come, but the true future of Kubernetes are the applications and experiences that are being built on top of this new, ubiquitous platform.

Kubernetes has dramatically reduced the complexity to build new developer experiences, and a myriad of new experiences have been developed or are in the works that provide simplified or targeted developer experiences like Functions-as-a-Service, on top of core Kubernetes-as-a-Service.

The Kubernetes cluster itself is being extended with custom resource definitions (which I first described to Kelsey Hightower on a walk from OSCON to a nearby restaurant in 2015); these new resources allow cluster operators to enable new plugin functionality that extends and enhances the APIs that their users have access to.

By embedding core functionality like logging and monitoring in the cluster itself and enabling developers to take advantage of such services simply by deploying their application into the cluster, Kubernetes has reduced the learning necessary for developers to build scalable reliable applications.

Finally, Kubernetes has provided a new, common vocabulary for expressing the patterns and paradigms of distributed system development. This common vocabulary means that we can more easily describe and discuss the common ways in which our distributed systems are built, and furthermore we can build standardized, re-usable implementations of such systems. The net effect of this is the development of higher quality, reliable distributed systems, more quickly.

It’s truly amazing to see how far Kubernetes has come, from a rough idea in the minds of three people in Seattle to a phenomenon that has redirected the way we think about cloud native development across the world. It has been an amazing journey, but what’s truly amazing to me, is that I think we’re only just now scratching the surface of the impact that Kubernetes will have. Thank you to everyone who has enabled us to get this far, and thanks to everyone who will take us further.

Brendan

Source

Getting Microservices Deployments on Kubernetes with Rancher

Most people running Docker in production use it as a way to build and
move deployment artifacts. However, their deployment model is still very
monolithic or comprises a few large services. The major stumbling
block in the way of using true containerized microservices is the lack
of clarity on how to manage and orchestrate containerized workloads at
scale. Today we are going to talk about building a Kubernetes based
microservice deployment. Kubernetes is the open
source successor to Google’s long running Borg project, which has been
running such workloads at scale for about a decade. While there are
still some rough edges, Kubernetes represents one of the most mature
container orchestration systems available today.

Launching a Kubernetes Environment

You can take a look at the Kubernetes Documentation for instructions on
how to launch a Kubernetes cluster in various environments. In this
post, I’m going to focus on launching Rancher’s distribution of
Kubernetes as an environment within the Rancher container management
platform. We’ll start by setting up a Rancher server as described here
and
select Environment/Default > Manage Environments > Add Environment.
Select Kubernetes from Container Orchestration options and create your
environment. Now select Infrastructure > Hosts > Add Host and launch
a few nodes for Kubernetes to run on. Note: we recommend adding at least
3 hosts, which will run the Rancher agent container. Once the hosts come
up, you should see the following screen, and in a few minutes your
cluster should be up and ready.

There are lots of advantages to running Kubernetes within Rancher.
Mostly, it just makes the deployment and management dramatically easier
for both users and the IT team. Rancher automatically implements an HA
implementation of etcd for the Kubernetes backend, and deploys all of
the necessary services onto any hosts you add into this environment. It
sets up access controls, and can tie into existing LDAP and AD
infrastructure easily. Rancher also automatically implements container
networking and load balancing services for Kubernetes. Using Rancher,
you should have an HA implementation of Kubernetes in a few minutes.


Namespaces

Now that we have our cluster running, let’s jump in and start going
through some basic Kubernetes resources. You can access the Kubernetes
cluster either directly through the kubectl CLI, or through the Rancher
UI. Rancher’s access management layer controls who can access the
cluster, so you’ll need to generate an API key from the Rancher UI
before accessing the CLI.

The first Kubernetes resource we are going to look at is namespaces.
Within a given namespace, all resources must have unique names. In
addition, labels used to link resources are scoped to a single
namespace. This is why namespaces can be very useful for creating
isolated environments on the same Kubernetes cluster. For example, you
may want to create an Alpha, Beta and Production environment for your
application so that you can test latest changes without impacting real
users. To create a namespace, copy the following text into a file called
namespace.yaml and run the kubectl create -f namespace.yaml command to
create a namespace called beta.

kind: Namespace
apiVersion: v1
metadata:
  name: beta
  labels:
    name: beta

You can also create, view and select namespaces from the Rancher UI by
using the Namespace menu on the top menu bar.

You can use the following command to set the namespace for CLI
interactions using kubectl:

$ kubectl config set-context Kubernetes --namespace=beta

To verify that the context was set correctly, use the config view
command and verify the output matches the namespace you expect.

$ kubectl config view | grep namespace
namespace: beta

Pods

Now that we have our namespaces defined, let’s start creating
resources. The first resource we are going to look at is a Pod. A group
of one or more containers is referred to by Kubernetes as a pod.
Containers in a pod are deployed, started, stopped, and replicated as a
group, and all containers in the pod run on the same host. Containers
in a pod share a network namespace and can reach each other via
localhost. Pods are the basic unit of scaling and cannot span across
hosts, hence it’s ideal to make them as close to a single workload as
possible. This will eliminate the side-effects of scaling a pod up or
down as well as ensuring we don’t create pods that are too resource
intensive for our underlying hosts.

Let’s define a very simple pod named mywebservice which has one
container in its spec named web-1-10, using the nginx container image
and exposing port 80. Add the following text into a file called
pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: mywebservice
spec:
  containers:
  - name: web-1-10
    image: nginx:1.10
    ports:
    - containerPort: 80

Run the kubectl create command to create your pod. If you set your
namespace above using the set-context command then the pods will be
created in the specified namespace. You can verify the status of your
pod by running the get pods command. Once you are done we can delete
the pod by running the kubectl delete command.

$ kubectl create -f ./pod.yaml
pod "mywebservice" created
$ kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
mywebservice   1/1       Running   0          37s
$ kubectl delete -f pod.yaml
pod "mywebservice" deleted

You should also be able to see your pod in the Rancher UI by selecting
Kubernetes > Pods from the top menu bar.


Replica Sets

Replica Sets, as the name implies, define how many replicas of each pod
will be running. They also monitor and ensure the required number of
pods are running, replacing pods that die. Note that replica sets are a
replacement for Replication Controllers – however, for most
use-cases you will not use Replica Sets directly but instead use
Deployments. Deployments wrap replica sets and add the
functionality to do rolling updates to your application.
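
You will rarely create a replica set by hand; once a deployment exists
(see the next section), you can inspect the replica set it manages with
the usual kubectl commands, for example:

$ kubectl get replicasets
$ kubectl describe replicaset <replica-set-name>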

Deployments

Deployments are a declarative mechanism to manage rolling updates of
your application. With this in mind, let’s define our first deployment
using the pod definition above. The only difference is that we take out
the name parameter, as a name for our container will be auto-generated
by the deployment. The text below shows the configuration for our
deployment; copy it to a file called deployment.yaml.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebservice-deployment
spec:
  replicas: 2 # We want two pods for this deployment
  template:
    metadata:
      labels:
        app: mywebservice
    spec:
      containers:
      - name: web-1-10
        image: nginx:1.10
        ports:
        - containerPort: 80

Launch your deployment using the kubectl create command and then verify
that the deployment is up using the get deployments command.

$ kubectl create -f ./deployment.yaml
deployment "mywebservice-deployment" created
$ kubectl get deployments
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mywebservice-deployment   2         2         2            2           7m

You can get details about your deployment using the describe deployment
command. One of the useful items output by the describe command is a set
of events. A truncated example of the output from the describe command
is shown below. Currently your deployment should have only one event
with the message: Scaled up replica set … to 2.

$ kubectl describe deployment mywebservice
Name: mywebservice-deployment
Namespace: beta
CreationTimestamp: Sat, 13 Aug 2016 06:26:44 -0400
Labels: app=mywebservice
…..
….. Scaled up replica set mywebservice-deployment-3208086093 to 2

Scaling Deployments

You can modify the scale of the deployment by updating the
deployment.yaml file from earlier to replace replicas: 2
with replicas: 3 and run the apply command shown below. If you run
the describe deployment command again you will see a second event with
the message:
Scaled up replica set mywebservice-deployment-3208086093 to 3.

$ kubectl apply -f deployment.yaml
deployment "mywebservice-deployment" configured
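
The same change can also be made imperatively, without editing the
file, using kubectl scale. This is handy for quick experiments,
although the declarative, file-based approach above keeps your
configuration in sync with what is running:

$ kubectl scale deployment mywebservice-deployment --replicas=3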

Updating Deployments

You can also use the apply command to update your application by
changing the image version. Modify the deployment.yaml file from earlier
to replace image: nginx:1.10 to image: nginx:1.11 and run the
kubectl apply command. If you run the describe deployment command again
you will see new events whose messages are shown below. You can see how
the new replica set (2303032576) was scaled up and the old replica set
(3208086093) was scaled down, in steps. The total number of pods across
both replica sets is kept roughly constant while the pods are gradually
moved from the old replica set to the new one. This allows us to run
deployments under load without service interruption.

Scaled up replica set mywebservice-deployment-2303032576 to 1
Scaled down replica set mywebservice-deployment-3208086093 to 2
Scaled up replica set mywebservice-deployment-2303032576 to 2
Scaled down replica set mywebservice-deployment-3208086093 to 1
Scaled up replica set mywebservice-deployment-2303032576 to 3
Scaled down replica set mywebservice-deployment-3208086093 to 0

If during or after the deployment you realize something is wrong and the
deployment has caused problems you can use the rollout command to undo
your deployment change. This will apply the reverse operation to the one
above and move load back to the previous version of the container.

$ kubectl rollout undo deployment/mywebservice-deployment
deployment "mywebservice-deployment" rolled back
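
Before (or after) rolling back, you can also watch a rollout’s progress
and review previous revisions with the other rollout subcommands:

$ kubectl rollout status deployment/mywebservice-deployment
$ kubectl rollout history deployment/mywebservice-deployment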

Health check

With deployments we have seen how to scale our service up and down, as
well as how to do deployments themselves. However, when running services
in production, it’s also important to have live monitoring and
replacement of service instances when they go down. Kubernetes provides
health checks to address this issue. Update the deployment.yaml file
from earlier by adding a livenessProbe configuration in the spec
section. There are three types of liveness probes, http, tcp and
container exec. The first two will check whether Kubernetes is able to
make an http or tcp connection to the specified port. The container exec
probe runs a specified command inside the container and asserts a zero
response code. In the snippet shown below, we are using the http probe
to issue a GET request to port 80 at the root URL.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebservice-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: mywebservice
    spec:
      containers:
      - name: web-1-11
        image: nginx:1.11
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1
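
For reference, the tcp and container exec probe types use the same
structure; the fragments below are illustrative only (the exec command
shown is just an example check) and would replace the httpGet block
above.

# tcpSocket variant: checks that the port accepts connections
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 1

# exec variant: runs a command in the container and expects exit code 0
livenessProbe:
  exec:
    command:
    - cat
    - /usr/share/nginx/html/index.html
  initialDelaySeconds: 30
  timeoutSeconds: 1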

If you recreate your deployment with the additional health check and run
describe deployment, you should see that Kubernetes now tells you that 3
of your replicas are unavailable. If you run describe again after the
initial delay period of 30 seconds, you will see that the replicas are
now marked as available. This is a good way to make sure that your
containers are healthy and to give your application time to come up
before Kubernetes starts routing traffic to it.

$ kubectl create -f deployment.yaml
deployment "mywebservice-deployment" created
$ kubectl describe deployment mywebservice

Replicas: 3 updated | 3 total | 0 available | 3 unavailable

Service

Now that we have a monitored, scalable deployment which can be updated
under load, it’s time to actually expose the service to real users.
Copy the following text into a file called service.yaml. Each node in
your cluster exposes a port which can route traffic to the replicas
using the Kube Proxy.

apiVersion: v1
kind: Service
metadata:
  name: mywebservice
  labels:
    run: mywebservice
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: mywebservice

With the service.yaml file, we create the service using the create
command and then look up the NodePort using the describe service command.
For example, in my service I can access the application on port 31673 on
any of my Kubernetes/Rancher agent nodes. Kubernetes will route traffic
to available nodes automatically if nodes are scaled up and down, become
unhealthy or are relaunched.

$ kubectl create -f service.yaml
service "mywebservice" created
$ kubectl describe service mywebservice | grep NodePort
NodePort: http 31673/TCP
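
You can verify this from outside the cluster by pointing curl (or a
browser) at any agent node on that port; the node address below is a
placeholder and the port will be whatever NodePort your cluster
assigned:

$ curl -I http://<agent-node-ip>:31673/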

In today’s article, we looked at some basic Kubernetes resources including
Namespaces, Pods, Deployments and Services. We looked at how to scale
our application up and down manually as well as how to perform rolling
updates of our application. Lastly, we looked at configuring services in
order to expose our application externally. In subsequent articles, we
will be looking at how to use these together to orchestrate a more
realistic deployment. We will look at the resources covered today in
more detail, including how to setup SSL/TLS termination, multi-service
deployments, service discovery and how the application would react to
failure scenarios.

Note: Part 2 of this series is now available!

Source

Cloud Native in 2019: A Look Ahead

Cloud Native has been around for a few years now, but 2018 was the year Cloud Native crossed the chasm to go truly mainstream. From the explosion in the number of projects making up the CNCF landscape, to IBM’s $34 billion purchase of Red Hat under their Hybrid Cloud division, the increasingly wide adoption of Kubernetes, capped off by CloudNativeCon in Seattle with eight thousand attendees, the past year Cloud Native became the major trend in the industry.

So then the question now is, “what will happen in 2019”?

We have seen strong indications of where the sector is heading, and how it will grow over the next year. The large surge in popularity and adoption of Cloud Native technologies such as Kubernetes has led to many new use cases. New use cases result in new challenges, which translate into new requirements and tools. If I were to pick one main theme for this year it would be security. Many of the forthcoming tools, features, products are specifically focused on addressing security concerns.

Here are a few of the trends we expect to see in the year ahead:

Cloud Native meets the Enterprise – We will see continued adoption of Kubernetes and Cloud Native tech by large enterprise companies. This will have a two-way effect. Cloud Native will continue to influence enterprise companies. They will need to adjust their culture and ways of working in order to properly leverage the technologies. From their end, enterprises will have a strong influence on shaping the future of Cloud Native. We predict a continued focus on features regarding security. An additional effect from the pressure around security and reliability will potentially lead to slowing down both speed and release cycles of some of the major projects.

The Year of Serverless (really) – We predict serverless/FaaS is finally going to live up to the hype this year. It feels like we’ve been talking about serverless for a long time already, but except for some niche use cases it hasn’t really lived up to the expectations so far. Up until recently, AWS Lambda has been pretty much the only proven, production scale option. With the rise of new Kubernetes-focused tools such as KNative and promising platforms such as OpenFaas, serverless becomes much more attractive for companies who don’t run on AWS (or are wary of vendor lock-in).

Not Well Contained – While last year it seemed that we were headed to ‘everything in a container’, more recently we’ve seen many VM/Container hybrid technologies, such as AWS Firecracker, Google’s gVisor and Kata Containers. These technologies are arising mostly for reasons around security and multi-tenancy. They offer stronger isolation while keeping performance similar to containers. We expect to see a move away from the trend of running everything as a Docker container. Thanks to standardization on Open Container Initiative specifications and the Container Runtime Interface, these new runtimes can be abstracted away from the applications and the developers writing them. We will continue to use Docker (or other OCI compliant) images for our applications; it will be the underlying runtimes which will likely change.

Managing state – The early use cases of Cloud Native focused heavily around stateless applications. While Kubernetes made persistent storage a first class citizen from early on, properly migrating stateful workloads to containers and Kubernetes was fraught with risks and new challenges. The ephemeral, dynamic and distributed nature of cloud native systems makes the prospect of running stateful services, specifically databases, on Kubernetes worrying to say the least. This led many to be content migrating their stateless applications, while avoiding moving over stateful services at all costs. More recently, we have seen a large increase in the number of open source tools, products and services for managing state in the cloud native world. Combined with a better understanding by the community, 2019 looks to be the year when we are comfortable running our critical stateful services on Kubernetes.

We’re excited to see what kind of surprises this year brings as well. Will Kubernetes maintain its dominance? Could a new technology come along to take away its steam? Unlikely, though not impossible as there has been considerable criticism of its complexity, with some more lightweight options popping up. Will unikernels finally see more adoption as we continue to focus on security and resource optimisation? Will the CNCF landscape continue exploding in size, or will we finally see a paring down of tooling and more standardization?

No matter what surprises may yet pop up, we’re confident in saying Cloud Native will continue to be the major trend in IT over the next year…and many more, as CN increasingly influences how everyday enterprises run their technology for years to come.

Source

APIServer dry-run and kubectl diff

Author: Antoine Pelisse (Google Cloud, @apelisse)

Declarative configuration management, also known as configuration-as-code, is
one of the key strengths of Kubernetes. It allows users to commit the desired state of
the cluster, and to keep track of the different versions, improve auditing and
automation through CI/CD pipelines. The Apply working-group
is working on fixing some of the gaps, and is happy to announce that Kubernetes
1.13 promoted server-side dry-run and kubectl diff to beta. These
two features are big improvements for the Kubernetes declarative model.

Challenges

A few pieces are still missing in order to have a seamless declarative
experience with Kubernetes, and we tried to address some of these:

  • While compilers and linters do a good job to detect errors in pull-requests
    for code, a good validation is missing for Kubernetes configuration files.
    The existing solution is to run kubectl apply --dry-run, but this runs a
    local dry-run that doesn’t talk to the server: it doesn’t have server
    validation and doesn’t go through validating admission controllers. As an
    example, Custom resource names are only validated on the server so a local
    dry-run won’t help.
  • It can be difficult to know how your object is going to be applied by the
    server for multiple reasons:

    • Defaulting will set some fields to potentially unexpected values,
    • Mutating webhooks might set fields or clobber/change some values.
    • Patch and merges can have surprising effects and result in unexpected
      objects. For example, it can be hard to know how lists are going to be
      ordered once merged.

The working group has tried to address these problems.

APIServer dry-run

APIServer dry-run was implemented to address these two problems:

  • it allows individual requests to the apiserver to be marked as “dry-run”,
  • the apiserver guarantees that dry-run requests won’t be persisted to storage,
  • the request is still processed as a typical request: the fields are
    defaulted, the object is validated, it goes through the validation admission
    chain, and through the mutating admission chain, and then the final object is
    returned to the user as it normally would, without being persisted.

While dynamic admission controllers are not supposed to have side-effects on
each request, dry-run requests are only processed if all admission controllers
explicitly announce that they don’t have any dry-run side-effects.

How to enable it

Server-side dry-run is enabled through a feature-gate. Now that the feature is
Beta in 1.13, it should be enabled by default, but still can be enabled/disabled
using kube-apiserver --feature-gates DryRun=true.

If you have dynamic admission controllers, you might have to fix them to:

  • Remove any side-effects when the dry-run parameter is specified on the webhook request,
  • Specify in the sideEffects
    field of the admissionregistration.k8s.io/v1beta1.Webhook object to indicate that the webhook doesn’t
    have side-effects on dry-run (or at all), as sketched below.
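
As an abbreviated, illustrative fragment (the names are placeholders
and fields such as caBundle are omitted), a webhook that declares
itself free of dry-run side-effects might look like this:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook
webhooks:
- name: validate.example.com
  sideEffects: None   # safe to call for dry-run requests
  clientConfig:
    service:
      namespace: default
      name: example-webhook-svc
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]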

How to use it

You can trigger the feature from kubectl by using kubectl apply
--server-dry-run, which will decorate the request with the dryRun flag
and return the object as it would have been applied, or an error if it would
have failed.
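
For example, to see the fully defaulted and admitted object for a
manifest without persisting it (the file name here is hypothetical):

kubectl apply --server-dry-run -f deployment.yaml -o yaml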

Kubectl diff

APIServer dry-run is convenient because it lets you see how the object would be
processed, but it can be hard to identify exactly what changed if the object is
big. kubectl diff does exactly what you want by showing the differences between
the current “live” object and the new “dry-run” object. It makes it very
convenient to focus on only the changes that are made to the object, how the
server has merged these and how the mutating webhooks affect the output.

How to use it

kubectl diff is meant to be as similar as possible to kubectl apply:
kubectl diff -f some-resources.yaml will show a diff for the resources in the yaml file. One can even use the diff program of their choice by using the KUBECTL_EXTERNAL_DIFF environment variable, for example:

KUBECTL_EXTERNAL_DIFF=meld kubectl diff -f some-resources.yaml

What’s next

The working group is still busy trying to improve some of these things:

  • Server-side apply is trying to improve the apply scenario, by adding owner
    semantics to fields! It’s also going to improve support for CRDs and unions!
  • Some kubectl apply features are missing from diff and could be useful, like the ability
    to filter by label, or to display pruned resources.
  • Eventually, kubectl diff will use server-side apply!

Source

Using New Relic, Splunk, AppDynamics and Netuitive for Container Monitoring


If you use containers as part of your day-to-day operations, you need to
monitor them — ideally, by using a monitoring solution that you
already have in place, rather than implementing an entirely new tool.
Containers are often deployed quickly and at a high volume, and they
frequently consume and release system resources at a rapid rate. You
need to have some way of measuring container performance, and the impact
that container deployment has on your system. In this article, we’ll
take a look at four widely used monitoring
platforms—Netuitive, New Relic, Splunk, and AppDynamics—that support containers, and compare how they measure up when it comes to monitoring containers.
First, though, a question: When you monitor containers, what kind of
metrics do you expect to see? The answer, as we’ll see below, varies
with the monitoring platform. But in general, container metrics fall
into two categories—those that measure overall container impact on the
system, and those that focus on the performance of individual
containers.

Setting up the Monitoring System

The first step in any kind of software monitoring, of course, is to
install the monitoring service. For all of the platforms covered in this
article, you can expect additional steps for setting up standard
monitoring features. Here we cover only those directly related to
container monitoring.

Setup: New Relic

With New Relic, you start by installing New Relic Servers for Linux,
which includes integrated Docker monitoring. It should be installed on
the Docker server, rather than the Docker container. The Servers for
Linux package is available for most common Linux distributions; Docker
monitoring, however, requires a 64-bit system. After you install New
Relic Servers for Linux, you will need to create a docker group (if it
doesn’t exist), then add the newrelic user to that group. You may need
to do some basic setup after that, including (depending on the Linux
distribution) setting the location of the container data files and
enabling memory metrics.

Setup: Netuitive

Netuitive also requires you to install its Linux monitoring agent on
the Docker server. You then need to enable Docker metrics collection in
the Linux Agent config file, and optionally limit the metrics and/or
containers by creating a regex-based blacklist or whitelist. As with
New Relic, you may wind up setting a few additional options. Netuitive,
however, offers an additional installation method. If you are unable to
install the Linux Agent on the Docker server, you can install a Linux
Agent Docker container, which will then do the job of monitoring the
host and containers.

Setup: Splunk

Splunk takes a very different approach to container monitoring. It uses
a Docker API logging driver to send container log data directly to
Splunk Enterprise and Splunk Cloud via its HTTP Event Collector. You
specify the splunk driver (and its associated options) on the Docker
command line. Splunk’s monitoring, in other words, is integrated
directly with Docker, rather than with the host system. The Splunk log
driver requires an HTTP Event Collector token, and the path (plus port)
to the user’s Splunk Cloud/Splunk Enterprise instance. It also takes
several optional arguments.
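
As a rough illustration (the token, URL and image name below are
placeholders), the driver and its options are passed per container on
the Docker command line:

docker run --log-driver=splunk \
  --log-opt splunk-token=<HEC-token> \
  --log-opt splunk-url=https://your-splunk-instance:8088 \
  --log-opt splunk-sourcetype=docker \
  your-app-image
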
Setup: AppDynamics

AppDynamics uses a Docker Monitoring extension
to send Docker Remote API metrics to its Machine Agent. In some ways,
this places it in a middle ground between New Relic and Netuitive’s
agent-based monitoring and Splunk’s close interaction with Docker.
AppDynamics’ extension installation, however, is much more hands-on.
The instructions suggest that you can expect to come out of it with
engine grease up to your elbows, and perhaps a few scraped knuckles. You
must first manually bind Docker to the TCP Socket or the Unix Socket.
After that, you need to install the Machine Agent, followed by the
extension. You then need to manually edit several sections of the config
file, including the custom dashboard. You must also set executable
permissions, and AppDynamics asks you to review both the Docker Remote
API and the Docker Monitoring extension’s socket command file. The
AppDynamics instructions also include troubleshooting instructions for
most of the installation steps.

What You Get

As you might expect, there are some significant differences in the
metrics which each of these platforms monitors and displays.

Output: New Relic

New Relic displays Docker container metrics as part of its
Application Performance Monitoring (APM) Overview page; containers are
grouped by host servers when Servers for Linux is installed, and are
indicated by a container symbol. The overview page includes drill-down
for detailed performance features. The New Relic Servers monitor
includes a Docker page, which shows the impact of Docker containers on
the server’s performance. It displays server-related metrics for
individual Docker images, with drill-down to image details. It does not,
however, display data for individual containers.

Output: Netuitive

Netuitive’s Docker monitor collects a variety of metrics, including
several related to CPU and network performance, and almost two dozen
involving memory. It also collects computed CPU, memory, and throttling
percentages. With Netuitive, you build a dashboard by assembling
widgets, so the actual data shown (and the manner in which it is
displayed) will depend on your choice of widgets and their
configuration.

Output: Splunk

Splunk is designed to use data from a wide range of logs and related
sources; for containers, it pulls data from individual container logs,
from Docker and cloud APIs, and from application logs. Since Splunk
integrates such a large amount of data at the cloud and enterprise
level, it is up to the user to configure Splunk’s analysis and
monitoring tools to display the required data. For containers, Splunk
recommends looking at CPU and memory use, downtime/outage-related
errors, and specific container and application logs to identify problem
containers.

Output: AppDynamics

AppDynamics reports basic container and system statistics (individual
container size, container and image count, total memory, memory limit,
and swap limit), along with various ongoing network, CPU, and memory
statistics. It sends these to the dashboard, where they are displayed in
a series of charts.

Which Service Should You Use?

When it comes to the question of which monitoring service is right for
your container deployment, there’s no single answer. For most
container-based operations, including Rancher-managed operations on a
typical Linux distribution, either New Relic or Netuitive should do
quite well. With reasonably similar setup and monitoring features, the
tradeoff is between New Relic’s preconfigured dashboard pages and the
do-it-yourself customizability of Netuitive’s dashboard system. For
enterprise-level operations concerned with integrated monitoring of
performance at all scales, from system-level down to individual
container and application logs, Splunk is the obvious choice. Since
Splunk works directly with the Docker API, it is also likely to be the
best option for use with minimal-feature RancherOS deployments. If, on
the other hand, you simply want to monitor container performance via the
Docker API in a no-frills, basic way, the AppDynamics approach might
work best for you. So there it is: Look at what kind of container
monitoring you need, and take your pick.

Source

Get started with OpenFaaS and KinD

In this post I want to show you how I’ve started deploying OpenFaaS with the new tool from the Kubernetes community named Kubernetes in Docker or KinD. You can read my introductory blog post Be KinD to Yourself here.

The mission of OpenFaaS is to Make Serverless Functions Simple. It is open-source and built by developers, for developers, in the open, with a growing and welcoming community. With OpenFaaS you can run stateless microservices and functions with a single control-plane that focuses on ease of use, on top of Kubernetes. The widely accepted OCI/Docker image format is used to package and deploy your code, and it can be run on any cloud.

Over the past two years more than 160 developers have contributed to code, documentation and packaging. A large number of them have also written blog posts and held events all over the world.

Find out more about OpenFaaS on the blog or GitHub openfaas/faas

Pre-reqs

Unlike prior development environments for Kubernetes such as Docker for Mac or minikube, the only requirement for your system is Docker, which means you can install this almost anywhere you can get Docker installed.

This is also a nice experience for developers because it’s the same on MacOS, Linux and Windows.

On a Linux host or Linux VM type in $ curl -sLS https://get.docker.com | sudo sh.

Download Docker Desktop for Windows or Mac.

Create your cluster

Install kubectl

The kubectl command is the main CLI needed to operate Kubernetes.

I like to install it via the binary release here.

Get kind

You can get the latest and greatest by running the following command (if you have Go installed locally)

$ go get sigs.k8s.io/kind

Or if you don’t want to install Golang on your system you can grab a binary from the release page.

Create one or more clusters

Another neat feature of kind is the ability to create one or more named clusters. I find this useful because OpenFaaS ships plain YAML files and a helm chart and I need to test both independently on a clean and fresh cluster. Why try to remove and delete all the objects you created between tests when you can just spin up an entirely fresh cluster in about the same time?

$ kind create cluster --name openfaas

Creating cluster 'kind-openfaas' ...
 ✓ Ensuring node image (kindest/node:v1.12.2) 🖼 
 ✓ [kind-openfaas-control-plane] Creating node container 📦 
 ✓ [kind-openfaas-control-plane] Fixing mounts 🗻 
 ✓ [kind-openfaas-control-plane] Starting systemd 🖥
 ✓ [kind-openfaas-control-plane] Waiting for docker to be ready 🐋 
⠈⡱ [kind-openfaas-control-plane] Starting Kubernetes (this may take a minute) ☸ 

Now there is something you must not forget if you work with other remote clusters. Always, always switch context into your new cluster before making changes.

$ export KUBECONFIG="$(kind get kubeconfig-path --name="openfaas")"
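
It’s worth confirming that the context really points at the new cluster, and kind makes tearing clusters down just as easy, which is what makes the fresh-cluster-per-test workflow practical:

$ kubectl get nodes                       # should show the single control-plane node
$ kind get clusters                       # lists every kind cluster on this machine
$ kind delete cluster --name openfaas     # tear it down when you are finished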

Install OpenFaaS with helm

Install helm and tiller

The easiest way to install OpenFaaS is to use the helm client and its server-side equivalent tiller.

  • Create a ServiceAccount for Tiller:
$ kubectl -n kube-system create sa tiller \
 && kubectl create clusterrolebinding tiller \
      --clusterrole cluster-admin \
      --serviceaccount=kube-system:tiller
  • Install the helm CLI
$ curl -sLSf https://raw.githubusercontent.com/helm/helm/master/scripts/get | sudo bash
  • Install the Tiller server component
$ helm init --skip-refresh --upgrade --service-account tiller

Note: it may take a minute or two to download tiller into your cluster.

Install the OpenFaaS CLI

$ curl -sLSf https://cli.openfaas.com | sudo sh

Or on Mac use brew install faas-cli.

Install OpenFaaS

You can install OpenFaaS with authentication on or off; it’s up to you. Since your cluster is running locally, you may want it turned off. If you decide otherwise, then check out the documentation.

  • Create the openfaas and openfaas-fn namespaces:
$ kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
  • Install using the helm chart:
$ helm repo add openfaas https://openfaas.github.io/faas-netes && \
    helm repo update && \
    helm upgrade openfaas --install openfaas/openfaas \
      --namespace openfaas  \
      --set basic_auth=false \
      --set functionNamespace=openfaas-fn \
      --set operator.create=true

The command above adds the OpenFaaS helm repository, updates the local chart list, and then installs OpenFaaS locally without authentication.

Note: if you see Error: could not find a ready tiller pod, wait a few moments and then try again.

You can fine-tune settings such as the timeouts, how many replicas of each service run, which version you are using, and more; see the Helm chart README for the full list.
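
For example, a couple of extra --set flags on the same helm upgrade command shown earlier can change these. The value names below are assumptions for illustration, so confirm them against the chart README before relying on them:

$ helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas \
    --set basic_auth=false \
    --set functionNamespace=openfaas-fn \
    --set gateway.replicas=2 \
    --set gateway.readTimeout=65s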

Check OpenFaaS is ready

The helm CLI should print a message such as: To verify that openfaas has started, run:

$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"

The KinD cluster will now pull all the core services that make up OpenFaaS, which could take a few minutes if you’re on WiFi. Run the command above and look out for “AVAILABLE” turning to 1 for everything listed.
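
If you’d rather have kubectl wait for you, the rollout status subcommand can watch a deployment until it is ready; a minimal sketch, assuming the chart’s default deployment names (gateway being one of them):

$ kubectl rollout status -n openfaas deploy/gateway
$ kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas" --watch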

Access OpenFaaS

Now that you’ve set up a cluster and installed OpenFaaS, it’s time to access the UI and API.

First forward the port of the gateway to your local machine using kubectl.

$ kubectl port-forward svc/gateway -n openfaas 8080:8080

Note: If you already have a service running on port 8080, then change the port binding to 8888:8080 for instance. You should also run export OPENFAAS_URL=http://127.0.0.1:8888 so that the CLI knows where to point to.

You can now use the OpenFaaS CLI and UI.

Open the UI at http://127.0.0.1:8080 and deploy a function from the Function store – a good example is “CertInfo” which can check when a TLS certificate will expire.

Downloading your chosen image may take a few seconds or minutes to deploy depending on your connection.
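
If you prefer to stay in the terminal, the same store function can be deployed and invoked with the CLI; a hedged sketch, assuming it is listed in the store under the name certinfo:

$ faas-cli store deploy certinfo
$ echo -n www.openfaas.com | faas-cli invoke certinfo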

  • Invoke the function then see its statistics and other information via the CLI:
$ faas-cli list -v
  • Deploy figlet, which can generate ASCII text messages, and try it out:
$ faas-cli store deploy figlet
$ echo Hi! | faas-cli invoke figlet

You can use the describe verb for more information and to find your URL for use with curl or other tools and services.

$ faas-cli describe figlet

Use the OpenFaaS CRD

You can also use the OpenFaaS Custom Resource Definition or CRD by typing in:

$ kubectl get functions -n openfaas-fn

When you create a new function for OpenFaaS you can use the CLI, which calls the RESTful API of the OpenFaaS API Gateway, or generate a CRD YAML file instead.

  • Here’s an example with Node.js:

Change the --prefix to your own Docker Hub account or private Docker registry.

$ mkdir -p ~/dev/kind-blog/ && \
  cd ~/dev/kind-blog/ && \
  faas-cli template store pull node10-express && \
  faas-cli new --lang node10-express --prefix=alexellis2 openfaas-loves-crds

Our function looks like this:

$ cat openfaas-loves-crds/handler.js

"use strict"

module.exports = (event, context) => {
    let err;
    const result = {
        status: "You said: " + JSON.stringify(event.body)
    };

    context
        .status(200)
        .succeed(result);
}

Now let’s build and push the Docker image for our function:

$ faas-cli up --skip-deploy -f openfaas-loves-crds.yml 

Then generate a CRD file to apply via kubectl instead of deploying through the OpenFaaS CLI:

$ faas-cli generate crd  -f openfaas-loves-crds.yml 
---
apiVersion: openfaas.com/v1alpha2
kind: Function
metadata:
  name: openfaas-loves-crds
  namespace: openfaas-fn
spec:
  name: openfaas-loves-crds
  image: alexellis2/openfaas-loves-crds:latest

You can then pipe this output into a file to store in Git or pipe it directly to kubectl like this:

$ faas-cli generate crd  -f openfaas-loves-crds.yml | kubectl apply -f -
function.openfaas.com "openfaas-loves-crds" created
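
If you’d rather keep the manifest in Git, as suggested above, redirect the output to a file first and apply that (the file name here is just an example):

$ faas-cli generate crd -f openfaas-loves-crds.yml > openfaas-loves-crds-crd.yml
$ git add openfaas-loves-crds-crd.yml && git commit -m "Add openfaas-loves-crds function"
$ kubectl apply -f openfaas-loves-crds-crd.yml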

$ faas-cli list -v
Function                      	Image                                   	Invocations    	Replicas
openfaas-loves-crds           	alexellis2/openfaas-loves-crds:latest   	0              	1    

Wrapping up

KinD is not the only way to deploy Kubernetes locally, or the only way to deploy OpenFaaS, but it’s quick and easy and you could even create a bash script to do everything in one shot.
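
As a rough illustration of that one-shot idea, here is a sketch that simply strings together the commands used in this post (no authentication, helm with tiller, as above), plus a short wait for tiller to become ready. It assumes docker, kind, kubectl, helm and faas-cli are already installed:

#!/usr/bin/env bash
set -euo pipefail

# 1. Create the cluster and point kubectl at it
kind create cluster --name openfaas
export KUBECONFIG="$(kind get kubeconfig-path --name="openfaas")"

# 2. Install tiller with a service account
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --skip-refresh --upgrade --service-account tiller

# 3. Wait for tiller, then install OpenFaaS without auth
kubectl -n kube-system rollout status deploy/tiller-deploy
kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml
helm repo add openfaas https://openfaas.github.io/faas-netes
helm repo update
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set basic_auth=false \
  --set functionNamespace=openfaas-fn \
  --set operator.create=true

# 4. Expose the gateway locally (this blocks; run it in another terminal if you prefer)
kubectl port-forward svc/gateway -n openfaas 8080:8080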

  • If you’d like to keep learning, then check out the official workshop, which has already been followed by hundreds of developers around the world.
  • Join the OpenFaaS Slack community if you’d like to chat more or contribute to the project.

You can also read the docs to find out how to deploy to GKE, AKS, DigitalOcean Kubernetes, minikube, Docker Swarm and more.

Source

Hidden Dependencies with Microservices

One of the great things about microservices is that they allow engineering to decouple software development from application lifecycle. Every microservice:

  • can be written in its own language, be it Go, Java, or Python
  • can be contained and isolated from others
  • can be scaled horizontally across additional nodes and instances
  • is owned by a single team, rather than being a shared responsibility among many teams
  • communicates with other microservices through an API or a message bus
  • must support a common service level agreement to be consumed by other microservices, and conversely, to consume other microservices

These are all very cool features, and most of them help to decouple various software dependencies from each other. But what about the operations point of view? While the cool aspects of microservices bulleted above are great for development teams, they pose some new challenges for DevOps teams. Namely:

Scalability: Traditionally, monolithic applications and systems scaled vertically, with low dynamism in mind. Now, we need horizontal architectures to support massive dynamism – we need infrastructure as code (IaC). If our application is not a monolith, then our infrastructure cannot be, either.

Orchestration: Containers are incredibly dynamic, but when they execute they need resources: CPU, memory, storage (SDS) and networking (SDN). Operations and DevOps teams need a new piece of software that knows which resources are available and can schedule tasks fairly (if resources are shared with other teams) and efficiently.

System Discovery: To merge dynamism and orchestration, you need a system discovery service. With microservices and containers, one can still use a configuration management database (CMDB), but it has to be built with massive dynamism in mind. A good system has to be aware of every container deployment change, able to get or receive information from every container (metadata, labels), and provide a method for making this information available.
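
To make this concrete, a container (or a sidecar next to it) can query the discovery layer about itself at runtime; a hedged sketch, assuming the conventional Rancher metadata address and a purely hypothetical etcd key:

# Ask the Rancher metadata service who and where I am
$ curl -s http://rancher-metadata/latest/self/container/name
$ curl -s http://rancher-metadata/latest/self/container/primary_ip

# Or read a (hypothetical) key from an etcd-backed discovery service
$ etcdctl get /services/myservice/endpoint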

There are many tools in the ecosystem to choose from. However, the scope of this article is not to do a deep dive into these tools, but to provide an overview of how to reduce dependencies between microservices and your tooling.

Scenario 1: Rancher + Cattle

Consider the following scenario, where a team is using Rancher. Rancher facilitates infrastructure as code, uses Cattle for orchestration, and relies on Rancher discovery (metadata, DNS, and API) for system discovery. Assume that the DevOps team is familiar with this stack, but must now begin building the functionality the application needs to run. Let’s look at the dependency points they’ll need to consider:

  1. The IaC tool shouldn’t affect the development or deployment of microservices. This layer is responsible for provisioning servers (VMs or bare metal), booting them, and enabling communication between them. Microservices need servers to run, but it doesn’t matter how those servers were provisioned. We should be able to change our IaC method without affecting microservice development or deployment paths.
  2. Microservice deployments are dependent on orchestration. The development path for microservices could be the same, but the deployment path is tightly coupled to the orchestration service, due to deployment syntax and format. There’s no easy way to avoid this dependency, but it can be minimized by using different orchestration templates for specific microservice deployments.
  3. Microservice development could be dependent on system discovery; whether it is depends on the development path.

Points (1) and (2) are relatively clear, but let’s take a closer look at (3). Due to the massive dynamism of microservices architectures, when a microservice is deployed it must be able to retrieve its configuration easily. It also needs to know where its peers and collaborating microservices are, and how to communicate with them (which IP, which port, and so on). The natural conclusion is that for each microservice, you also define logic coupling it with service discovery. But what happens if you decide to use another orchestrator or tool for system discovery? Consider a second scenario:

Scenario 2: Rancher + Kubernetes + etcd

In the second scenario, the team is still using Rancher to facilitate Infrastructure as Code. However, this team has instead decided to use Kubernetes for orchestration and system discovery (using etcd). The team would have to create Kubernetes deployment files for microservices (Point 2, above), and refactor all the microservices to talk with Kubernetes instead of Rancher metadata (Point 3). The solution is to decouple services from configuration. This is easier said than done, but here is one possible way to do it:

  • Define a container hierarchy
  • Separate containers into two categories: executors and configurators
  • Create generic images based on functionality
  • Create application images from the generic executor images; similarly, create config-apps images from the generic configurator images
  • Define logic for running multiple images in a collaborative/accumulative mode

Here’s an example with specific products to clarify these steps:

[Figure: container hierarchy — base, executor, and configurator images]

In the image above, we’ve used:

  • base: alpine. Built from Alpine Linux, with some extra packages: OpenSSL, curl, and bash. It offers the smallest OS footprint that still includes a package manager, is based on uClibc, and is ideal for containerized microservices that don’t need glibc.
  • executor: monit. Built from the base above, with monit installed under /opt/monit. It’s written in C with static dependencies, has a small 2 MB footprint, and comes with useful features.
  • configurator: confd. Built from the base above, with confd and useful system tools under /opt/tools. It’s written in Go with static dependencies, and provides an indirect path to system discovery thanks to its support for different backends such as Rancher, etcd, and Consul.

The main idea is to keep microservices development decoupled from system discovery; this way, they can run on their own, or be complemented by another tool that provides dynamic configuration. If you’d like to test out another tool for system discovery (such as etcd, ZooKeeper, or Consul), then all you have to do is develop a new branch for the configurator tool. You won’t need to develop another version of the microservice. By avoiding reconfiguring the microservice itself, you’ll be able to reuse more code, collaborate more easily, and have a more dynamic system. You’ll also get more control, quality, and speed during both development and deployment. To learn more about the hierarchy, images, and builds I’ve used here, you can access this repo on GitHub. Within are additional examples using Kafka and Zookeeper packages; these service containers can run alone (for dev/test use cases), or with Rancher and Kubernetes for dynamic configuration with Cattle or Kubernetes/etcd (a minimal sketch of the executor/configurator pattern follows the list below).

Zookeeper (repo here, 67 MB)

  • With Cattle (here)
  • With Kubernetes (here)

Kafka (repo here, 115 MB)
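
Here is the promised sketch of how an executor and a configurator can be combined without baking discovery logic into the service itself. The entrypoint below is illustrative only: the CONFIG_BACKEND variable and the paths are assumptions, not the exact wiring used in the repo above:

#!/bin/sh
# Hypothetical entrypoint: pick a discovery backend at runtime, then start the
# real service under monit. CONFIG_BACKEND and the paths are illustrative only.
case "${CONFIG_BACKEND:-none}" in
  rancher)
    # Render config templates once from Rancher metadata
    /opt/tools/confd -onetime -backend rancher ;;
  etcd)
    # Render the same templates from etcd instead
    /opt/tools/confd -onetime -backend etcd -node "http://etcd:2379" ;;
  none)
    echo "No discovery backend set; using baked-in configuration" ;;
esac

# Hand over to monit in the foreground so the container stays alive
exec /opt/monit/bin/monit -I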

Conclusion

Orchestration engines and system discovery services can result in “hidden” microservice dependencies if not carefully decoupled from microservices themselves. This decoupling makes it easier to develop, test and deploy microservices across infrastructure stacks without refactoring, and in turn allows users to build wider, more open systems with better service agreements.

Raul Sanchez Liebana (@rawmindNet) is a DevOps engineer at Rancher.

Source