Docker Networking Tip – Troubleshooting

Debugging container and Docker networking issues can be daunting at first, considering that containers typically ship without any debug tools inside them. I regularly see questions about Docker networking issues on the Docker and Stack Overflow forums. All the usual networking tools can be used to debug Docker networking; it is just that the approach taken is slightly different. I have captured my troubleshooting steps in a video and a presentation.
Following is the video and presentation of my Docker Networking troubleshooting tips.
I would appreciate your feedback on whether the networking tip videos were useful to you. Also, if there are any other Docker networking topics that you would like to see covered as a tip video, please let me know.
For completeness, I have also included below a few Docker networking videos and presentations that I did over the last 3 months.
Following are the 2 previous Networking tip videos.
Following are 2 Docker Networking deep dive presentations:
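
To give a flavour of the approach (this sketch is mine, not taken from the videos): since the application image ships without debug tools, you can attach a throwaway troubleshooting container to the application container's network namespace and run the usual utilities from there. The image and the container name below are placeholders.

import docker  # docker-py; a rough sketch, not taken from the videos

client = docker.from_env()

# Run a toolbox container inside the SAME network namespace as the application
# container "my-app" (placeholder name), the equivalent of:
#   docker run --rm --network container:my-app nicolaka/netshoot ss -lntp
output = client.containers.run(
    "nicolaka/netshoot",              # a popular troubleshooting image (assumption)
    command=["ss", "-lntp"],          # list listening sockets as the app container sees them
    network_mode="container:my-app",
    remove=True,
)
print(output.decode())
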
Source

NextCloudPi brings NC13.0.2, automatic NC upgrades, Rock64 and Banana Pi support, Armbian integration, Chinese language and more – Own your bits

 

The latest release of NextCloudPi is out!

The key improvement is fully automated Nextcloud updates. This was the last piece of the puzzle; we can finally leave the board just sitting there and everything will automatically be kept up to date: Debian packages, Nextcloud and NextCloudPi itself.

Also, work has been focused on bringing NextCloudPi to more boards, making backup/restore as resilient as possible, and of course implementing many improvements and small fixes.

On a more social note, I was interviewed by Nextcloud as a part of the community, so check it out if you would like to learn some more things about me and the project.

NextCloudPi improves everyday thanks to your feedback. Please report any problems here. Also, you can discuss anything in the forums, or hang out with us in Telegram.

Last but not least, please download through BitTorrent and share it for a while to help keep hosting costs down.

Name change

Wait, didn’t we change the name to NextCloudPlus? Well, I am sad to announce that we have to move back to the original name, and here we will stay for the rest of time.

I understand this is confusing, but it turns out that we ran into trademark issues with Nextcloud GmbH. It was my mistake to assume that we could just change names, and Nextcloud was kind enough to let us keep the old name, so that’s exactly what we are doing.

We don’t want to harm the Nextcloud brand in any way (quite the opposite!) or cause any trouble, so I bought nextcloudpi.com, undid everything, and moved on to more motivating issues.

Nextcloud 13.0.2

This Nextcloud minor version comes mainly with improved UI, better end-to-end encryption and small fixes. Check out the release notes for more details.

Automatic Nextcloud updates

Finally here! NCP is now capable of upgrading to the latest Nextcloud version in a safe manner. This can be done on demand or automatically, by using nc-update-nextcloud and nc-autoupdate-nc respectively.

This means that we can just stop worrying about checking for updates, and all the box software will be up to date: Nextcloud, Debian security packages and NextCloudPi itself. We will be notified whenever this happens.

A lot of care has been taken on testing every possible situation, even sudden power loss during the update, to make sure that the system will just roll back to the previous state if anything goes wrong.

At this point, autoupdate is not enabled by default. If we continue to see no problems with it, it will be activated by default during the first run wizard in the future.

All feedback is welcome!

Rock64

Rock64 SD card images ready to run with Nextcloud and all the NCP extra goodies are now available for download. This board is thought of as the perfect NAS solution, featuring real Gigabit Ethernet (*) and USB 3, but at a very low price tag, starting at $25.

If you want something nicer than the RPi but the Odroid HC2 is too expensive, this is a great investment.

(*) Unlike the Raspberry Pi 3B+. More on this in following posts.

Banana Pi

The first testing build for the Banana Pi is also available for download. This board is popular for its SATA port and Gigabit Ethernet, so it is also a popular low-cost NAS solution, despite the poor kernel support, GPL violations and other questionable practices of Allwinner. Luckily we have the Linux Sunxi community to the rescue!

This is a testing release, consider it work in progress.

Potato Pi

Sooner or later we will have NCP running everywhere. Jim is still trying to make his NCPotato board boot; he keeps saying that he almost has it. Good luck, Jim!

Let’s take this opportunity to announce that we need tons of help to support more boards! The machinery is in place, and we now need people to help with building / testing / improving other boards.

As long as the board is supported by Armbian, it is really not complicated to build your own SD card image with NCP on it. Please share it with us!

Armbian integration

The patches have been merged to add NCP to the Armbian software installer. This is just another way to make it easy for people to install Nextcloud. Every Armbian board will be capable of installing NCP right out of the box.

Conversations have started to do the same thing with DietPi.

Chinese web

Last but not least, initial support for Chinese has been added to ncp-web. Thanks to Carl Hung and Yi Chi for the help!

Source

How Giant Swarm Enables a New Workflow


By now we all know that Amazon AWS changed computing forever and it actually started as an internal service. The reason for the existence of AWS is pretty easy to understand once you understand Jeff Bezos and Amazon. Sit tight.

Jeff and his team deeply believe in the two-pizza team rule, meaning that you need to be able to feed a team with two pizzas or it is too big. This is due to the math behind communication, namely the fact that the number of communication links in a group grows with the number of team members n as n(n-1)/2:

In a team of 10, there are 45 possible links, or communication paths. At 20 there are 190; at 100 people there are 4,950. You get the idea. You need to allow a small team to be in full control, and that is really where DevOps comes from: you build it, you run it, and if you want to make your corporate overlords truly tremble in fear, you need a third point: you decide it.
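
For the curious, a quick sanity check of that formula (my snippet, not from the original post):

def communication_links(n):
    # pairwise communication paths in a team of n people: n * (n - 1) / 2
    return n * (n - 1) // 2

for size in (10, 20, 100):
    print(size, communication_links(size))  # 10 45, 20 190, 100 4950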

The problem Amazon had, though, was that their teams were losing a lot of time because they had to care for the servers running their applications, and that part of the equation was not yet integrated into their workflow. “Taking care of servers” was a totally separate activity from the rest of their work, where one (micro-)service simply talked to another service when needed. The answer, in the end, was simple: make infrastructure code and give those teams APIs to control compute resources, creating an abstraction layer over the servers. There should be no difference between talking to a service built by another team, calling the API for a message queue, charging a credit card, or starting a few servers.

This allows for a lot of efficiency on both sides and is great. Developers have a nice API and the Server Operations people can do whatever needs to be done as long as they keep the API stable.

Everything becomes part of the workflow. And once you have it internally as a service, there is no reason not to make it public and thereby get better utilization of your servers.

Kubernetes Appears on the Scene

Now think about how Kubernetes has started to gain traction within bigger companies. It actually normally starts out with a team somewhere that installs Kubernetes however they want, sometimes as a strategic DevOps decision. Of course, these teams would never think about buying their own servers and building their own datacenter, but as K8s is code, it is seen as being more on the developer side. This means you end up with a disparate set of K8s installations until the infrastructure team gets interested and wants to provide it centrally.

The corporation might think that by providing a centralized K8s, being API driven, they are doing what Amazon did, but that is not the Amazon way. The Amazon way, the right way, is to provide an API to start a K8s cluster and to abstract everything else, like security and storage provisioning, away as far as possible. For efficiency, you might want to provide a bigger production cluster at some point, but first and foremost, this is about development speed.

Giant Swarm – Your Kubernetes Provisioning API

This is where the Giant Swarm Platform comes in, soon to include more managed services around it. Be it in the cloud or on-premise, we offer you an API that allows teams to start as many of their own K8s clusters, in a specific and clear-cut version, as they see fit, integrating the provisioning of K8s right into their workflows. The infrastructure team, or cluster operations team as we tend to call them, makes sure that all security requirements are met, provides some tooling around the clusters like CI/CD, possibly supplies versioned Helm chart templates, and so on. This is probably worth a totally separate post.

At the same time, Giant Swarm provides you with fully managed Kubernetes clusters, keeping them always up to date, with the latest security fixes and in-place upgrades, so you are not left with a myriad of different versions run by different teams in different locations. Giant Swarm clusters of one version always look the same. “Cloud Native projects at demand at scale in consistently high quality”, as one of our partners said.

Through Giant Swarm, customers can put their teams back into full control, going as far as allowing them to integrate Giant Swarm in their CI/CD pipeline and quickly launch and tear down test clusters on demand. They can give those teams the freedom they planned for by letting them launch their own K8s clusters themselves, without having to request them from somewhere else, while keeping full control of how these clusters are secured, versioned and managed, so that they know applications can move easily through their entire K8s ecosystem, across different countries and locations.

Giant Swarm is the Amazon EC2 for Kubernetes in any location: API-driven Kubernetes, where the teams stay in control and can really live DevOps and API-driven development as a mindset and a way of doing things. Request your free trial of the Giant Swarm Infrastructure here.

Source

Docker vs. Kubernetes: Is Infrastructure Still At War?

Jul 31, 2018

by Pini Reznik

Docker and Kubernetes are undoubtedly the biggest names in Cloud Native infrastructure right now. But: are they competing technologies, or complementary ones? Do we need to worry about joining the right side? Is there risk here for enterprises, or is the war already won? If so, who won it?

Some context for this query is in order. Three years ago, war was raging inside the data centre. By introducing a whole new way of packaging and running applications called “containers,” Docker created the Cloud Native movement. Of course other companies quickly began producing rival container engines to compete with Docker. Who was going to win?

As if the tournament of container systems was not enough, Docker had to fight another, simultaneous, battle. Once containers and container packaging arrived, the obvious next killer app had to be scheduling and orchestrating them. Docker introduced its own Swarm scheduler, which jousted with rival tools: Kubernetes from Google, Mesos from Mesosphere, Nomad from HashiCorp, and a hosted solution from AWS (ECS).

These wars were a risky time for an enterprise wanting to go Cloud Native. First, which platform and tooling to choose? And, once chosen, what if it ultimately lost out to the competition and disappeared? The switching costs for all of these products were very high, so the cost of making the wrong choice would be significant. Unfortunately, with so many contenders vying for primacy in such an emergent sector, the likelihood of making that wrong choice was dangerously high.

So what happened? Who won? Well, both wars got completely and soundly won — and by different combatants.

Docker won a decisive victory in the container engine war. There is one container engine, and Docker is it.

Kubernetes, which was finally released by Google as its own open source entity, has won the scheduler wars. There may yet be a few skirmishers about, but Kubernetes is basically THE orchestrator.

The final result is that any Cloud Native system is likely to utilize Docker as the container engine and Kubernetes as the orchestrator. The two have effectively become complementary: shoes and socks, bows and arrows. Impossible to think of one without the other.

Does this mean we have no decisions left to make?

Unfortunately things are not yet completely simple and decided. Yes, there are still decisions to make, with new factors which must now be considered. Looking over the current container landscape we still see all kinds of competitors.

First, there is a large number of container platform products designed to abstract the orchestrator and container engine away from you. Among these are Docker EE, DC/OS from Mesosphere, and OpenShift from Red Hat.

Second, there are now managed container services from many of the original players, like EKS from Amazon, Google’s GKE, and Rancher Labs’ offerings. In addition, we’ve got newcomer offerings like AKS from Microsoft and Cisco’s Container Platform.

So, that sounds like we have even more options, and even more work to do when choosing our Cloud Native platform and services.

No, The Situation Has Changed

While it is true that there are even more platforms to choose between now than three years ago, the situation is actually very different now. All of the current platforms are based on Kubernetes. This alone makes them far more similar than the three pioneering orchestration options — Kubernetes, Mesos and Swarm — were in the past. Today’s choices may vary somewhat on small details, like slightly different costs, approaches to security, operational complexity, etc. Fundamentally, though, they are all pretty similar under the hood. This means one big important change: the switching cost is vastly less.

So, for example, imagine you pick EKS to start out on. One year down the line you decide you’d like to manage things yourself and want to move to OpenShift. No problem. The transition costs are not zero, but they are also not severe. You can afford to change your mind. You are not held prisoner with your platform decision by fear of retooling costs.

That said, it is important to recognize that there can be additional costs in changing between platforms when that means moving away from nonstandard or proprietary tools bundled with the original choice. Services that are simple to use on the public cloud service you started with may not be so simple, or even available at all, on others. A significant example is DynamoDB, Amazon’s proprietary database service: while easily consumable on AWS, it can only be used there. The same goes for OpenShift’s Source-to-Image tool for creating Docker images.

Basically, to reduce the cost (and risk) of future migration, it is advisable to use standard tools with standard formats from the start, such as the native Kubernetes API, or tools like Istio (which sits on top of K8s and seems to be leading the service mesh market) that work on all the platforms.
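
As a rough illustration of what sticking to standard interfaces buys you (my example, not the author’s): code written against the plain Kubernetes API, here via the official Python client, runs unchanged no matter which managed or self-hosted platform sits behind the kubeconfig.

from kubernetes import client, config

# Only the kubeconfig context changes when you move between EKS, GKE, AKS,
# OpenShift or a self-managed cluster; the API calls stay the same.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)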

Other than that you should be fine. You can move. The cost of being wrong has been dramatically reduced.

This is such great news for enterprises that we shall say it again: You can move! In the early days, there used to be a very difficult decision to make. Should you move quickly and risk getting locked into what could become a dead-end technology, or wait until it’s safe, but suffer the opportunity costs?

No more tradeoffs. Now that the risks are reduced, there is no need to wait any more. Pick your platform!

 

Source

Introduction to Container Security | Rancher Labs

 


Containers are still a relatively new technology, but they have already had a massive impact on software development and delivery. Companies all around the world are starting to migrate towards microservices, and containers enable developers to quickly spin up services with minimal effort. In the past it took a fair amount of time to download, install, and configure software, but now you can take advantage of solutions that have already been packaged up for you and only require a single command to run. These packages, known as images, are easily extensible and configurable so that teams can customize them to suit their needs and reuse them for many projects. Companies like Docker and Google helped to make containers easier to use and deploy, introducing orchestration tools like Docker Compose, Docker Swarm, and Kubernetes.

While the usefulness and power of containers continues to grow, security is still something that prevents containers from receiving wider adoption. In this article, we’re going to take a look at the security mechanisms of containers, some of the big issues with container security, and some of the methods that you can use to address those issues.

Cgroups and Namespaces

We will begin by talking about two of the Linux kernel features that helped make containers as we know them today possible. The first of these features is called cgroups. This feature was developed as a way to group processes and provide more control over the resources available to the group, such as CPU, memory, and I/O. It also allows for better accounting of usage, such as when teams need to report usage for billing purposes. Cgroups allow containers to scale in a controllable way and have a predictable capacity. They are also a good security feature, because processes running in containers cannot easily consume all the resources on a system – preventing, for example, a denial-of-service attack that starves other processes of required resources.
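
As a small illustration (my own sketch, not from the article), the limits that cgroups enforce are exposed directly as options when starting a container, here with docker-py:

import docker  # docker-py sketch; the limits below are enforced by cgroups on the host

client = docker.from_env()

container = client.containers.run(
    "alpine:3.8",
    command=["sleep", "60"],
    mem_limit="256m",        # cgroup memory limit for the container
    nano_cpus=500_000_000,   # half a CPU, expressed in units of 1e-9 CPUs
    detach=True,
)
print(container.name)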

The other feature is called namespaces which essentially allows a process to have its own dedicated set of resources, such as files, users, process ids, and hostnames (there are different namespace types for each of the resource types). In a lot of ways, this can make a container process seem like it is a virtual machine, but the process still executes system calls on the main kernel.

Namespaces limit what the process running inside a container can see and do. Container processes can only see processes running in the same namespace (in general, containers only have a single process, but it is possible to have more than one). These processes see a filesystem which is a small subset of the real filesystem. The user ids inside the container can be mapped from different ids outside the container (the user root can have user id 0 inside the container but actually map to user id 1099 outside the container – thus appearing to grant administrative control without actually doing so). This feature allows containers to isolate processes, making them more secure than they would normally be.

Privileged Containers

When you are running processes in containers, there is sometimes a need to do things that require elevated privileges. A good example is running a web server that needs to listen on a privileged port, such as 80. Ports under 1024 are privileged and usually assigned to more sensitive network processes such as mail, secure shell access, HTTP, and network time synchronization. Opening these ports requires elevated access as a security feature so that rogue processes can’t just open them up and masquerade as legitimate ones. If you wanted to run an Apache server (which is often used as a secure entry point to an application) in a container and listen on port 80, you would need to give that container privileged access.

The problem with giving a container elevated rights is that it makes it less secure in a lot of different ways. Your intent was to give the process the ability to open a privileged port, but now the process has the ability to do other things that require privileged access. The limitations imposed by the cgroups controller have been lifted, and the process can do almost anything that is possible to do running outside the container. To avoid this issue, it is possible to map a non-privileged port outside the container to a privileged port inside the container. For example, you map port 8080 on the host to port 80 inside the container. This will allow you to run processes that normally require privileged ports without actually giving them privileged access.
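
A minimal sketch of that port mapping with docker-py (my example, not from the article): Apache binds port 80 only inside its own namespace, while the host exposes 8080.

import docker  # docker-py sketch of publishing host port 8080 to container port 80

client = docker.from_env()

container = client.containers.run(
    "httpd:2.4",
    ports={"80/tcp": 8080},  # host 8080 -> container 80, no privileged access needed
    detach=True,
)
print(container.name)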

Seccomp Profiles

Seccomp and seccomp-bpf are Linux kernel features that allow you to restrict the system calls that a process can make. Docker allows you to define seccomp security profiles to do the same to processes running inside a container. The default seccomp profile for Docker disables around 40 system calls to provide a baseline level of security. These profiles are defined in JSON and whitelist the allowed calls (any call not listed is prohibited). This whitelisting approach is safer because newly introduced system calls do not become available to containers until they are explicitly added to the whitelist.
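
As an illustration (mine, not from the article) of how a custom profile is attached, here with docker-py; the profile file name is hypothetical, and note that while the Docker CLI reads the file for you with --security-opt seccomp=profile.json, the API expects the JSON content itself.

import docker  # docker-py sketch; the profile file below is a hypothetical example

client = docker.from_env()

# Read a custom whitelist profile and hand its JSON content to the engine,
# the API-level equivalent of `docker run --security-opt seccomp=profile.json`.
with open("web-server-seccomp.json") as f:
    profile_json = f.read()

container = client.containers.run(
    "httpd:2.4",
    security_opt=["seccomp=" + profile_json],
    detach=True,
)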

The issue with these seccomp profiles is that they must be specified at the start of the container and are difficult to manage. Detailed knowledge of the available Linux system calls is required to create effective profiles, and it can be difficult to find the balance between a policy too restrictive (preventing some applications from running) and a policy too flexible (possibly creating an unnecessary security risk).

Capabilities

Capabilities are another way of specifying the privileges that need to be available to a process running in a container. The advantage of capabilities is that permissions are bundled together into meaningful groups, which makes it easier to grant the privileges required for common tasks.

In Docker, a large number of capabilities are enabled by default and can be dropped, such as the ability to change owners of files, open up raw sockets, kill processes, or run processes as other users using setuid or setgid.

More advanced capabilities can be added, such as loading and unloading kernel modules, overriding resource limits, setting the system clock, making socket broadcasts and listening to multicasts, and performing various system administration operations.
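
A short sketch of the drop-then-add pattern with docker-py (my example, not from the article); the four capabilities added back are the ones the stock nginx image is commonly run with.

import docker  # docker-py sketch of dropping all capabilities and re-adding a minimal set

client = docker.from_env()

container = client.containers.run(
    "nginx:1.15",
    cap_drop=["ALL"],
    cap_add=["CHOWN", "SETGID", "SETUID", "NET_BIND_SERVICE"],  # minimal set for stock nginx
    detach=True,
)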

Using capabilities is much more secure than simply running a container as privileged, and a lot easier to manage than using seccomp profiles. Next we’ll talk about some system wide security controls that can also apply to containers.

SELinux and AppArmor

A lot of the security concerns for processes running in containers apply to processes on a host in general, and a couple of different security tools have been written to address the issue of better controlling what processes can do.

SELinux is a Linux kernel security module that provides a mandatory access control (MAC) mechanism for providing stricter security enforcement. SELinux defines a set of users, roles, and domains that can be mapped to the actual system users and groups. Processes are mapped to a combination of user, role, and domain, and policies define exactly what can be done based on the combinations used.

AppArmor is a similar MAC mechanism that aims to confine programs to a limited set of resources. AppArmor is more focused on binding access controls to programs rather than users. It also combines capabilities with path-based access control to resources.

These solutions allow for fine-grained control over what processes are allowed to do and make it possible to restrict processes to the bare minimum set of privileges required to run. The issue with solutions like these is that the policies can take a long time to develop and tune properly.

Policies that are too strict will block a lot of applications that may expect to have more privileges than they really need. Policies that are too loose are effectively lowering the overall level of security on the system. A lot of companies would like to use these solutions, but they are simply too difficult to maintain.

Some Final Thoughts

Container security controls are an interesting subject that goes back to the beginning of containers with cgroups and namespaces. Because of a lot of the things that we want to do with containers, extending privileges is often something that we must do. The easiest and least secure approach is simply using privileged containers, but we can do a lot better by using capabilities. More advanced techniques like seccomp profiles, SELinux or AppArmor allow more fine-grained control but require more effort to manage. The key is to find a balance between giving the process the fewest privileges possible and the ease with which that can be done.

Containers are, however, a quickly evolving technology, and with security becoming more and more of an important focus in software engineering, we should see better controls continue to emerge, especially for large organizations which may have hundreds or thousands of containers to manage. The platforms that make managing so many containers possible are likely to guide the way in building the next generation of security controls. Some of those controls will likely be new Linux kernel features, and we may even see a hybrid approach where containers use a virtual kernel instead of the real one to provide even more security. The future of container security is looking promising.

Jeffrey Poore


Senior App Architect and Manager

Source

OpenFaaS @ SF / Dockercon round-up

Here are some of the top tweets about Serverless and OpenFaaS from Day 1 at Dockercon, hosted in San Francisco. Docker Inc’s CEO Steve Singh opened the event and even mentioned both the Raspberry Pi and Serverless communities.

Good to see the new enterprise-focused Docker Inc acknowledging the @Raspberry_Pi community pic.twitter.com/GXRQ5ouqb1

— Alex Ellis (@alexellisuk) June 13, 2018

Holberton School fireside chat

After meeting some students back in April at DevNet Create, I started mentoring one of them and kept in touch with the others. It was great to hear that some students won tickets to Dockercon in a hackathon, using OpenFaaS as part of their winning entry.

During the fireside chat the students I’d met in May interviewed me about my background in Computer Science, my new role leading Open Source in the community and for tips on public speaking.

Really enjoyed sharing with @holbertonschool this week about OSS, career and public speaking pic.twitter.com/4so3myGv8l

— Alex Ellis (@alexellisuk) June 13, 2018

For more on Holberton School and its mission read here: https://www.holbertonschool.com

OpenFaaS + GitOps

The night before the event I spoke with Stefan Prodan at the Weaveworks User Group about GitOps with Functions. When you combine GitOps and Serverless Functions you get something that looks a lot like OpenFaaS Cloud which we open-sourced in November last year.

How does @OpenFaaS cloud work? How about a demo from Silicon Valley? Demo code by @mccabejohn pic.twitter.com/PU85ZWDmAv

— Alex Ellis (@alexellisuk) June 13, 2018

I gave a demo with my mobile phone and OpenFaaS Cloud: we categorized two different images of a hotdog, in a nod to Silicon Valley’s “Hotdog, not hotdog” episode. Thanks to John McCabe for working on this demo.

When I was speaking to a Director of Platform Engineering at a large, forward-thinking bank, I was told that from their perspective infrastructure is cheap compared to engineering salaries. With OpenFaaS Cloud and GitOps we make it even easier for developer and operations teams to build and deploy serverless functions at scale on any cloud.

“Infrastructure is cheap, engineer hours are expensive” #DockerCon #openfaas #gitops pic.twitter.com/qpVJw1wgqI

— Dwayne Lessner (@dlink7) June 13, 2018

Contribute and Collaborate Track

Here are some highlights from the Contribute and Collaborate Track where I gave a talk on The State of OpenFaaS.

With OpenFaaS you can: iterate faster, build your own platform, use any language, own your data, unlock your community. Great introduction from @alexellisuk #DockerCon

— Mark Jeromin (@devtty2) June 13, 2018

Here I am with Idit Levine from Solo.io and Stefan Prodan from Weaveworks who is also a core contributor to OpenFaaS.

All set to talk about serverless functions made simple in the collaborate and communicate track with @stefanprodan @Idit_Levine and @alexellisuk pic.twitter.com/nTv6yP9Qvh

— OpenFaaS (@openfaas) June 13, 2018

The global group of contributors and influencers is growing, and I think it’s important to state that OpenFaaS is built by the community for the Open Source community – that means, for you.

Thanks to @alexellisuk for the call out to @monadic and @stefanprodan for our support for #OpenFaaS
Happy to help!
😸

Hailing from #DockerCon pic.twitter.com/gtYGvXkCu7

— Tamao Nakahara (@mewzherder) June 13, 2018

That sums up the highlights, and there’s much more on Twitter if you don’t want to miss out.

Get involved

Here are three ways you can get involved with the community and project.

If you’d like to get involved then the best way is to join our Slack community:

https://docs.openfaas.com/community

Find out about OpenFaaS Cloud

Find out more about OpenFaaS Cloud and try the public demo, or install it on your own OpenFaaS cluster today:

https://docs.openfaas.com/openfaas-cloud/intro/

Deploy OpenFaaS on Kubernetes

You can deploy OpenFaaS on Kubernetes with helm within a matter of minutes. Read the guide for helm below:

https://docs.openfaas.com/deployment/kubernetes/

Source

Happy birthday, Kubernetes: Here’s to three years of collaborative innovation

Three years ago, at the 1.0 launch day at OSCON, the community celebrated the first production-ready release of Kubernetes, now a de facto standard system for container orchestration. Today we celebrate Kubernetes not only to acknowledge the project on its birthday but also to thank the community for the extensive work and collaboration that has driven the project forward.

Let’s look back at what has made this one of the fastest moving modern open source projects, how we arrived at production maturity, and look forward to what’s to come.

Kubernetes: A look back

You’ve probably heard by now that Kubernetes was a project born at Google and based on the company’s internal infrastructure known as Borg. Early on, Google introduced the project to Red Hat and asked us to participate in it and help build out the community. In 2014, Kubernetes saw its first alpha release by a team of engineers who sought to simplify systems orchestration and management by decoupling applications and infrastructure and by also decoupling applications from their own components.

At the same time, enterprises around the world were increasingly faced with the pressure to innovate more quickly and bring new, differentiated applications to bear in crowded marketplaces. Industry interest began to consolidate around Kubernetes, thanks to its capacity for supporting rapid, iterative software development and the development of applications that could enable change across environments, from on-premise to the public cloud.

Before Kubernetes, the IT world attempted to address these enterprise needs with traditional Platform-as-a-Service (PaaS) offerings, but frequently these solutions were too opinionated in terms of the types of applications that you could run on them and how those applications were built and deployed. Kubernetes provided a much more unopinionated, open platform that enabled customers to deploy a broader range of applications with greater flexibility, and as a result, Kubernetes has been used as a core building block for both Containers-as-a-Service (CaaS) and PaaS-based platforms.

In July 2015, Kubernetes 1.0 was released and the Cloud Native Computing Foundation (CNCF) was born, a vendor-neutral governing body intended to host Kubernetes and related ecosystem projects. Red Hat was a founding member at the CNCF’s launch and we are pleased to see its growth. We also continue to support and contribute to the Kubernetes upstream, much as we did even pre-CNCF, and are excited to be a part of these critical milestones.

Dive in deeper with this explainer from Brendan Burns, a creator of Kubernetes, given at CoreOS Fest in 2015, for a brief technical picture of Kubernetes.

Kubernetes as the new Linux of the Cloud

So what makes Kubernetes so popular?

It is the demand for organizations to move to hybrid cloud and multi-cloud infrastructure. It is the demand for applications paired with the need to support cloud-native and traditional applications on the same platform. It is the desire to manage distributed systems with containerized software and a microservices infrastructure. It is the need for developers and administrators to focus on innovation rather than just keeping the lights on.

Kubernetes has many of the answers to address these demands. The project now provides the ability to self-heal when there is a problem, separate developer and operational concerns, and update itself in near real-time.

And, it’s all open source, which makes it available to all and enables contributors all around the world to better solve this next era of computing challenges together, in the open, unbeholden to siloed environments or proprietary platforms.

Pushing the project forward is the Kubernetes community, a diverse community with innovative ideas, discipline, maintenance, and a consensus-driven decision model. In more than 25 years of contributing to open source projects, ranging from the Linux kernel to OpenStack, we’ve seen few projects that can claim the velocity of Kubernetes. It is a testament to the project’s contributors’ ability to work collaboratively to solve a broad enterprise need that Kubernetes has moved so quickly from 1.0 to broad industry support in three years.

Kubernetes has won over the support of hundreds of individuals and companies, both large and small, including major cloud providers. Red Hat has been contributing to the project since it was open sourced in 2014, and today is the second leading corporate contributor (behind only Google) working on and contributing code to the project. Not to mention, we are the experts behind Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform.

We’ve observed that Kubernetes is becoming nearly as ubiquitous as Linux in the enterprise IT landscape, with the potential to serve as the basis for nearly any type of IT initiative.

Kubernetes Major Milestones

Some major milestones over the years to note include contributions making it more extensible:

  • September 2015: Kubernetes’s use of the Container Network Interface (CNI) has enabled a rich ecosystem of networking options since the early days of Kubernetes.
  • December 2016: The addition of the Container Runtime Interface (CRI), the way containers start and stop, was a major step forward in Kubernetes 1.5 and on, and helped move towards OCI-compliant containers and tooling.
  • January 2017: etcd v3, a backbone of large-scale distributed systems created by CoreOS and maintained at Red Hat, came into production in Kubernetes 1.6.
  • June 2017: Custom resource definitions (CRD) were introduced to enable API extension by those outside the core platform.
  • October 2017: The stability of role-based access control (RBAC), which lets admins control access to the Kubernetes API, made Kubernetes even more dependable with this security feature enterprises care about. It reached stable in Kubernetes 1.8 but had been widely used in the platform since 1.3.
  • March 2018: How storage is provided and consumed has moved along well in three years with the availability of local persistent volumes (PVs) and dynamically provisioned PVs. Notably, at this time, the Container Storage Interface (CSI), which makes the Kubernetes volume plugin layer more extensible, moved to beta this year in Kubernetes 1.10.
  • June 2018: Custom Resource Definition (CRD) versioning was introduced as beta in Kubernetes 1.11 and is a move toward lowering the barrier and making it easier for you to start building Operators.

Check out some other notable mentions from last year.

Honorable Mentions

But what about the heroic parts of Kubernetes that may not get enough applause? Here are some honorable mentions from our team.

Kubernetes is built for workload diversity

“The scope of the workloads that can/could/will potentially be tackled needs some appreciation,” said Scott McCarty, principal technology product manager, Linux containers at Red Hat. “It’s not just web workloads; it’s much more than that. Kubernetes today solves the 80/20 rule. Imagine what other workloads could come to the project.”

Kubernetes is focused

“The fact that it is focused and is a low-level tool, similar to docker containers or the Linux kernel, is what makes it so broadly exciting. It’s also what makes it not a solution by itself,” said Brian Gracely, director of OpenShift product strategy at Red Hat. “The fact that it’s not a PaaS, and is built to be multi-cloud driven makes it widely usable.”

Kubernetes is extensible

“As Kubernetes matures, the project has shifted its attention to support a broad set of extension points that enable a vibrant ecosystem of solutions to build on top of the platform. Developers are able to extend the Kubernetes API, prove out the pattern, and contribute it back to the broader community,” said Derek Carr, senior principal software engineer and OpenShift architect at Red Hat, and Kubernetes Steering Committee member and SIG-Node co-lead.

Kubernetes is all about collaboration

“At three years old, Kubernetes is now proving itself as one of the most successful open source collaboration efforts since Linux. Having learned lessons from prior large scale, cross-community collaboration initiatives, such as OpenStack, the Kubernetes community has managed to leapfrog to a new level of effective governance that embraces diversity and an ethos of openness – all of which has driven incredible amounts of innovation into all aspects of the project,” said Diane Mueller, director, community development, Red Hat Cloud Platform.

The Next Frontier

Kubernetes is being used in production by many companies globally, with Red Hat OpenShift Container Platform providing a powerful choice for organizations looking to embrace Kubernetes for mission-critical roles, but we expect still more innovation to come.

A major innovation on the rise is the Operator Framework that helps manage Kubernetes native applications in an effective, automated, and scalable way. Follow the project here: https://github.com/operator-framework.

If you want to learn more about Kubernetes in general, Brian Gracely discussed what’s next for Kubernetes, and you can learn more by listening to a recent webinar about what to look forward to.

Source

conu (Container utilities) – scripting containers made easy

Introducing conu – Scripting Containers Made Easier

There has been a need for a simple, easy-to-use handler for writing tests and other code around containers that would implement helpful methods and utilities. For this we introduce conu, a low-level Python library.

This project has been driven from the start by the requirements of container maintainers and testers. In addition to basic image and container management methods, it provides other often-used functions, such as mounting the container filesystem, shortcut methods for getting an IP address, exposed ports, logs and name, image extending using source-to-image, and many others.

conu aims for stable engine-agnostic APIs that would be implemented by several container runtime back-ends. Switching between two different container engines should require only minimum effort. When used for testing, one set of tests could be executed for multiple back-ends.

Hello world

The following example shows a snippet of code in which we run a container from a specified image, check its output, and gracefully delete it.

We have decided our desired container runtime would be docker (now the only fully implemented container runtime). The image is run with an instance of DockerRunBuilder, which is the way to set additional options and custom commands for the docker container run command.

import conu, logging

def check_output(image, message):
    command_build = conu.DockerRunBuilder(command=['echo', message])
    container = image.run_via_binary(command_build)

    try:
        # check the output
        assert container.logs_unicode() == message + '\n'
    finally:
        # cleanup
        container.stop()
        container.delete()

if __name__ == '__main__':
    with conu.DockerBackend(logging_level=logging.DEBUG) as backend:
        image = backend.ImageClass('registry.access.redhat.com/rhscl/httpd-24-rhel7')
        check_output(image, message='Hello World!')

Get http response

When dealing with containers that run as services, the container state ‘Running’ is often not enough. We need to check that its port is open and ready to serve, and also to send custom requests to it.

def check_container_port(image):
    """
    run container and wait for successful
    response from the service exposed via port 8080
    """
    port = 8080
    container = image.run_via_binary()
    container.wait_for_port(port)

    # check httpd runs
    http_response = container.http_request(port=port)
    assert http_response.ok

    # cleanup
    container.delete(force=True)

Look inside the container filesystem

To check presence and content of the configuration files, conu provides a way to easily mount the container filesystem with a predefined set of useful methods. The mount is in read-only mode, but we plan to also implement read-write modes in the next releases.

def mount_container_filesystem(image):
    # run httpd container
    container = image.run_via_binary()

    # mount container filesystem
    with container.mount() as fs:
        # check presence of httpd configuration file
        assert fs.file_is_present('/etc/httpd/conf/httpd.conf')

        # check presence of default httpd index page
        index_path = '/opt/rh/httpd24/root/usr/share/httpd/noindex/index.html'
        assert fs.file_is_present(index_path)

        # and its content
        index_text = fs.read_file(index_path)

So why not just use docker-py?

Aside from Docker, conu also aims to support other container runtimes by providing a generic API. To implement the Docker back-end, conu actually uses docker-py. Conu also implements other utilities that are generally used when dealing with containers. Adopting other utilities should also be simple.

And what about container testing frameworks?

You don’t have to be limited by a specified set of tests. When writing code with conu, you can acquire ports, sockets, and filesystems, and the only limits you have are the ones set by Python. In cases where conu does not support certain features and you don’t want to deal with a subprocess, there is a run_cmd utility that helps you simply run the desired command.

We are reaching out to you to gather feedback and encourage contribution to conu to make scripting around containers even more efficient. We have already successfully used conu for several image tests (for example here), and it also helped while implementing clients for executing specific kinds of containers.

For more information, see conu documentation or source

Join the Red Hat Developer Program (it’s free) and get access to related cheat sheets, books, and product downloads.

Source

Codefresh adds native integration for Azure Kubernetes Service

Deploying an application to Kubernetes is a very easy process when you use Codefresh as your CI/CD solution. Codefresh comes with its own integrated Kubernetes dashboard that allows you to view pods, deployments, and services in a unified manner regardless of the cloud provider behind the cluster.

This makes Codefresh the perfect solution for multi-cloud installations, as you can gather all cluster information in a single view even when the clusters come from multiple providers. At the most basic level, you can add any Kubernetes cluster in Codefresh using its credentials (token, certificate, URL). This integration process is perfectly valid, but it involves some time-consuming manual steps to gather these credentials.

Today we are happy to announce that you can now add your Azure Kubernetes Service (AKS) cluster in a quicker way using native integration. The process is much simpler than the “generic” Kubernetes integration.

First, navigate to the integration screen from the left sidebar and select “Kubernetes”. Click the drop-down menu “add provider”. You will see the new option for “Azure AKS”.

Adding Azure cluster

Click the “Authenticate” button and enter your login information for Azure. You should also accept the permissions that Codefresh asks for.
At the time of writing, you will need a company/organizational account with Azure.

Once Codefresh gets the required permissions, you will see your Azure subscriptions and available clusters.

Selecting your cluster

Click “ADD” and Codefresh will show you the basic details of your cluster:

Basic cluster details

That’s it! The process of adding an Azure Kubernetes cluster is much easier now. There is no need to manually enter token and certificate information anymore.

The cluster is now available in the Kubernetes dashboard along with all your other clusters.

Kubernetes dashboard

You are now ready to start deploying your applications using Codefresh pipelines.

New to Codefresh? Create Your Free Account Today!

Source

Kubernetes v1.12: Introducing RuntimeClass – Kubernetes

 


Author: Tim Allclair (Google)
Kubernetes originally launched with support for Docker containers running native applications on a Linux host. Starting with rkt in Kubernetes 1.3, more runtimes began to arrive, which led to the development of the Container Runtime Interface (CRI). Since then, the set of alternative runtimes has only expanded: projects like Kata Containers and gVisor were announced for stronger workload isolation, and Kubernetes’ Windows support has been steadily progressing.
With runtimes targeting so many different use cases, a clear need for mixed runtimes in a cluster arose. But all these different ways of running containers have brought a new set of problems to deal with:

  • How do users know which runtimes are available, and select the runtime for their workloads?
  • How do we ensure pods are scheduled to the nodes that support the desired runtime?
  • Which runtimes support which features, and how can we surface incompatibilities to the user?
  • How do we account for the varying resource overheads of the runtimes?

RuntimeClass aims to solve these issues.

RuntimeClass in Kubernetes 1.12

RuntimeClass was recently introduced as an alpha feature in Kubernetes 1.12. The initial implementation focuses on providing a runtime selection API, and paves the way to address the other open problems.
The RuntimeClass resource represents a container runtime supported in a Kubernetes cluster. The cluster provisioner sets up, configures, and defines the concrete runtimes backing the RuntimeClass. In its current form, a RuntimeClassSpec holds a single field, the RuntimeHandler. The RuntimeHandler is interpreted by the CRI implementation running on a node, and mapped to the actual runtime configuration. Meanwhile the PodSpec has been expanded with a new field, RuntimeClassName, which names the RuntimeClass that should be used to run the pod.
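
A hedged sketch of the selection side (my own example, not from the post): assuming the cluster provisioner has already defined a RuntimeClass named gvisor and, on 1.12, enabled the RuntimeClass feature gate, a pod opts in simply by naming the class. This uses the official Python client; the field requires a client version recent enough to know about runtimeClassName.

from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-echo"),
    spec=client.V1PodSpec(
        runtime_class_name="gvisor",  # assumes this RuntimeClass was defined by the cluster provisioner
        restart_policy="Never",
        containers=[
            client.V1Container(name="echo", image="busybox", command=["echo", "hello"]),
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
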
Why is RuntimeClass a pod level concept? The Kubernetes resource model expects certain resources to be shareable between containers in the pod. If the pod is made up of different containers with potentially different resource models, supporting the necessary level of resource sharing becomes very challenging. For example, it is extremely difficult to support a loopback (localhost) interface across a VM boundary, but this is a common model for communication between two containers in a pod.

What’s next?

The RuntimeClass resource is an important foundation for surfacing runtime properties to the control plane. For example, to implement scheduler support for clusters with heterogeneous nodes supporting different runtimes, we might add NodeAffinity terms to the RuntimeClass definition. Another area to address is managing the variable resource requirements to run pods of different runtimes. The Pod Overhead proposal was an early take on this that aligns nicely with the RuntimeClass design, and may be pursued further.
Many other RuntimeClass extensions have also been proposed, and will be revisited as the feature continues to develop and mature. A few more extensions that are being considered include:

  • Surfacing optional features supported by runtimes, and better visibility into errors caused by incompatible features.
  • Automatic runtime or feature discovery, to support scheduling decisions without manual configuration.
  • Standardized or conformant RuntimeClass names that define a set of properties that should be supported across clusters with RuntimeClasses of the same name.
  • Dynamic registration of additional runtimes, so users can install new runtimes on existing clusters with no downtime.
  • “Fitting” a RuntimeClass to a pod’s requirements. For instance, specifying runtime properties and letting the system match an appropriate RuntimeClass, rather than explicitly assigning a RuntimeClass by name.

RuntimeClass will be under active development at least through 2019, and we’re excited to see the feature take shape, starting with the RuntimeClass alpha in Kubernetes 1.12.

Learn More

Source