{"id":763,"date":"2018-11-07T13:00:17","date_gmt":"2018-11-07T13:00:17","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=763"},"modified":"2018-11-07T13:05:24","modified_gmt":"2018-11-07T13:05:24","slug":"securing-the-base-infrastructure-of-a-kubernetes-cluster","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/11\/07\/securing-the-base-infrastructure-of-a-kubernetes-cluster\/","title":{"rendered":"Securing the Base Infrastructure of a Kubernetes Cluster"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/blog.giantswarm.io\/assets\/2018\/11\/securing-the-base-infrastructure-of-a-kubernetes-cluster.png\" alt=\"Securing the Base Infrastructure of a Kubernetes Cluster\" \/><\/p>\n<p>The <a href=\"https:\/\/blog.giantswarm.io\/why-is-securing-kubernetes-so-difficult\/\">first article<\/a> in this series <em>Securing Kubernetes for Cloud Native Applications<\/em>, provided a discussion on why it\u2019s difficult to secure Kubernetes, along with an overview of the various layers that require our attention, when we set about the task of securing that platform.<\/p>\n<p>The very first layer in the stack, is the base infrastructure layer. We could define this in many different ways, but for the purposes of our discussion, it\u2019s the sum of the infrastructure components on top of which Kubernetes is deployed. It\u2019s the physical or abstracted hardware layer for compute, storage, and networking purposes, and the environment in which these resources exist. 
It also includes the operating system, most probably Linux, and a container runtime environment, such as Docker.<\/p>\n<p>Much of what we\u2019ll discuss applies equally well to infrastructure components that underpin systems other than Kubernetes, but we\u2019ll pay special attention to those factors that enhance the security of Kubernetes.<\/p>\n<h2>Machines, Data Centers, and the Public Cloud<\/h2>\n<p>The adoption of the cloud as the vehicle for workload deployment, whether it\u2019s public, private, or a hybrid mix, continues apace. And whilst the need for specialist bare-metal server provisioning hasn\u2019t entirely gone away, the infrastructure that underpins the majority of today\u2019s compute resource is the virtual machine. It doesn\u2019t really matter, however, whether the machines we deploy are virtual (cloud-based or otherwise) or physical; either way, they will reside in a data center, hosted by our own organisation or by a chosen third party, such as a public cloud provider.<\/p>\n<p>Data centers are complex, and there is a huge amount to think about when it comes to security. A data center is a general resource for hosting the data processing requirements of an entire organisation, or even co-tenanted workloads from a multitude of independent organisations from different industries and geographies. For this reason, applying security to the many different facets of infrastructure at this level tends to be a full-blown corporate or supplier responsibility. 
It will be governed according to factors such as national or international regulation (<a href=\"https:\/\/www.hhs.gov\/hipaa\/for-professionals\/privacy\/index.html\">HIPAA<\/a>, <a href=\"https:\/\/ec.europa.eu\/commission\/priorities\/justice-and-fundamental-rights\/data-protection\/2018-reform-eu-data-protection-rules_en\">GDPR<\/a>) and industry compliance requirements (<a href=\"https:\/\/www.pcisecuritystandards.org\/pci_security\/how\">PCI DSS<\/a>), and often results in the pursuit of certified standards accreditation (<a href=\"https:\/\/www.iso.org\/isoiec-27001-information-security.html\">ISO 27001<\/a>, <a href=\"https:\/\/www.nist.gov\/itl\/current-fips\">FIPS<\/a>).<\/p>\n<p>In the case of a public cloud environment, a supplier can and will provide the necessary adherence to regulatory and compliance standards at the infrastructure layer, but at some point it comes down to the service consumer (you and me) to build further on this secure foundation. It\u2019s a shared responsibility. For the public cloud service consumer, this raises the question, \u201cwhat should I secure, and how should I go about it?\u201d There are a lot of people with a lot of views on the topic, but one credible entity is the <a href=\"https:\/\/www.cisecurity.org\/\">Center for Internet Security (CIS)<\/a>, a non-profit organisation dedicated to safeguarding public and private entities from the threat of malign cyber activity.<\/p>\n<h3>CIS Benchmarks<\/h3>\n<p>The CIS provides a range of tools, techniques, and information for combating the potential threat to the systems and data we rely on. <a href=\"https:\/\/www.cisecurity.org\/cis-benchmarks\/\">CIS Benchmarks<\/a>, for example, are per-platform best practice configuration guidelines for security, compiled by consensus among security professionals and subject matter experts. 
In recognition of the ever-increasing number of organisations embarking on transformation programmes that involve migration to public and\/or hybrid cloud infrastructure, the CIS have made it their business to provide benchmarks for the major public cloud providers. The <a href=\"https:\/\/www.cisecurity.org\/benchmark\/amazon_web_services\/\">CIS Amazon Web Services Foundations Benchmark<\/a> is one example, and similar benchmarks exist for the other major providers.<\/p>\n<p>These benchmarks provide foundational security configuration advice, covering identity and access management (IAM), ingress and egress, and logging and monitoring best practice, amongst other things. Implementing these benchmark recommendations is a great start, but it shouldn\u2019t be the end of the journey. Each public cloud provider will have their own set of detailed recommended best practices<a href=\"1\">1<\/a>,<a href=\"2\">2<\/a>,<a href=\"3\">3<\/a>, and much benefit can be gained from other expert voices in the domain, such as the <a href=\"https:\/\/cloudsecurityalliance.org\/\">Cloud Security Alliance<\/a>.<\/p>\n<p>Let\u2019s take a moment to look at a typical cloud-based scenario that requires some careful planning from a security perspective.<\/p>\n<h3>Cloud Scenario: Private vs. Public Networks<\/h3>\n<p>How can we balance the need to keep a Kubernetes cluster secure by limiting access, whilst enabling the required access for external clients via the Internet, and also from within our own organisation?<\/p>\n<ul>\n<li>Use a private network for the machines that host Kubernetes &#8211; ensure that the host machines that represent the cluster\u2019s nodes don\u2019t have public IP addresses. Removing the ability to make a direct connection with any of the host machines significantly reduces the available options for attack. 
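As a sketch, on AWS this can be as simple as launching each node without a public IP, assuming the AWS CLI (the image and subnet IDs below are illustrative placeholders):\n<pre><code>aws ec2 run-instances \\\n  --image-id ami-0123456789abcdef0 \\\n  --subnet-id subnet-0123456789abcdef0 \\\n  --no-associate-public-ip-address<\/code><\/pre>\n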
This simple precaution provides significant benefits, and would prevent the kind of compromises that result in the exploitation of compute resource for cryptocurrency mining, for example.<\/li>\n<li>Use a bastion host to access the private network &#8211; external access to the hosts\u2019 private network, which will be required to administer the cluster, should be provided via a suitably configured bastion host. The Kubernetes API will often also be exposed in a private network behind the bastion host. It may also be exposed publicly, but it is recommended to at least restrict access by whitelisting IP addresses from an organisation\u2019s internal network and\/or its VPN server.<\/li>\n<li>Use VPC peering with internal load balancers\/DNS &#8211; where workloads running in a Kubernetes cluster with a private network need to be accessed by other private, off-cluster clients, the workloads can be exposed with a service that invokes an internal load balancer. For example, to have an internal load balancer created in an AWS environment, the service would need the following annotation: service.beta.kubernetes.io\/aws-load-balancer-internal: 0.0.0.0\/0. If clients reside in another VPC, then the VPCs will need to be peered.<\/li>\n<li>Use an external load balancer with ingress &#8211; workloads are often designed to be consumed by anonymous, external clients originating from the Internet; how is it possible to allow traffic to find the workloads in the cluster when the cluster is deployed to a private network? We can achieve this in a couple of different ways, depending on the requirement at hand. The first option would be to expose workloads using a Kubernetes service object, which would result in the creation of an external cloud load balancer service (e.g. AWS ELB) on a public subnet. This approach can be quite costly, as each exposed service invokes a dedicated load balancer, but it may be the preferred solution for non-HTTP services. 
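As a minimal sketch, assuming a workload labelled app: my-app (the service name, label, and port here are all illustrative), this first option might look like the following:\n<pre><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: my-tcp-service      # illustrative name\nspec:\n  type: LoadBalancer        # provisions an external cloud load balancer (e.g. AWS ELB)\n  selector:\n    app: my-app             # illustrative pod label\n  ports:\n  - port: 5432              # example non-HTTP (TCP) port\n    targetPort: 5432<\/code><\/pre>\n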
For HTTP-based services, a more cost-effective approach would be to deploy an ingress controller to the cluster, fronted by a Kubernetes service object, which in turn creates the load balancer. Traffic addressed to the load balancer\u2019s DNS name is routed to the ingress controller endpoint(s), which evaluates the rules associated with any defined ingress objects, before further routing to the endpoints of the services in the matched rules.<\/li>\n<\/ul>\n<p>This scenario demonstrates the need to carefully consider how to configure the infrastructure to be secure, whilst providing the capabilities required for delivering services to their intended audience. It\u2019s not a unique scenario, and there will be other situations that require similar treatment.<\/p>\n<h2>Locking Down the Operating System and Container Runtime<\/h2>\n<p>Assuming we\u2019ve investigated and applied the necessary security configuration to make the machine-level infrastructure and its environment secure, the next task is to lock down the host operating system (OS) of each machine, and the container runtime that\u2019s responsible for managing the lifecycle of containers.<\/p>\n<h3>Linux OS<\/h3>\n<p>Whilst it\u2019s possible to run Microsoft Windows Server as the OS for Kubernetes worker nodes, more often than not, the control plane and worker nodes will run a variant of the Linux operating system. There might be many factors that govern the choice of Linux distribution (commercial considerations, in-house skills, OS maturity), but if it\u2019s possible, use a minimal distribution that has been designed just for the purpose of running containers. Examples include <a href=\"https:\/\/coreos.com\/os\/docs\/latest\/\">CoreOS Container Linux<\/a>, <a href=\"https:\/\/www.ubuntu.com\/core\">Ubuntu Core<\/a>, and the <a href=\"https:\/\/www.projectatomic.io\/\">Atomic Host<\/a> variants. 
These operating systems have been stripped down to the bare minimum to facilitate running containers at scale, and as a consequence, have a significantly reduced attack surface.<\/p>\n<p>Again, the CIS have a number of different benchmarks for different flavours of Linux, providing best practice recommendations for securing the OS. These benchmarks cover what might be considered the mainstream distributions of Linux, such as RHEL, Ubuntu, SLES, Oracle Linux, and Debian. If your preferred distribution isn\u2019t covered, there is a <a href=\"https:\/\/www.cisecurity.org\/benchmark\/distribution_independent_linux\/\">distribution independent<\/a> CIS benchmark, and there are often distribution-specific guidelines, such as the <a href=\"https:\/\/coreos.com\/os\/docs\/latest\/hardening-guide.html\">CoreOS Container Linux Hardening Guide<\/a>.<\/p>\n<h3>Docker Engine<\/h3>\n<p>The final component in the infrastructure layer is the container runtime. In the early days of Kubernetes, there was no choice available; the container runtime was necessarily the Docker engine. With the advent of the Kubernetes <a href=\"https:\/\/kubernetes.io\/blog\/2016\/12\/container-runtime-interface-cri-in-kubernetes\/\">Container Runtime Interface<\/a>, however, it\u2019s possible to remove the Docker engine dependency in favour of a runtime such as <a href=\"http:\/\/cri-o.io\/\">CRI-O<\/a>, <a href=\"https:\/\/containerd.io\/\">containerd<\/a>, or <a href=\"https:\/\/github.com\/kubernetes\/frakti\">Frakti<\/a>.<a href=\"4\">4<\/a> In fact, as of Kubernetes version 1.12, an alpha feature (<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/containers\/runtime-class\/\">Runtime Class<\/a>) allows for running multiple container runtimes side-by-side in a cluster. 
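<\/p>\n<p>As a sketch, the alpha Runtime Class API lets us declare a named runtime handler and reference it from a pod spec (the gvisor handler name here is illustrative, and must match a handler configured in the node\u2019s CRI runtime):<\/p>\n<pre><code>apiVersion: node.k8s.io\/v1alpha1\nkind: RuntimeClass\nmetadata:\n  name: gvisor\nspec:\n  runtimeHandler: gvisor\n---\n# Pods then opt in via the pod spec:\n# spec:\n#   runtimeClassName: gvisor<\/code><\/pre>\n<p>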
Whichever container runtimes are deployed, they need securing.<\/p>\n<p>Despite this variety of choice, the Docker engine remains the default container runtime for Kubernetes (although this may change to containerd in the near future), and we\u2019ll consider its security implications here. It\u2019s built with a large number of configurable security settings, some of which are turned on by default, but which can be bypassed on a per-container basis. One such example is the <a href=\"https:\/\/github.com\/moby\/moby\/blob\/master\/oci\/defaults.go#L14-L30\">whitelist of Linux kernel capabilities<\/a> applied to each container on creation, which helps to diminish the privileges available inside a running container.<\/p>\n<p>Once again, the CIS maintain a benchmark for the Docker platform, the <a href=\"https:\/\/www.cisecurity.org\/benchmark\/docker\/\">CIS Docker Benchmark<\/a>. It provides best practice recommendations for configuring the Docker daemon for optimal security. There\u2019s even a handy open source tool (script) called <a href=\"https:\/\/github.com\/docker\/docker-bench-security\">Docker Bench for Security<\/a>, which can be run against a Docker engine to evaluate its conformance to the CIS Docker Benchmark. The tool can be run periodically to expose any drift from the desired configuration.<\/p>\n<p>Some care needs to be taken when considering and measuring the security configuration of the Docker engine when it\u2019s used as the container runtime for Kubernetes. Kubernetes ignores many of the configurable features of the Docker daemon in preference to its own security controls. For example, the Docker daemon is configured to apply a default whitelist of available Linux kernel system calls to every created container, using a <a href=\"https:\/\/docs.docker.com\/engine\/security\/seccomp\/\">seccomp profile<\/a>. 
Unless configured otherwise, Kubernetes will instruct Docker to create pod containers \u2018unconfined\u2019 from a seccomp perspective, giving containers access to each and every syscall available. In other words, what gets configured at the lower \u2018Docker layer\u2019 may be undone at a higher level in the platform stack. We\u2019ll cover how to mitigate these discrepancies with security contexts in a future article.<\/p>\n<h2>Summary<\/h2>\n<p>It might be tempting to focus all our attention on the secure configuration of the Kubernetes components of a platform. But as we\u2019ve seen in this article, the lower layer infrastructure components are equally important, and are ignored at our peril. In fact, providing a secure infrastructure layer can even mitigate problems we might introduce in the cluster layer itself. Keeping our nodes private, for example, will prevent an inadequately secured kubelet from being exploited for <a href=\"https:\/\/medium.com\/handy-tech\/analysis-of-a-kubernetes-hack-backdooring-through-kubelet-823be5c3d67c\">nefarious purposes<\/a>. Infrastructure components deserve the same level of attention as the Kubernetes components themselves.<\/p>\n<p>In the next article, we\u2019ll move on to discuss the implications of securing the next layer in the stack, the Kubernetes cluster components.<\/p>\n<p><a href=\"https:\/\/blog.giantswarm.io\/securing-the-base-infrastructure-of-a-kubernetes-cluster\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The first article in this series Securing Kubernetes for Cloud Native Applications, provided a discussion on why it\u2019s difficult to secure Kubernetes, along with an overview of the various layers that require our attention, when we set about the task of securing that platform. 
The very first layer in the stack, is the base infrastructure &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/11\/07\/securing-the-base-infrastructure-of-a-kubernetes-cluster\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Securing the Base Infrastructure of a Kubernetes Cluster&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-763","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/763","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=763"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/763\/revisions"}],"predecessor-version":[{"id":769,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/763\/revisions\/769"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=763"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=763"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=763"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","te
mplated":true}]}}