{"id":567,"date":"2018-10-17T12:53:57","date_gmt":"2018-10-17T12:53:57","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=567"},"modified":"2018-10-17T12:58:27","modified_gmt":"2018-10-17T12:58:27","slug":"load-balancing-on-kubernetes-with-rancher","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/17\/load-balancing-on-kubernetes-with-rancher\/","title":{"rendered":"Load Balancing on Kubernetes with Rancher"},"content":{"rendered":"<h5>Build a CI\/CD Pipeline with Kubernetes and Rancher 2.0<\/h5>\n<p>Recorded online meetup on best practices and tools for building pipelines with containers and Kubernetes.<\/p>\n<p><a href=\"https:\/\/rancher.com\/events\/2018\/2018-08-07-onlinemeetup-building-a-cicd-pipeline-with-k8s\/\" target=\"blank\">Watch the training<\/a><\/p>\n<p>When it comes to containerizing user applications and deploying them on Kubernetes, incremental design is beneficial. First you figure out how to package your application into a container. Then you decide on a deployment model \u2013 do you want one container or multiple ones \u2013 plus any other scheduling rules, and configure liveness and readiness probes so that if the application goes down, Kubernetes can safely restore it.<\/p>\n<p>The next step would be to expose the workload internally and\/or externally, so that it can be reached by microservices in the same Kubernetes cluster, or &#8211; if it\u2019s a user-facing app &#8211; from the internet.<\/p>\n<p>And as your application gets bigger, providing it with load-balanced access becomes essential. This article focuses on various use cases requiring Load Balancers, and on fun and easy ways to achieve load balancing with Kubernetes and Rancher.<\/p>\n<h2>L4 Load Balancing<\/h2>\n<p>Let\u2019s imagine you have an nginx webserver running as a Kubernetes workload on every node in the cluster. 
It was well tested locally, and the decision was made to go to production by exposing the service to the internet, making sure the traffic is evenly distributed between the nodes the workload resides on. The easiest way to achieve this setup is by picking the L4 Load Balancer option when you open the port for the workload in the Rancher UI:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/BJbanLw.png\" alt=\"Imgur\" \/><\/p>\n<p>As a result, the workload would get updated with a publicly available endpoint, and, if you click on it, the nginx application page would load:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/eBuCMqX.png\" alt=\"Imgur\" \/><\/p>\n<h3>What happens behind the scenes<\/h3>\n<p>A smooth user experience implies some heavy lifting done on the backend. When a user creates a workload with a Load Balancer port exposed via the Rancher UI\/API, two Kubernetes objects get created on the backend: the actual workload in the form of a Kubernetes deployment\/daemonset\/statefulset (depending on the workload type chosen), and a Service of type Load Balancer. The Load Balancer service in Kubernetes is a way to configure an L4 TCP load balancer that would forward and balance traffic from the internet to your backend application. 
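<\/p>\n<p>Under the hood, the generated object is a regular Kubernetes Service with <code>type: LoadBalancer<\/code>. A minimal sketch of such a service (the name, selector, and port values here are illustrative, not the exact objects Rancher generates) could look like this:<\/p>\n<pre><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-lb\nspec:\n  type: LoadBalancer\n  selector:\n    app: nginx\n  ports:\n  - protocol: TCP\n    port: 80\n    targetPort: 80\n<\/code><\/pre>\n<p>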
The actual Load Balancer gets configured by the cloud provider where your cluster resides:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/QCKwPHE.png\" alt=\"Imgur\" \/><\/p>\n<h3>Limitations<\/h3>\n<ul>\n<li>The Load Balancer Service is enabled only on certain Kubernetes Cluster Providers in Rancher; first of all on those supporting Kubernetes as a service:<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/gVQsypz.png\" alt=\"Imgur\" \/><\/p>\n<p>\u2026and on the EC2 cloud provider where Rancher RKE acts as a Kubernetes cluster provisioner, provided the Cloud Provider is explicitly set to Amazon during cluster creation:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/iBM3cVx.png?1\" alt=\"Imgur\" \/><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/cAN3jh6.png?1\" alt=\"Imgur\" \/><\/p>\n<ul>\n<li>Each Load Balancer Service gets its own LB IP address, so it\u2019s recommended to check your Cloud Provider\u2019s pricing model, given that the costs can add up quickly.<\/li>\n<li>L4 balancing only; no HTTP-based routing.<\/li>\n<\/ul>\n<h2>L7 Load Balancing<\/h2>\n<h3>Host and path based routing<\/h3>\n<p>The typical use case for host\/path based routing is using a single IP (or the same set of IPs) to distribute traffic to multiple services. For example, a company needs to host two different applications &#8211; a website and chat &#8211; on the same set of public IP addresses. First they set up two separate applications to deliver these functions: Nginx for the website and LetsChat for the chat platform. Then, by configuring the ingress resource via the Rancher UI, the traffic can be split between these two workloads based on the incoming request. 
If a request comes in for userdomain.com\/chat, it will be directed to the LetsChat servers; if the request is for userdomain.com\/website, it will be directed to the web servers:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/IMv0HPt.png\" alt=\"Imgur\" \/><\/p>\n<p>Rancher uses native Kubernetes capabilities when it comes to ingress configuration, while providing some nice extra features on top. One of them is the ability to point the ingress to a workload directly, saving users from creating a service &#8211; normally the only resource that can act as a target for an ingress resource.<\/p>\n<h3>Ingress controller<\/h3>\n<p>An Ingress resource in Kubernetes is just a Load Balancer spec &#8211; a set of rules that have to be configured on an actual load balancer. The load balancer can be any system supporting reverse proxying, and it can be deployed as a standalone entity outside of the Kubernetes cluster, or run as a native Kubernetes application inside Kubernetes pod(s). Below, we\u2019ll provide examples of both models.<\/p>\n<h4>Ingress Load Balancer outside of the Kubernetes cluster<\/h4>\n<p>If a Kubernetes cluster is deployed on a public cloud that offers load balancing services, those services can be used to back the ingress resource. For example, for Kubernetes clusters on Amazon, an ALB ingress controller can program the ALB with ingress traffic routing rules:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/yEe05d4.png\" alt=\"Imgur\" \/><\/p>\n<p>The controller itself would be deployed as a native Kubernetes app that listens to ingress resource events and programs the ALB accordingly. The ALB ingress controller code can be found here: <a href=\"https:\/\/github.com\/coreos\/alb-ingress-controller\">CoreOS ALB ingress controller<\/a><\/p>\n<p>When users hit the URL userdomain.com\/website, the ALB would redirect the traffic to the corresponding Kubernetes NodePort service. 
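<\/p>\n<p>The host\/path rules described in this section map to an ingress resource along these lines: a hedged sketch using the extensions\/v1beta1 Ingress schema current at the time of writing, with illustrative service names and ports:<\/p>\n<pre><code>apiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  name: userdomain-ingress\nspec:\n  rules:\n  - host: userdomain.com\n    http:\n      paths:\n      - path: \/website\n        backend:\n          serviceName: nginx-website\n          servicePort: 80\n      - path: \/chat\n        backend:\n          serviceName: letschat\n          servicePort: 8080\n<\/code><\/pre>\n<p>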
Given that the Load Balancer is external to the cluster, the service has to be of NodePort type. A similar restriction applies to Ingress programming on GCE clusters.<\/p>\n<h4>Ingress Load Balancer as a native Kubernetes app<\/h4>\n<p>Let\u2019s look at another model, where the ingress controller acts both as a resource programming Load Balancer records, and as a Load Balancer itself. A good example is the <a href=\"https:\/\/github.com\/kubernetes\/ingress-nginx\">Nginx ingress controller<\/a> \u2013 a controller installed by default by <a href=\"https:\/\/rancher.com\/announcing-rke-lightweight-kubernetes-installer\/\">RKE<\/a> \u2013 Rancher\u2019s native tool used to provision k8s clusters on clouds like Digital Ocean and vSphere.<\/p>\n<p>The diagram below shows the deployment details of the nginx ingress controller:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/pO8Gp3d.png\" alt=\"Imgur\" \/><\/p>\n<p>RKE deploys the nginx ingress controller as a daemonset, which means every node in the cluster will get one nginx instance deployed as a Kubernetes pod. You can allocate limited sets of nodes for deployment using scheduling labels. Nginx acts as both the ingress controller and the load balancer, meaning it programs itself with the ingress rules. Then nginx gets exposed via a NodePort service, so when a user\u2019s request comes to nodeIP:nodePort, it gets redirected to the nginx ingress controller, and the controller routes the request to the backend workload based on hostname routing rules.<\/p>\n<h3>Programming ingress LB address to public DNS<\/h3>\n<p>By this point we\u2019ve got some understanding of how path\/hostname routing is implemented by an ingress controller. In the case where the LB is outside of the Kubernetes cluster, the user hits the URL, and based on the URL contents, the Load Balancer redirects traffic to one of the Kubernetes nodes where the user application workload is exposed via a NodePort service. 
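<\/p>\n<p>A NodePort service of that kind can be sketched as follows (the service name and port values are illustrative; by default Kubernetes allocates nodePort values from the 30000\u201332767 range):<\/p>\n<pre><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: website-nodeport\nspec:\n  type: NodePort\n  selector:\n    app: nginx-website\n  ports:\n  - port: 80\n    targetPort: 80\n    nodePort: 30080\n<\/code><\/pre>\n<p>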
In the case where the LB is running as a Kubernetes app, the Load Balancer exposes itself to the outside using a NodePort service, and then balances traffic between the workload pods\u2019 internal IPs. In both cases, the ingress would get updated with the address that the user has to hit in order to get to the Load Balancer:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/urfAnev.png\" alt=\"Imgur\" \/><\/p>\n<p>There is one question left unanswered \u2013 who is actually responsible for mapping that address to the userdomain.com hostname from the URL the user would hit? You\u2019d need some tool that programs a DNS service with these mappings. Here is one example of such a tool, from the kubernetes-incubator project: <a href=\"https:\/\/github.com\/kubernetes-incubator\/external-dns\">external-dns<\/a>. External-dns gets deployed as a Kubernetes-native application that runs in a pod, listens for ingress create\/update events, and programs the DNS of your choice. The tool supports providers like AWS Route53, Google Cloud DNS, etc. It doesn\u2019t come by default with a Kubernetes cluster, and has to be deployed on demand.<\/p>\n<p>In Rancher, we wanted to make things easy for users who are just getting familiar with Kubernetes, and who simply want to deploy their first workload and try to balance traffic to it. The requirement to set up a DNS plugin in this case can be a bit excessive. By using xip.io integration, we make DNS programming automatic for simple use cases:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/PtY5aIX.png\" alt=\"Imgur\" \/><\/p>\n<p>Let\u2019s check how it works with Rancher. 
When you create the ingress, pick the Automatically generate .xip.io hostname&#8230; option:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/jkI31TN.png\" alt=\"Imgur\" \/><\/p>\n<p>The hostname gets automatically generated, used as a hostname routing rule in the ingress, and programmed as a publicly available xip.io DNS record. So all you have to do is use the generated hostname in your URL:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/vuWpsMN.png\" alt=\"Imgur\" \/><\/p>\n<h3>If you want to learn more about Load Balancing in Rancher\u2026<\/h3>\n<p>Please join our upcoming online meetup: <a href=\"https:\/\/rancher.com\/events\/2018\/kubernetes-networking-masterclass-june-online-meetup\/\">Kubernetes Networking Master Class<\/a>! Since we released Rancher 2.0, we\u2019ve fielded hundreds of questions about different networking choices on our Rancher Slack Channel and Forums. From overlay networking and SSL to ingress controllers and network security policies, we\u2019ve seen many users get hung up on Kubernetes networking challenges. In our June online meetup, we\u2019ll be diving deep into Kubernetes networking, and discussing best practices for a wide variety of deployment options. <a href=\"https:\/\/rancher.com\/events\/2018\/kubernetes-networking-masterclass-june-online-meetup\/\">Register here.<\/a><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/rancher.com\/img\/bio\/alena-prokharchyk.jpg\" alt=\"Alena Prokharchyk\" width=\"100\" height=\"100\" \/><\/p>\n<p>Alena Prokharchyk<\/p>\n<p>Software Engineer<\/p>\n<p><a href=\"https:\/\/rancher.com\/blog\/2018\/2018-06-08-load-balancing-user-apps-with-rancher\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Build a CI\/CD Pipeline with Kubernetes and Rancher 2.0 Recorded Online Meetup of best practices and tools for building pipelines with containers and kubernetes. 
Watch the training When it comes to containerizing user applications and deploying them on Kubernetes, incremental design is beneficial. First you figure out how to package your application into a container. &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/17\/load-balancing-on-kubernetes-with-rancher\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Load Balancing on Kubernetes with Rancher&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-567","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/567","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=567"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/567\/revisions"}],"predecessor-version":[{"id":569,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/567\/revisions\/569"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=567"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=567"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com
\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=567"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}