{"id":1495,"date":"2019-03-08T15:32:02","date_gmt":"2019-03-08T15:32:02","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=1495"},"modified":"2019-04-06T00:22:18","modified_gmt":"2019-04-06T00:22:18","slug":"challenges-and-solutions-for-scaling-kubernetes-in-the-hybrid-cloud","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2019\/03\/08\/challenges-and-solutions-for-scaling-kubernetes-in-the-hybrid-cloud\/","title":{"rendered":"Challenges and Solutions for Scaling Kubernetes in the Hybrid Cloud"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>Let\u2019s assume you\u2019re in business online: you have your own datacenter and a private cloud running your website. You\u2019ll have a number of servers deployed to run applications and store their data.<\/p>\n<p>The overall website traffic in this scenario is pretty constant, yet there are times where you expect traffic growth. How do you handle all this growth in traffic?<\/p>\n<p>The first thing that comes to mind is that you need to be able to scale some of your applications in order to cope with the traffic increase. As you don\u2019t want to spend money on new hardware, which you\u2019ll use only a few times per year, you think of moving to a <a href=\"https:\/\/searchcloudcomputing.techtarget.com\/definition\/hybrid-cloud\">hybrid cloud<\/a> set up.<\/p>\n<p>This can be a real time and cost saver. Scaling (parts of) your application to public cloud will allow you to pay for only the resources you use, for the time you use them.<\/p>\n<p>But how do you choose that public cloud, and can you choose more than one?<\/p>\n<p>The short answer is yes, you\u2019ll most likely need to choose more than one public cloud provider. Because you have different teams, working on different applications, having different requirements, one cloud provider may not fit all your needs. 
In addition, many organizations need to follow certain laws, regulations and policies which dictate that their data must physically reside in certain locations. A strategy of using more than one public cloud can help organizations meet those stringent and varied requirements. They can also select from multiple data center regions or availability zones, to be as close to their end users as possible, providing them optimal performance and minimal latency.<\/p>\n<h2>Challenges of scaling across multiple cloud providers<\/h2>\n<p>Now that you\u2019ve decided upon the cloud(s) to use, let\u2019s go back and think about the initial problem. You have applications with a microservices architecture, running in containers that need to be scaled. Here is where Kubernetes comes into play. Essentially, Kubernetes is a solution that helps you manage and orchestrate containerized applications in a cluster of nodes. Although Kubernetes will help you manage and scale deployments, nodes and clusters, it won\u2019t help you easily manage and scale them across cloud providers. More on that later.<\/p>\n<p>A Kubernetes cluster is a set of machines (physical\/virtual), resourced by Kubernetes to run applications. Essential Kubernetes concepts that you need to understand for our purposes are:<\/p>\n<ul>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod\/\">Pods<\/a> are units that control one or more containers, scheduled as one application. Typically you should create one Pod per application, so you can scale and control them separately.<\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/architecture\/nodes\/#what-is-a-node\">Node components<\/a> are worker machines in Kubernetes. A node may be a virtual machine (VM) or physical machine, depending on the cluster. 
Each node contains the services necessary to run pods and is managed by the master components.<\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/overview\/components\/#master-components\">Master components<\/a> manage the lifecycle of a Pod. If a Pod dies, the Controller creates a new one; if you scale Pods up or down, the Controller creates or destroys Pods accordingly. You can find more on the controller types <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/replicaset\/\">here<\/a>.<\/li>\n<\/ul>\n<p>The role of these three components is to scale and schedule containers. The master component dictates the scheduling and scaling commands. The nodes then orchestrate the pods accordingly.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/01-rancher-k8s-architecture.png\" alt=\"01\" \/><\/p>\n<p>These are only the basics of Kubernetes; for a more detailed understanding, you can check our Intro to Kubernetes <a href=\"https:\/\/rancher.com\/blog\/2018\/2018-09-07-introduction-to-kubernetes\/\">article<\/a>.<\/p>\n<p>There are a few key challenges that come to mind when trying to use Kubernetes to solve our scaling problem across multiple clouds:<\/p>\n<ul>\n<li>Difficulty of managing multiple clouds and clusters, and of setting users and policies<\/li>\n<li>Complexity of installation and configuration<\/li>\n<li>Different experiences for users\/teams depending on environment<\/li>\n<\/ul>\n<p>Here\u2019s where <a href=\"https:\/\/rancher.com\/\">Rancher<\/a> can help you. Rancher is an open source container manager used to run Kubernetes in production. 
Below are some features that Rancher provides that help us manage and scale our applications regardless of whether the compute resources are hosted on-prem or across multiple clouds:<\/p>\n<ul>\n<li>common infrastructure management across multiple clusters and clouds<\/li>\n<li>easy-to-use interface for Kubernetes configuration and deployment<\/li>\n<li>easy scaling of Pods and clusters with a few simple clicks<\/li>\n<li>access control and user management (LDAP, AD)<\/li>\n<li>workload, RBAC, policy and project management<\/li>\n<\/ul>\n<p>Rancher becomes your single point of control for multiple clusters, running on multiple clouds, on pretty much any infrastructure that can run Kubernetes.<\/p>\n<p>Let\u2019s now see how we can use Rancher to manage more than one cluster in two different regions.<\/p>\n<h2>Starting a Rancher 2.0 instance<\/h2>\n<p>To begin, start a Rancher 2.0 instance. There is a very intuitive getting started guide for this purpose <a href=\"https:\/\/rancher.com\/quick-start\/\">here<\/a>.<\/p>\n<h2>Hands-on with Rancher and Kubernetes<\/h2>\n<p>Let\u2019s create two hosted Kubernetes clusters in GCP, in two different regions. For this you will need a service account <a href=\"https:\/\/cloud.google.com\/iam\/docs\/creating-managing-service-account-keys\">key<\/a>.<\/p>\n<p>In the Global tab, we can see all the available clusters and their state. When ready, they should turn from the Provisioning state to Active.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/02-rancher-provisioning-clusters.png\" alt=\"02\" \/><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/03-rancher-clusters-tab.png\" alt=\"03\" \/><\/p>\n<p>A number of pods are already deployed to each node of your Kubernetes cluster. Those pods are used by Kubernetes and Rancher\u2019s internal systems.<\/p>\n<p>Let\u2019s proceed by deploying Workloads to both clusters. 
For each cluster, select the Default project; this will open the Workloads tab. Click on Deploy and set the Name and the Docker image to httpd for the first cluster and nginx for the second one. Since we want to expose our webservers to internet traffic, in the Port mapping area select a Layer-4 Load Balancer.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/05-rancher-workload-httpd.png\" alt=\"05\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/07-rancher-pod-httpd.png\" alt=\"07\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/06-rancher-workload-nginx.png\" alt=\"06\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/08-rancher-pod-nginx.png\" alt=\"08\" \/><\/p>\n<p>If you click on the nginx\/httpd workload, you will see that Rancher actually created a Deployment, just as Kubernetes recommends, to manage ReplicaSets. You will also see the Pod created by that ReplicaSet.<\/p>\n<h3>Scaling Pods and clusters<\/h3>\n<p>Our Rancher instance is managing two clusters:<\/p>\n<ul>\n<li>us-east1b-cluster, running 5 httpd Pods<\/li>\n<li>europe-west4-a cluster, running 1 nginx Pod<\/li>\n<\/ul>\n<p>Let\u2019s scale down some httpd Pods by clicking &#8211; under the Scale column. In no time we see the number of Pods decrease.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/09-rancher-scale-down.png\" alt=\"09\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/10-rancher-scale-down2.png\" alt=\"10\" \/><\/p>\n<p>To scale up Pods, click + under the Scale column. Once you do that, you should instantly see Pods being created and ReplicaSet scaling events. 
Try deleting one of the Pods using the right-hand side menu of the Pod, and notice how the ReplicaSet recreates it to match the desired state.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/11-rancher-scale-up.png\" alt=\"11\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/12-rancher-scale-up2.png\" alt=\"12\" \/><\/p>\n<p>So, we went from 5 httpd Pods to 2 in the first cluster, and from 1 nginx Pod to 7 in the second one. The second cluster now looks to be almost running out of resources.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/13-rancher-cluster-after-scale.png\" alt=\"13\" \/><\/p>\n<p>From Rancher we can also scale the cluster by adding extra Nodes. Let\u2019s try that by editing the node count to 5.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/14-rancher-scale-cluster.png\" alt=\"14\" \/><\/p>\n<p>While Rancher shows us \u201creconciling cluster,\u201d behind the scenes Kubernetes is actually upgrading the cluster master and resizing the node pool.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/15-rancher-cluster-updating.png\" alt=\"15\" \/><\/p>\n<p>Give this action some time and eventually you should see 5 nodes up and running.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/16-rancher-cluster-after-scale.png\" alt=\"16\" \/><\/p>\n<p>Let\u2019s check the Global tab, so we can have an overview of all the clusters Rancher is managing.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/17-rancher-clusters-tab2.png\" alt=\"17\" \/><\/p>\n<p>Now that there are new resources available, we can add more Pods; let\u2019s go to 13.<br \/>\n<img decoding=\"async\" 
src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/18-rancher-scale-up3.png\" alt=\"18\" \/><\/p>\n<p>Most importantly, any of these operations is performed with no downtime. While scaling Pods up or down, or resizing the cluster, hitting the public IP for httpd\/nginx Deployment the HTTP response status code was all the time 200.<br \/>\n<img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/19-rancher-httpd-statuscode.png\" alt=\"19\" \/><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/scaling-hybrid-cloud\/20-rancher-nginx-statuscode.png\" alt=\"20\" \/><\/p>\n<h2>Conclusion<\/h2>\n<p>Let\u2019s recap our hands-on scaling exercise:<\/p>\n<ul>\n<li>we created two clusters using Rancher<\/li>\n<li>we deployed workloads having a deployment of 1 Pod (nginx) and a deployment of 5 Pods (httpd)<\/li>\n<li>scaled in\/out those two deployments<\/li>\n<li>resized the cluster<\/li>\n<\/ul>\n<p>All of these actions were done with a few simple clicks, from <a href=\"https:\/\/rancher.com\/\">Rancher<\/a>, making use of the friendly and intuitive UI. Of course, you can do this entirely from the API as well. In either case, you have a single central point from where you can manage all your kubernetes clusters, observe their state or scale Deployments if needed. If you are looking for a tool to help you with infrastructure management and container orchestration in a hybrid\/multi-cloud, multi-region clusters, then <a href=\"https:\/\/rancher.com\/\">Rancher<\/a> might be the perfect fit for you.<\/p>\n<p><a href=\"https:\/\/rancher.com\/blog\/2018\/2018-10-18-scaling-kubernetes-in-hybrid-cloud\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Let\u2019s assume you\u2019re in business online: you have your own datacenter and a private cloud running your website. 
You\u2019ll have a number of servers deployed to run applications and store their data. The overall website traffic in this scenario is pretty constant, yet there are times where you expect traffic growth. How do you &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2019\/03\/08\/challenges-and-solutions-for-scaling-kubernetes-in-the-hybrid-cloud\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Challenges and Solutions for Scaling Kubernetes in the Hybrid Cloud&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-1495","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1495","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=1495"}],"version-history":[{"count":2,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1495\/revisions"}],"predecessor-version":[{"id":1550,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1495\/revisions\/1550"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=1495"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=1495"},{"taxono
my":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=1495"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}