{"id":1089,"date":"2019-01-18T11:27:57","date_gmt":"2019-01-18T11:27:57","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=1089"},"modified":"2019-01-24T03:50:58","modified_gmt":"2019-01-24T03:50:58","slug":"getting-microservices-deployments-on-kubernetes-with-rancher","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2019\/01\/18\/getting-microservices-deployments-on-kubernetes-with-rancher\/","title":{"rendered":"Getting Microservices Deployments on Kubernetes with Rancher"},"content":{"rendered":"<p>Most people running Docker in production use it as a way to build and<br \/>\nmove deployment artifacts. However, their deployment model is still very<br \/>\nmonolithic or comprises a few large services. The major stumbling<br \/>\nblock in the way of using true containerized microservices is the lack<br \/>\nof clarity on how to manage and orchestrate containerized workloads at<br \/>\nscale. Today we are going to talk about building a Kubernetes-based<br \/>\nmicroservice deployment. <a href=\"http:\/\/kubernetes.io\/\">Kubernetes<\/a> is the open<br \/>\nsource successor to Google\u2019s long-running Borg project, which has been<br \/>\nrunning such workloads at scale for about a decade. While there are<br \/>\nstill some rough edges, Kubernetes represents one of the most mature<br \/>\ncontainer orchestration systems available today.<\/p>\n<h3>Launching a Kubernetes Environment<\/h3>\n<p>You can take a look at the<br \/>\n<a href=\"http:\/\/kubernetes.io\/docs\/getting-started-guides\/docker-multinode\/\">Kubernetes<br \/>\nDocumentation<\/a><br \/>\nfor instructions on how to launch a Kubernetes cluster in various<br \/>\nenvironments. 
In this post, I\u2019m going to focus on launching <a href=\"http:\/\/rancher.com\/kubernetes\">Rancher\u2019s<br \/>\ndistribution of Kubernetes<\/a> as an<br \/>\nenvironment within the <a href=\"http:\/\/rancher.com\/rancher\">Rancher container management<br \/>\nplatform<\/a>. We\u2019ll start by setting up<br \/>\na Rancher server as described<br \/>\n<a href=\"http:\/\/docs.rancher.com\/rancher\/v1.2\/en\/quick-start-guide\/\">here<\/a> and<br \/>\nselect <em>Environment\/Default &gt; Manage Environments &gt; Add Environment<\/em>.<br \/>\nSelect <em>Kubernetes<\/em> from the Container Orchestration options and create your<br \/>\nenvironment. Now select <em>Infrastructure &gt; Hosts &gt; Add Host<\/em> and launch<br \/>\na few nodes for Kubernetes to run on. Note: we recommend adding at least<br \/>\n3 hosts, which will run the Rancher agent container. Once the hosts come<br \/>\nup, you should see the following screen, and in a few minutes your<br \/>\ncluster should be up and ready.<\/p>\n<p>There are lots of advantages to running Kubernetes within Rancher.<br \/>\nMostly, it just makes deployment and management dramatically easier<br \/>\nfor both users and the IT team. Rancher automatically sets up an HA<br \/>\ndeployment of etcd for the Kubernetes backend, and deploys all of<br \/>\nthe necessary services onto any hosts you add into this environment. It<br \/>\nsets up access controls, and can tie into existing LDAP and AD<br \/>\ninfrastructure easily. Rancher also automatically implements container<br \/>\nnetworking and load balancing services for Kubernetes. 
Using Rancher,<br \/>\nyou should have an HA implementation of Kubernetes in a few minutes.<\/p>\n<p><a href=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13055117\/Screen-Shot-2016-07-16-at-8.31.03-AM.png\"><img decoding=\"async\" src=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13055117\/Screen-Shot-2016-07-16-at-8.31.03-AM.png\" alt=\"kubernetes\nlaunching\" \/><\/a><\/p>\n<h3>Namespaces<\/h3>\n<p>Now that we have our cluster running, let\u2019s jump in and start going<br \/>\nthrough some basic Kubernetes resources. You can access the Kubernetes<br \/>\ncluster either directly through the kubectl CLI, or through the Rancher<br \/>\nUI. Rancher\u2019s access management layer controls who can access the<br \/>\ncluster, so you\u2019ll need to generate an API key from the Rancher UI<br \/>\nbefore accessing the CLI.<\/p>\n<p>The first Kubernetes resource we are going to look at is namespaces.<br \/>\nWithin a given namespace, all resources must have unique names. In<br \/>\naddition, labels used to link resources are scoped to a single<br \/>\nnamespace. This is why namespaces can be very useful for creating<br \/>\nisolated environments on the same Kubernetes cluster. For example, you<br \/>\nmay want to create an Alpha, Beta and Production environment for your<br \/>\napplication so that you can test the latest changes without impacting real<br \/>\nusers. 
To create a namespace, copy the following text into a file called<br \/>\nnamespace.yaml and run the kubectl create -f namespace.yaml command to<br \/>\ncreate a namespace called beta.<\/p>\n<p>kind: Namespace<br \/>\napiVersion: v1<br \/>\nmetadata:<br \/>\n\u00a0\u00a0name: beta<br \/>\n\u00a0\u00a0labels:<br \/>\n\u00a0\u00a0\u00a0\u00a0name: beta<\/p>\n<p>You can also create, view and select namespaces from the Rancher UI by<br \/>\nusing the Namespace menu on the top menu bar.<br \/>\n<a href=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13023604\/Screen-Shot-2016-08-13-at-5.35.48-AM.png\"><img decoding=\"async\" src=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13023604\/Screen-Shot-2016-08-13-at-5.35.48-AM.png\" alt=\"namespaces\" \/><\/a><\/p>\n<p>You can use the following command to set the namespace for CLI<br \/>\ninteractions using kubectl:<\/p>\n<p>$ kubectl config set-context Kubernetes --namespace=beta<\/p>\n<p>To verify that the context was set correctly, use the config view<br \/>\ncommand and verify the output matches the namespace you expect.<\/p>\n<p>$ kubectl config view | grep namespace<br \/>\nnamespace: beta<\/p>\n<h3>Pods<\/h3>\n<p>Now that we have our namespaces defined, let\u2019s start creating<br \/>\nresources. The first resource we are going to look at is a Pod. Kubernetes<br \/>\nrefers to a group of one or more containers as a pod. Containers in a pod<br \/>\nare deployed, started, stopped, and replicated as a<br \/>\ngroup. All containers in a pod are scheduled onto the same host, and they<br \/>\nshare a network namespace, so they can reach each other via localhost. Pods are<br \/>\nthe basic unit of scaling and cannot span across hosts, hence it\u2019s<br \/>\nideal to make them as close to a single workload as possible. 
This will<br \/>\neliminate the side-effects of scaling a pod up or down, as well as<br \/>\nensure we don\u2019t create pods that are too resource-intensive for our<br \/>\nunderlying hosts.<\/p>\n<p>Let\u2019s define a very simple pod named <em>mywebservice<\/em> which has one<br \/>\ncontainer in its spec named <em>web-1-10<\/em> using the <em>nginx<\/em> container image<br \/>\nand exposing port 80. Add the following text into a file called<br \/>\npod.yaml.<\/p>\n<p>apiVersion: v1<br \/>\nkind: Pod<br \/>\nmetadata:<br \/>\n\u00a0\u00a0name: mywebservice<br \/>\nspec:<br \/>\n\u00a0\u00a0containers:<br \/>\n\u00a0\u00a0- name: web-1-10<br \/>\n\u00a0\u00a0\u00a0\u00a0image: nginx:1.10<br \/>\n\u00a0\u00a0\u00a0\u00a0ports:<br \/>\n\u00a0\u00a0\u00a0\u00a0- containerPort: 80<\/p>\n<p>Run the kubectl create command to create your pod. If you set your<br \/>\nnamespace above using the set-context command, then the pods will be<br \/>\ncreated in the specified namespace. You can verify the status of your<br \/>\npod by running the <em>get pods<\/em> command. Once you are done, you can delete<br \/>\nthe pod by running the kubectl delete command.<\/p>\n<p>$ kubectl create -f .\/pod.yaml<br \/>\npod \"mywebservice\" created<br \/>\n$ kubectl get pods<br \/>\nNAME READY STATUS RESTARTS AGE<br \/>\nmywebservice 1\/1 Running 0 37s<br \/>\n$ kubectl delete -f pod.yaml<br \/>\npod \"mywebservice\" deleted<\/p>\n<p>You should also be able to see your pod in the Rancher UI by selecting<br \/>\n<em>Kubernetes &gt; Pods<\/em> from the top menu bar.<\/p>\n<p><a href=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13025234\/Screen-Shot-2016-08-13-at-5.52.20-AM.png\"><img decoding=\"async\" src=\"http:\/\/cdn.rancher.com\/wp-content\/uploads\/2016\/08\/13025234\/Screen-Shot-2016-08-13-at-5.52.20-AM.png\" alt=\"Screen Shot 2016-08-13 at 5.52.20\nAM\" \/><\/a><\/p>\n<h3>Replica Sets<\/h3>\n<p>Replica Sets, as the name implies, define how many replicas of each pod<br \/>\nwill be running. 
They also monitor and ensure the required number of<br \/>\npods are running, replacing pods that die. Note that replica sets are a<br \/>\nreplacement for Replication Controllers; however, for most<br \/>\nuse cases you will not use Replica Sets directly but instead use<br \/>\nDeployments. Deployments wrap replica sets and add the<br \/>\nfunctionality to perform rolling updates to your application.<\/p>\n<h3>Deployments<\/h3>\n<p>Deployments are a declarative mechanism to manage rolling updates of<br \/>\nyour application. With this in mind, let\u2019s define our first deployment<br \/>\nusing the pod definition above. The only difference is that we take out<br \/>\nthe name parameter, as names for the pods will be auto-generated<br \/>\nby the deployment. The text below shows the configuration for our<br \/>\ndeployment; copy it to a file called deployment.yaml.<\/p>\n<p>apiVersion: extensions\/v1beta1<br \/>\nkind: Deployment<br \/>\nmetadata:<br \/>\n\u00a0\u00a0name: mywebservice-deployment<br \/>\nspec:<br \/>\n\u00a0\u00a0replicas: 2 # We want two pods for this deployment<br \/>\n\u00a0\u00a0template:<br \/>\n\u00a0\u00a0\u00a0\u00a0metadata:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0labels:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0app: mywebservice<br \/>\n\u00a0\u00a0\u00a0\u00a0spec:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0containers:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: web-1-10<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0image: nginx:1.10<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ports:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 80<\/p>\n<p>Launch your deployment using the kubectl create command and then verify<br \/>\nthat the deployment is up using the get deployments command.<\/p>\n<p>$ kubectl create -f .\/deployment.yaml<br \/>\ndeployment \"mywebservice-deployment\" created<br \/>\n$ kubectl get deployments<br \/>\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br \/>\nmywebservice-deployment 2 2 2 2 7m<\/p>\n<p>You can get details about your deployment using the describe deployment<br \/>\ncommand. One of the useful items output by the describe command is a set<br \/>\nof events. A truncated example of the output from the describe command<br \/>\nis shown below. 
Currently your deployment should have only one event<br \/>\nwith the message: Scaled up replica set &#8230; to 2.<\/p>\n<p>$ kubectl describe deployment mywebservice-deployment<br \/>\nName: mywebservice-deployment<br \/>\nNamespace: beta<br \/>\nCreationTimestamp: Sat, 13 Aug 2016 06:26:44 -0400<br \/>\nLabels: app=mywebservice<br \/>\n&#8230;..<br \/>\n&#8230;.. Scaled up replica set mywebservice-deployment-3208086093 to 2<\/p>\n<h3>Scaling Deployments<\/h3>\n<p>You can modify the scale of the deployment by updating the<br \/>\ndeployment.yaml file from earlier to replace replicas: 2<br \/>\nwith replicas: 3 and run the apply command shown below. If you run<br \/>\nthe describe deployment command again, you will see a second event with<br \/>\nthe message:<br \/>\nScaled up replica set mywebservice-deployment-3208086093 to 3.<\/p>\n<p>$ kubectl apply -f deployment.yaml<br \/>\ndeployment \"mywebservice-deployment\" configured<\/p>\n<h3>Updating Deployments<\/h3>\n<p>You can also use the apply command to update your application by<br \/>\nchanging the image version. Modify the deployment.yaml file from earlier<br \/>\nto replace image: nginx:1.10 with image: nginx:1.11 and run the<br \/>\nkubectl apply command. If you run the describe deployment command again,<br \/>\nyou will see new events whose messages are shown below. You can see how<br \/>\nthe new replica set (2303032576) was scaled up and the old replica set<br \/>\n(3208086093) was scaled down in steps. The total number of pods<br \/>\nacross both replica sets is kept constant; however, the pods are gradually<br \/>\nmoved from the old replica set to the new one. 
This allows us to run<br \/>\ndeployments under load without service interruption.<\/p>\n<p>Scaled up replica set mywebservice-deployment-2303032576 to 1<br \/>\nScaled down replica set mywebservice-deployment-3208086093 to 2<br \/>\nScaled up replica set mywebservice-deployment-2303032576 to 2<br \/>\nScaled down replica set mywebservice-deployment-3208086093 to 1<br \/>\nScaled up replica set mywebservice-deployment-2303032576 to 3<br \/>\nScaled down replica set mywebservice-deployment-3208086093 to 0<\/p>\n<p>If, during or after the deployment, you realize that something is wrong,<br \/>\nyou can use the rollout command to undo<br \/>\nyour deployment change. This will apply the reverse operation to the one<br \/>\nabove and move load back to the previous version of the container.<\/p>\n<p>$ kubectl rollout undo deployment\/mywebservice-deployment<br \/>\ndeployment \"mywebservice-deployment\" rolled back<\/p>\n<h3>Health Checks<\/h3>\n<p>With deployments, we have seen how to scale our service up and down, as<br \/>\nwell as how to do deployments themselves. However, when running services<br \/>\nin production, it\u2019s also important to have live monitoring and<br \/>\nreplacement of service instances when they go down. Kubernetes provides<br \/>\nhealth checks to address this issue. Update the deployment.yaml file<br \/>\nfrom earlier by adding a livenessProbe configuration in the spec<br \/>\nsection. There are three types of liveness probes: <em>http<\/em>, <em>tcp<\/em> and<br \/>\n<em>container exec<\/em>. The first two check whether Kubernetes is able to<br \/>\nmake an http or tcp connection to the specified port. The container exec<br \/>\nprobe runs a specified command inside the container and asserts a zero<br \/>\nexit code. 
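<\/p>\n<p>For reference, the tcp and container exec probe variants take the following shape; tcpSocket and exec are the corresponding field names in the Kubernetes v1 API, and \/tmp\/healthy is just an illustrative file path:<\/p>\n<p>livenessProbe:<br \/>\n\u00a0\u00a0tcpSocket:<br \/>\n\u00a0\u00a0\u00a0\u00a0port: 80<br \/>\n\u00a0\u00a0initialDelaySeconds: 30<\/p>\n<p>livenessProbe:<br \/>\n\u00a0\u00a0exec:<br \/>\n\u00a0\u00a0\u00a0\u00a0command:<br \/>\n\u00a0\u00a0\u00a0\u00a0- cat<br \/>\n\u00a0\u00a0\u00a0\u00a0- \/tmp\/healthy<br \/>\n\u00a0\u00a0initialDelaySeconds: 30<\/p>\n<p>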
In the snippet shown below, we are using the http probe<br \/>\nto issue a GET request to port 80 at the root URL.<\/p>\n<p>apiVersion: extensions\/v1beta1<br \/>\nkind: Deployment<br \/>\nmetadata:<br \/>\n\u00a0\u00a0name: mywebservice-deployment<br \/>\nspec:<br \/>\n\u00a0\u00a0replicas: 3<br \/>\n\u00a0\u00a0template:<br \/>\n\u00a0\u00a0\u00a0\u00a0metadata:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0labels:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0app: mywebservice<br \/>\n\u00a0\u00a0\u00a0\u00a0spec:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0containers:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- name: web-1-11<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0image: nginx:1.11<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ports:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0- containerPort: 80<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0livenessProbe:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0httpGet:<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0path: \/<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0port: 80<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0initialDelaySeconds: 30<br \/>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0timeoutSeconds: 1<\/p>\n<p>If you recreate your deployment with the additional health check and run<br \/>\ndescribe deployment, you should see that Kubernetes now tells you that 3<br \/>\nof your replicas are unavailable. If you run describe again after the<br \/>\ninitial delay period of 30 seconds, you will see that the replicas are<br \/>\nnow marked as available. This is a good way to make sure that your<br \/>\ncontainers are healthy and to give your application time to come up<br \/>\nbefore Kubernetes starts routing traffic to it.<\/p>\n<p>$ kubectl create -f deployment.yaml<br \/>\ndeployment \"mywebservice-deployment\" created<br \/>\n$ kubectl describe deployment mywebservice-deployment<br \/>\n&#8230;<br \/>\nReplicas: 3 updated | 3 total | 0 available | 3 unavailable<\/p>\n<h3>Service<\/h3>\n<p>Now that we have a monitored, scalable deployment which can be updated<br \/>\nunder load, it\u2019s time to actually expose the service to real users.<br \/>\nCopy the following text into a file called service.yaml. 
Each node in<br \/>\nyour cluster exposes a port which can route traffic to the replicas<br \/>\nusing the Kube Proxy.<\/p>\n<p>apiVersion: v1<br \/>\nkind: Service<br \/>\nmetadata:<br \/>\n\u00a0\u00a0name: mywebservice<br \/>\n\u00a0\u00a0labels:<br \/>\n\u00a0\u00a0\u00a0\u00a0run: mywebservice<br \/>\nspec:<br \/>\n\u00a0\u00a0type: NodePort<br \/>\n\u00a0\u00a0ports:<br \/>\n\u00a0\u00a0- port: 80<br \/>\n\u00a0\u00a0\u00a0\u00a0protocol: TCP<br \/>\n\u00a0\u00a0\u00a0\u00a0name: http<br \/>\n\u00a0\u00a0selector:<br \/>\n\u00a0\u00a0\u00a0\u00a0app: mywebservice<\/p>\n<p>With the service.yaml file, we create the service using the create command,<br \/>\nand then we can look up the NodePort using the describe service command.<br \/>\nFor example, in my service I can access the application on port 31673 on<br \/>\nany of my Kubernetes\/Rancher agent nodes. Kubernetes will route traffic<br \/>\nto available nodes automatically if nodes are scaled up and down, become<br \/>\nunhealthy or are relaunched.<\/p>\n<p>$ kubectl create -f service.yaml<br \/>\nservice \"mywebservice\" created<br \/>\n$ kubectl describe service mywebservice | grep NodePort<br \/>\nNodePort: http 31673\/TCP<\/p>\n<p>In today\u2019s article, we looked at some basic Kubernetes resources, including<br \/>\nNamespaces, Pods, Deployments and Services. We looked at how to scale<br \/>\nour application up and down manually as well as how to perform rolling<br \/>\nupdates of our application. Lastly, we looked at configuring services in<br \/>\norder to expose our application externally. In subsequent articles, we<br \/>\nwill be looking at how to use these together to orchestrate a more<br \/>\nrealistic deployment. 
We will look at the resources covered today in<br \/>\nmore detail, including how to set up SSL\/TLS termination, multi-service<br \/>\ndeployments, service discovery and how the application would react to<br \/>\nfailure scenarios.<\/p>\n<p><em>Note: <a href=\"http:\/\/rancher.com\/creating-microservices-deployments-on-kubernetes-with-rancher-part-2\/\">Part<br \/>\n2<\/a><br \/>\nof this series is now available!<\/em><\/p>\n<p><a href=\"https:\/\/rancher.com\/getting-micro-services-production-kubernetes\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Most people running Docker in production use it as a way to build and move deployment artifacts. However, their deployment model is still very monolithic or comprises a few large services. The major stumbling block in the way of using true containerized microservices is the lack of clarity on how to manage and orchestrate &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2019\/01\/18\/getting-microservices-deployments-on-kubernetes-with-rancher\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Getting Microservices Deployments on Kubernetes with 
Rancher&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-1089","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1089","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=1089"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1089\/revisions"}],"predecessor-version":[{"id":1120,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/1089\/revisions\/1120"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=1089"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=1089"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=1089"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}