{"id":376,"date":"2018-10-16T13:58:45","date_gmt":"2018-10-16T13:58:45","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=376"},"modified":"2018-10-17T08:55:12","modified_gmt":"2018-10-17T08:55:12","slug":"from-cattle-to-k8s-scheduling-workloads-in-rancher-2-0","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/from-cattle-to-k8s-scheduling-workloads-in-rancher-2-0\/","title":{"rendered":"From Cattle to K8s &#8211; Scheduling Workloads in Rancher 2.0"},"content":{"rendered":"<p>An important and complex aspect of container orchestration is scheduling the application containers. Appropriate placement of containers onto the available shared infrastructure resources is the key to achieving maximum performance at optimal compute resource usage.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/yURoPT9.jpg\" alt=\"Imgur\" \/><\/p>\n<p>Cattle, which is the default orchestration engine for Rancher 1.6, provided various scheduling abilities to effectively place services, as documented <a href=\"https:\/\/rancher.com\/docs\/rancher\/v1.6\/en\/cattle\/scheduling\/#scheduling-services\">here<\/a>.<\/p>\n<p>With the release of the 2.0 version based on Kubernetes, Rancher now utilizes native Kubernetes scheduling. In this article, we will look at the scheduling methods available in Rancher 2.0 in comparison to Cattle\u2019s scheduling support.<\/p>\n<h5>How to Migrate from Rancher 1.6 to Rancher 2.1 Online Meetup<\/h5>\n<p>Key terminology differences, implementing key elements, and transforming Compose to YAML<\/p>\n<p><a href=\"https:\/\/rancher.com\/events\/2018\/2018-10-23-october-online-meetup-rancher1x-to-rancher2x\/\" target=\"blank\">Watch the video<\/a><\/p>\n<h2>Node Scheduling<\/h2>\n<p>Following native Kubernetes behavior, by default, pods in a Rancher 2.0 workload will be spread across the nodes (hosts) that are schedulable and have enough free capacity. 
But just like the 1.6 version, Rancher 2.0 also facilitates:<\/p>\n<ul>\n<li>Running all pods on a specific node.<\/li>\n<li>Node scheduling using labels.<\/li>\n<\/ul>\n<p>Here is how scheduling looks in the 1.6 UI. Rancher lets you either run all containers on a specific host, specify hard\/soft host labels, or use affinity\/anti-affinity rules while deploying services.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/MiqZSGj.png\" alt=\"Imgur\" \/><\/p>\n<p>And here is the equivalent node scheduling UI for Rancher 2.0 that provides the same features while deploying workloads.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/Keswh5R.png\" alt=\"Imgur\" \/><\/p>\n<p>Rancher uses the underlying native Kubernetes constructs to specify node affinity\/anti-affinity. Detailed documentation from Kubernetes is available <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#affinity-and-anti-affinity\">here<\/a>.<\/p>\n<p>Let\u2019s run through some examples that schedule workload pods using these node scheduling options, and then see what the Kubernetes YAML specs look like in comparison to the 1.6 Docker Compose config.<\/p>\n<h3>Example: Run All Pods on a Specific Node<\/h3>\n<p>While deploying a workload (navigate to your Cluster &gt; Project &gt; Workloads), it is possible to schedule all pods in your workload to a specific node.<\/p>\n<p>Here I am deploying a workload of scale = 2 using the nginx image on a specific node.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/UylWMFP.png\" alt=\"Imgur\" \/><\/p>\n<p>Rancher will choose that node as long as it has enough compute resources available and, if hostPort is used, no port conflicts. If the workload exposes itself using a nodePort that conflicts with another workload, the deployment gets created successfully, but no nodePort service is created. 
Therefore, the workload doesn\u2019t get exposed at all.<\/p>\n<p>On the Workloads tab, you can list workloads grouped by node. I can see both of the pods for my Nginx workload are scheduled on the specified node:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/uOLA1p4.png\" alt=\"Imgur\" \/><\/p>\n<p>Now here is what this scheduling rule looks like in the Kubernetes pod specs:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/KXOCG56.png\" alt=\"Imgur\" \/><\/p>\n<h3>Example: Host Label Affinity\/Anti-Affinity<\/h3>\n<p>I added the label foo=bar to node1 in my Rancher 2.0 cluster to test the label-based node scheduling rules.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/O0Qy0tk.png\" alt=\"Imgur\" \/><\/p>\n<h4>Host Label Affinity: Hard<\/h4>\n<p>Here is how to specify a host label affinity rule in the Rancher 2.0 UI. A hard affinity rule means that the host chosen must satisfy all the scheduling rules. If no such host can be found, the workload will fail to deploy.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/QU2uKw3.png\" alt=\"Imgur\" \/><\/p>\n<p>In the PodSpec YAML, this rule translates to the nodeAffinity field. Also note that I have included the Rancher 1.6 docker-compose.yml used to achieve the same scheduling behavior using labels.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/NkOA1td.png\" alt=\"Imgur\" \/><\/p>\n<h4>Host Label Affinity: Soft<\/h4>\n<p>If you are a Rancher 1.6 user, you know that a soft rule means that the scheduler <em>should<\/em> try to deploy the application per the rule, but can deploy even if the rule is not satisfied by any host. 
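<\/p>\n<p>For reference, a soft rule of this kind maps to the preferredDuringSchedulingIgnoredDuringExecution field in the pod spec. The snippet below is an illustrative sketch rather than UI-generated output; it reuses the foo=bar label from the example above, and the weight value is an arbitrary choice:<\/p>\n<pre>affinity:\n  nodeAffinity:\n    preferredDuringSchedulingIgnoredDuringExecution:\n    - weight: 100   # arbitrary preference weight (1-100), chosen for illustration\n      preference:\n        matchExpressions:\n        - key: foo\n          operator: In\n          values:\n          - bar\n<\/pre>\n<p>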
Here is how to specify this rule in the Rancher 2.0 UI.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/RVAlMIw.png\" alt=\"Imgur\" \/><\/p>\n<p>The corresponding YAML specs for the pod are shown below.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/8KyEMrS.png\" alt=\"Imgur\" \/><\/p>\n<h4>Host Label Anti-Affinity<\/h4>\n<p>Apart from the key = value host label matching rule, Kubernetes scheduling constructs also support the following operators:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/t214I2R.png\" alt=\"Imgur\" \/><\/p>\n<p>So to achieve anti-affinity, you can use the operators NotIn and DoesNotExist for the node label.<\/p>\n<h2>Support for Other 1.6 Scheduling Options<\/h2>\n<p>If you are a Cattle user, you will be familiar with a few other scheduling options available in Rancher 1.6:<\/p>\n<ul>\n<li><a href=\"https:\/\/rancher.com\/docs\/rancher\/v1.6\/en\/cattle\/scheduling\/#finding-hosts-with-container-labels\">Select a Host Using Container Labels<\/a><\/li>\n<li><a href=\"https:\/\/rancher.com\/docs\/rancher\/v1.6\/en\/rancher-services\/scheduler\/#resource-constraints\">Ability to Schedule Based on Resource Constraints<\/a><\/li>\n<li><a href=\"https:\/\/rancher.com\/docs\/rancher\/v1.6\/en\/rancher-services\/scheduler\/#restrict-services-on-host\">Ability to Schedule Only Specific Services on a Host<\/a><\/li>\n<\/ul>\n<p>If you are using these options in your Rancher 1.6 setups, it is possible to replicate them in Rancher 2.0 using native Kubernetes scheduling options. As of v2.0.8, there is no UI support for these options while deploying workloads, but you can always use them by importing the Kubernetes YAML specs into a Rancher cluster.<\/p>\n<h3>Schedule Using Container Labels<\/h3>\n<p>This 1.6 option lets you schedule containers to a host where a container with a specific label is already present. 
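<\/p>\n<p>In Kubernetes terms, this constraint is expressed against pod labels rather than node labels. As a rough sketch (the app=cache label here is hypothetical, used purely for illustration), a pod could be required to land on a node already running a pod carrying that label:<\/p>\n<pre>affinity:\n  podAffinity:\n    requiredDuringSchedulingIgnoredDuringExecution:\n    - labelSelector:\n        matchExpressions:\n        - key: app        # hypothetical label carried by the target pods\n          operator: In\n          values:\n          - cache\n      topologyKey: kubernetes.io\/hostname\n<\/pre>\n<p>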
To do this on Rancher 2.0, use the Kubernetes <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/#affinity-and-anti-affinity\">inter-pod affinity and anti-affinity feature<\/a>.<\/p>\n<p>As noted in these docs, Kubernetes allows you to constrain which nodes your pod can be scheduled to based on pod labels rather than node labels.<\/p>\n<p>One of the most-used scheduling features in 1.6 was anti-affinity to the service itself using labels on containers. To replicate this behavior in Rancher 2.0, we can use the pod anti-affinity constructs in the Kubernetes YAML specs. For example, consider an Nginx web workload. To ensure that pods in this workload do not land on the same host, you can use the podAntiAffinity construct as shown below. By specifying podAntiAffinity using labels, we ensure that no two Nginx replicas are co-located on the same node.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/VvN6NOU.png\" alt=\"Imgur\" \/><\/p>\n<p>Using the <a href=\"https:\/\/rancher.com\/docs\/rancher\/v2.x\/en\/cli\/\">Rancher CLI<\/a>, this workload can be deployed onto the Kubernetes cluster. Note that the above deployment specifies three replicas, and I have three schedulable nodes in the Kubernetes cluster.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/OZP7KqG.png\" alt=\"Imgur\" \/><\/p>\n<p>Since podAntiAffinity is specified, the three pods end up on different nodes. To further check how podAntiAffinity applies, I can scale the deployment up to four pods. Notice that the fourth pod cannot get scheduled, since the scheduler cannot find another node that satisfies the podAntiAffinity rule.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/5DdYUHf.png\" alt=\"Imgur\" \/><\/p>\n<h3>Resource-Based Scheduling<\/h3>\n<p>When creating a service in Rancher 1.6, you can specify the memory reservation and mCPU reservation in the Security\/Host tab of the UI. 
Cattle will schedule containers for the service onto hosts that have enough available compute resources.<\/p>\n<p>In Rancher 2.0, you can specify the memory and CPU resources required by your workload pods using resources.requests.memory and resources.requests.cpu under the pod container specs. You can find more detail about these specs <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-compute-resources-container\/#resource-requests-and-limits-of-pod-and-container\">here<\/a>.<\/p>\n<p>When you specify these resource requests, the Kubernetes scheduler will assign the pod to a node with sufficient capacity.<\/p>\n<h3>Schedule Only Specific Services to a Host<\/h3>\n<p>Rancher 1.6 lets you place container labels on a host to allow only specific containers to be scheduled on it.<\/p>\n<p>To achieve this in Rancher 2.0, use the equivalent Kubernetes feature of <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/taint-and-toleration\/\">adding node taints (like host tags) and using tolerations<\/a> in your pod specs.<\/p>\n<h2>Global Service<\/h2>\n<p>In Rancher 1.6, a <a href=\"https:\/\/rancher.com\/docs\/rancher\/v1.6\/en\/cattle\/scheduling\/#global-service\">global service<\/a> is a service with a container deployed on every host in the environment.<\/p>\n<p>If a service has the label io.rancher.scheduler.global: &#8216;true&#8217;, then the Rancher 1.6 scheduler will schedule a service container on each host in the environment. As mentioned in the documentation, if a new host is added to the environment and the host fulfills the global service\u2019s host requirements, the service will automatically be started on it by Rancher.<\/p>\n<p>The sample below is an example of a global service in Rancher 1.6. 
Note that just placing the required label is sufficient to make a service global.<\/p>\n<pre>version: '2'\nservices:\n  global:\n    image: nginx\n    stdin_open: true\n    tty: true\n    labels:\n      io.rancher.container.pull_image: always\n      io.rancher.scheduler.global: 'true'\n<\/pre>\n<p>How can we deploy a global service in Rancher 2.0 using Kubernetes?<\/p>\n<p>For this purpose, Rancher deploys a <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\">Kubernetes DaemonSet<\/a> object for the user\u2019s workload. A <em>DaemonSet<\/em> functions exactly like the Rancher 1.6 global service. The Kubernetes scheduler will deploy a pod on each node of the cluster, and as new nodes are added, the scheduler will start new pods on them provided they match the scheduling requirements of the workload.<\/p>\n<p>Additionally, in 2.0, you can also limit a DaemonSet to be deployed to nodes that have a specific label, as mentioned <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\">here<\/a>.<\/p>\n<h3>Deploying a <em>DaemonSet<\/em> Using the Rancher 2.0 UI<\/h3>\n<p>If you are a Rancher 1.6 user, to migrate your global service to Rancher 2.0 using the UI, navigate to your Cluster &gt; Project &gt; Workloads view. 
While deploying a workload, you can choose the following workload types:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/rYzp93g.png\" alt=\"Imgur\" \/><\/p>\n<p>This is what the corresponding Kubernetes YAML specs look like for the above <em>DaemonSet<\/em> workload:<\/p>\n<pre>apiVersion: apps\/v1beta2\nkind: DaemonSet\nmetadata:\n  labels:\n    workload.user.cattle.io\/workloadselector: daemonSet-default-globalapp\n  name: globalapp\n  namespace: default\nspec:\n  selector:\n    matchLabels:\n      workload.user.cattle.io\/workloadselector: daemonSet-default-globalapp\n  template:\n    metadata:\n      labels:\n        workload.user.cattle.io\/workloadselector: daemonSet-default-globalapp\n    spec:\n      affinity: {}\n      containers:\n      - image: nginx\n        imagePullPolicy: Always\n        name: globalapp\n        resources: {}\n        stdin: true\n        tty: true\n      restartPolicy: Always\n<\/pre>\n<h3>Docker Compose to Kubernetes YAML<\/h3>\n<p>To migrate a Rancher 1.6 global service to Rancher 2.0 using its Compose config, follow these steps.<\/p>\n<p>You can convert the docker-compose.yml file from Rancher 1.6 to Kubernetes YAML using the <a href=\"https:\/\/github.com\/kubernetes\/kompose\">Kompose<\/a> tool, and then deploy the application using either the <a href=\"https:\/\/kubernetes.io\/docs\/reference\/kubectl\/kubectl\/\">kubectl client tool<\/a> or <a href=\"https:\/\/rancher.com\/docs\/rancher\/v2.x\/en\/cli\/\">Rancher CLI<\/a> in the Kubernetes cluster.<\/p>\n<p>Consider the docker-compose.yml specs mentioned above where the Nginx service is a global service. 
This is how it can be converted to Kubernetes YAML using Kompose:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/zo3gj38.png\" alt=\"Imgur\" \/><\/p>\n<p>Now configure the Rancher CLI against your Kubernetes cluster and deploy the generated *-daemonset.yaml file.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/PR3m49c.png\" alt=\"Imgur\" \/><\/p>\n<p>As shown above, my Kubernetes cluster has two worker nodes where workloads can be scheduled, and deploying the global-daemonset.yaml started two pods for the DaemonSet, one on each node.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this article, we reviewed how the various scheduling functionalities of Rancher 1.6 can be migrated to Rancher 2.0. Most of the scheduling techniques have equivalent options available in Rancher 2.0, or they can be achieved via native Kubernetes constructs.<\/p>\n<p>In an upcoming article, I will explore how service discovery support in Cattle can be replicated in a Rancher 2.0 setup &#8211; stay tuned!<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/rancher.com\/img\/bio\/prachi-damle.jpg\" alt=\"Prachi Damle\" width=\"100\" height=\"100\" \/><\/p>\n<p>Prachi Damle<\/p>\n<p>Principal Software Engineer<\/p>\n<p><a href=\"https:\/\/rancher.com\/blog\/2018\/2018-08-29-scheduling-options-in-2-dot-0\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>An important and complex aspect of container orchestration is scheduling the application containers. Appropriate placement of containers onto the available shared infrastructure resources is the key to achieving maximum performance at optimal compute resource usage. 
Cattle, which is the default orchestration engine for Rancher 1.6, provided various scheduling abilities to effectively place services, &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/from-cattle-to-k8s-scheduling-workloads-in-rancher-2-0\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;From Cattle to K8s &#8211; Scheduling Workloads in Rancher 2.0&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-376","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/376","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=376"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/376\/revisions"}],"predecessor-version":[{"id":523,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/376\/revisions\/523"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=376"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=376"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/t
ags?post=376"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}