{"id":310,"date":"2018-10-16T08:52:14","date_gmt":"2018-10-16T08:52:14","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=310"},"modified":"2018-10-16T21:03:03","modified_gmt":"2018-10-16T21:03:03","slug":"using-rke-to-deploy-a-kubernetes-cluster-on-exoscale","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/using-rke-to-deploy-a-kubernetes-cluster-on-exoscale\/","title":{"rendered":"Using RKE to Deploy a Kubernetes Cluster on Exoscale"},"content":{"rendered":"<h5>How to Manage Workloads on Kubernetes with Rancher 2.0 Online Meetup<\/h5>\n<p>Learn how to deploy Kubernetes applications in Rancher 2.0, and use the monitoring, logging and pipeline features.<\/p>\n<p><a href=\"https:\/\/rancher.com\/events\/2018\/april-online-meetup-managing-workloads-on-kubernetes-with-rancher-2-0\/\" target=\"blank\">Watch the video<\/a><\/p>\n<h2>Introduction<\/h2>\n<p>One of the biggest challenges with Kubernetes is bringing up a cluster for the<br \/>\nfirst time. There have been several projects that attempt to address this gap<br \/>\nincluding <a href=\"https:\/\/github.com\/skippbox\/kmachine\">kmachine<\/a> and<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/minikube\">minikube<\/a>. However, both assume<br \/>\nyou are starting from a blank slate. What if you have your instances provisioned,<br \/>\nor have another means of provisioning hosts with configuration management?<\/p>\n<p>This is the problem that the people at Rancher are solving with the<br \/>\nRancher Kubernetes Engine or <a href=\"https:\/\/rancher.com\/an-introduction-to-rke\/\">RKE<\/a>.<br \/>\nAt the core is a YAML based configuration file that is used to define the hosts<br \/>\nand make-up for your Kubernetes cluster. RKE will do the heavy lifting of<br \/>\ninstalling the required Kubernetes components, and configuring them for your<br \/>\ncluster.<\/p>\n<p>What this tutorial will show you is how to quickly set up a few virtual systems<br \/>\nthat can host our Kubernetes cluster, bring up Kubernetes<br \/>\nusing RKE, and lastly setup a sample application hosted inside Kubernetes.<\/p>\n<blockquote><p>NOTE: This example will bring up a bare minimum cluster with a sample<br \/>\napplication for experimentation, and in no way should be considered<br \/>\na production ready deployment.<\/p><\/blockquote>\n<h2>Prerequisites and Setup<\/h2>\n<p>For our virtual cluster, we will be using <a href=\"https:\/\/www.exoscale.com\">Exoscale<\/a><br \/>\nas they will allow us to get up and running with minimal effort. There are three<br \/>\nbinaries that you will need to install to fully utilize this guide. While this<br \/>\nguide is written assuming Linux, the binaries are available for Linux, Windows,<br \/>\nand MacOS.<\/p>\n<ol>\n<li><a href=\"https:\/\/github.com\/exoscale\/egoscale\/releases\">Exoscale CLI<\/a> &#8211; Required to setup the environment and manage our systems<\/li>\n<li><a href=\"https:\/\/github.com\/rancher\/rke\/releases\">RKE CLI<\/a> &#8211; Required to provision Kubernetes<\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\">kubectl<\/a> &#8211; Required to manage our new Kubernetes cluster<\/li>\n<\/ol>\n<p>In addition, you will need an <a href=\"https:\/\/www.exoscale.com\/signup\/\">Exoscale account<\/a><br \/>\nsince it will be used to setup the Exoscale CLI.<\/p>\n<h3>Configure Exoscale CLI<\/h3>\n<p>Once you have your Exoscale account set-up, you need to configure the Exoscale<br \/>\nclient. 
### Configure Exoscale CLI

Once you have your Exoscale account set up, you need to configure the Exoscale client. Assuming you are in the same directory containing the program, run:

```
$ ./exo config

Hi happy Exoscalian, some configuration is required to use exo.

We now need some very important information, find them there.
        <https://portal.exoscale.com/account/profile/api>

[+] API Key [none]: EXO························
[+] Secret Key [none]: ··································
[...]
```

> NOTE: When you go to the [API profile page](https://portal.exoscale.com/account/profile/api) you will see the API Key and Secret. Be sure to copy and paste both at the prompts.

## Provisioning the Kubernetes Environment with the Exoscale CLI

Now that we have configured the Exoscale CLI, we need to prepare the Exoscale cloud environment. This involves setting up a firewall rule that will be inherited by the instances that become the Kubernetes cluster, plus an optional step of creating and adding your SSH public key.

### Defining the firewall rules

The firewall, or security group, we create must expose at least three ports: 22 for SSH access, and 6443 and 10240 for kubectl and rke to bring up and manage the cluster. Lastly, we need to grant the security group access to itself so the instances can communicate amongst themselves.

The first step is to create the firewall, or *security group*:

```
$ ./exo firewall create rke-k8s -d "RKE k8s SG"
┼──────────┼──────────────┼──────────────────────────────────────┼
│ NAME     │ DESCRIPTION  │ ID                                   │
┼──────────┼──────────────┼──────────────────────────────────────┼
│ rke-k8s  │ RKE K8S SG   │ 01a3b13f-a312-449c-a4ce-4c0c68bda457 │
┼──────────┼──────────────┼──────────────────────────────────────┼
```

The next step is to add the rules (command output omitted):

```
$ ./exo firewall add rke-k8s -p ALL -s rke-k8s
$ ./exo firewall add rke-k8s -p tcp -P 6443 -c 0.0.0.0/0
$ ./exo firewall add rke-k8s -p tcp -P 10240 -c 0.0.0.0/0
$ ./exo firewall add rke-k8s ssh
```
You can confirm the results by invoking `exo firewall show`:

```
$ ./exo firewall show rke-k8s
┼─────────┼────────────────┼──────────┼──────────┼─────────────┼──────────────────────────────────────┼
│ TYPE    │ SOURCE         │ PROTOCOL │ PORT     │ DESCRIPTION │ ID                                   │
┼─────────┼────────────────┼──────────┼──────────┼─────────────┼──────────────────────────────────────┼
│ INGRESS │ CIDR 0.0.0.0/0 │ tcp      │ 22 (ssh) │             │ 40d82512-2196-4d94-bc3e-69b259438c57 │
│         │ CIDR 0.0.0.0/0 │ tcp      │ 10240    │             │ 12ceea53-3a0f-44af-8d28-3672307029a5 │
│         │ CIDR 0.0.0.0/0 │ tcp      │ 6443     │             │ 18aa83f3-f996-4032-87ef-6a06220ce850 │
│         │ SG             │ all      │ 0        │             │ 7de233ad-e900-42fb-8d93-05631bcf2a70 │
┼─────────┼────────────────┼──────────┼──────────┼─────────────┼──────────────────────────────────────┼
```
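If you expect to rebuild this environment more than once, the firewall setup above is easy to keep as a small script. This sketch only repeats the commands already shown, annotated with what each rule is for:

```
#!/bin/sh
# Create the security group and the rules the cluster needs.
./exo firewall create rke-k8s -d "RKE k8s SG"
./exo firewall add rke-k8s -p ALL -s rke-k8s             # unrestricted traffic between cluster members
./exo firewall add rke-k8s -p tcp -P 6443 -c 0.0.0.0/0   # Kubernetes API server (kubectl)
./exo firewall add rke-k8s -p tcp -P 10240 -c 0.0.0.0/0  # used by rke while bringing up the cluster
./exo firewall add rke-k8s ssh                           # port 22
./exo firewall show rke-k8s                              # confirm the result
```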
### Optional: Creating and adding an SSH key

One of the nice things about the Exoscale CLI is that you can use it to create an SSH key for each instance you bring up. However, there are times when you will want a single administrative SSH key for the cluster. You can have the Exoscale CLI create it, or use the CLI to import your own key. To do that, you will use Exoscale's `sshkey` subcommand.

If you have a key you want to use:

```
$ ./exo sshkey upload [keyname] [ssh-public-key-path]
```

Or, if you'd like to create a unique key for this cluster:

```
$ ./exo sshkey create rke-k8s-key
┼─────────────┼─────────────────────────────────────────────────┼
│ NAME        │ FINGERPRINT                                     │
┼─────────────┼─────────────────────────────────────────────────┼
│ rke-k8s-key │ 0d:03:46:c6:b2:72:43:dd:dd:04:bc:8c:df:84:f4:d1 │
┼─────────────┼─────────────────────────────────────────────────┼
-----BEGIN RSA PRIVATE KEY-----
MIIC...
-----END RSA PRIVATE KEY-----

$
```

Save the contents of the RSA PRIVATE KEY section into a file, as it will be your sole means of accessing the cluster under that key name. In both cases, we need to make sure the ssh-agent daemon is running and that our key has been added to it. If you haven't done so already, run:

```
$ ssh-add [path-to-private-ssh-key]
```
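If you go the import route, the whole flow, from generating a key pair locally to registering its public half with Exoscale, looks roughly like this sketch. The key file path and comment are assumptions; the key name matches the `rke-k8s-key` example above:

```
# Generate a dedicated admin key pair with no passphrase
ssh-keygen -t rsa -b 4096 -f ~/.ssh/rke-k8s -N '' -C 'rke-k8s admin key'

# Make sure an agent is running and holds the private key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/rke-k8s

# Register the public key with Exoscale under the name rke-k8s-key
./exo sshkey upload rke-k8s-key ~/.ssh/rke-k8s.pub
```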
### Creating your Exoscale instances

At this point we are ready to create the instances. We will use the medium-sized service offering, as it provides enough RAM for both Kubernetes and our sample application to run. The OS image we will use is Ubuntu 16.04, due to the version of Docker required by RKE to bring up our cluster. Lastly, we use 10 GB of disk space, which is enough to experiment with.

> NOTE: If you go with an instance size smaller than medium, you will not have enough RAM to bootstrap the Kubernetes cluster.

**Step 1: Create the instance configuration script**

To automate the instance configuration, we will use [cloud-init](https://cloud-init.io/). This is as easy as creating a YAML file that describes our actions and specifying that file on the Exoscale command line:

```
#cloud-config

manage_etc_hosts: true

package_update: true
package_upgrade: true

packages:
  - curl

runcmd:
  - "curl https://releases.rancher.com/install-docker/17.03.sh | bash"
  - "usermod -aG docker ubuntu"
  - "mkdir /data"

power_state:
  mode: reboot
```

Copy and paste the block of text above into a new file called `cloud-init.yml`.

**Step 2: Create the instances**

Next, we are going to create four instances:

```
$ for i in 1 2 3 4; do
    ./exo vm create rancher-$i \
      --cloud-init-file cloud-init.yml \
      --service-offering medium \
      --template "Ubuntu 16.04 LTS" \
      --security-group rke-k8s \
      --disk 10
  done
Creating private SSH key
Deploying "rancher-1" ............. success!

What to do now?

1. Connect to the machine

> exo ssh rancher-1
ssh -i "/home/cab/.exoscale/instances/85fc654f-5761-4a02-b501-664ae53c671d/id_rsa" ubuntu@185.19.29.207

2. Put the SSH configuration into ".ssh/config"

> exo ssh rancher-1 --info
Host rancher-1
  HostName 185.19.29.207
  User ubuntu
  IdentityFile /home/cab/.exoscale/instances/85fc654f-5761-4a02-b501-664ae53c671d/id_rsa

Tip of the day:
  You're the sole owner of the private key.
  Be cautious with it.
```

> NOTE: If you created or uploaded an SSH keypair, you can add the `--keypair <common key>` argument, where common key is the name of the key you chose to upload.
>
> NOTE 2: Save the hostname and IP address of each instance. You will need these for the RKE set-up.

After waiting several minutes (about 5 to be safe), you will have four brand-new instances configured with Docker and ready to go.
A sample configuration will resemble the following when you run `./exo vm list`:

```
┼───────────┼────────────────┼─────────────────┼─────────┼──────────┼──────────────────────────────────────┼
│ NAME      │ SECURITY GROUP │ IP ADDRESS      │ STATUS  │ ZONE     │ ID                                   │
┼───────────┼────────────────┼─────────────────┼─────────┼──────────┼──────────────────────────────────────┼
│ rancher-4 │ rke-k8s        │ 159.100.240.102 │ Running │ ch-gva-2 │ acb53efb-95d1-48e7-ac26-aaa9b35c305f │
│ rancher-3 │ rke-k8s        │ 159.100.240.9   │ Running │ ch-gva-2 │ 6b7707bd-9905-4547-a7d4-3fd3fdd83ac0 │
│ rancher-2 │ rke-k8s        │ 185.19.30.203   │ Running │ ch-gva-2 │ c99168a0-46db-4f75-bd0b-68704d1c7f79 │
│ rancher-1 │ rke-k8s        │ 185.19.30.83    │ Running │ ch-gva-2 │ 50605a5d-b5b6-481c-bb34-1f7ee9e1bde8 │
┼───────────┼────────────────┼─────────────────┼─────────┼──────────┼──────────────────────────────────────┼
```
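Before handing the nodes to RKE, it is worth confirming that cloud-init finished and Docker is running on each one. A quick sketch over plain ssh; the IP addresses are the ones from the sample listing above, so substitute your own:

```
# Print the Docker daemon version on every node; each should report 17.03.x
for ip in 185.19.30.83 185.19.30.203 159.100.240.9 159.100.240.102; do
  echo "== $ip =="
  ssh ubuntu@"$ip" 'docker version --format "{{.Server.Version}}"'
done
```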
## RKE and Kubernetes

The Rancher Kubernetes Engine command is used to bring up, tear down, and back up the configuration for a Kubernetes cluster. At its core is a configuration file named `cluster.yml`. While RKE supports creating this configuration file with the command `rke config`, it can be tedious to go through the prompts. Instead, we will pre-create the config file.

The file below is a sample that can be saved and modified as `cluster.yml`:

```
---
ssh_key_path: [path to ssh private key]
ssh_agent_auth: true

cluster_name: rke-k8s

nodes:
  - address: [ip address of rancher-1]
    name: rancher-1
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
  - address: [ip address of rancher-2]
    name: rancher-2
    user: ubuntu
    role:
      - worker
  - address: [ip address of rancher-3]
    name: rancher-3
    user: ubuntu
    role:
      - worker
  - address: [ip address of rancher-4]
    name: rancher-4
    user: ubuntu
    role:
      - worker
```

Things you will need to modify:

- `ssh_key_path`: If you uploaded or created a common public SSH key, change this path to point to your private key. Otherwise, move the `ssh_key_path` line inside each node entry and change the path to match the key generated for each instance that was created.
- `address`: Change these to the IP addresses you saved in the previous step.
- `cluster_name`: This should match your firewall/security group name.

Once you have saved your updated `cluster.yml`, `$ ./rke up` is all you need to bring up the Kubernetes cluster. There will be a flurry of status updates as the Docker containers for the various Kubernetes components are downloaded to each node, installed, and configured.

If everything goes well, you will see the following when RKE finishes:

```
$ rke up

...

INFO[0099] Finished building Kubernetes cluster successfully
```

Congratulations, you have just brought up a Kubernetes cluster!

### Configuring and using kubectl

One of the artifacts RKE creates is the Kubernetes configuration file `kube_config_cluster.yml`, which is used by kubectl to communicate with the cluster. To make running kubectl easier going forward, set the `KUBECONFIG` environment variable so you don't need to pass the config parameter each time:

```
export KUBECONFIG=/path/to/kube_config_cluster.yml
```
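Without `KUBECONFIG` set, the same thing can be done per command with the `--kubeconfig` flag; a small sketch, assuming you run it from the directory where RKE wrote the file:

```
./kubectl --kubeconfig kube_config_cluster.yml get nodes
```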
Here are a few sample status commands. The first gives you a listing of all registered nodes, their roles as defined in the `cluster.yml` file above, and the Kubernetes version each node is running.

```
$ ./kubectl get nodes
NAME              STATUS    ROLES                      AGE       VERSION
159.100.240.102   Ready     worker                     3m        v1.11.1
159.100.240.9     Ready     worker                     3m        v1.11.1
185.19.30.203     Ready     worker                     3m        v1.11.1
185.19.30.83      Ready     controlplane,etcd,worker   3m        v1.11.1
```

The second command gives you the cluster status.

```
$ ./kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```

> NOTE: For more information about RKE and the cluster configuration file, you can visit Rancher's [documentation page](https://rancher.com/docs/rke/v0.1.x/en/).

### Optional: Installing the Kubernetes Dashboard

To make it easier to collect status information on the Kubernetes cluster, we will install the Kubernetes dashboard:

```
$ ./kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
```

At this point the dashboard is installed and running, but the only way to access it is from inside the Kubernetes cluster. To expose the dashboard port onto your workstation so that you can interact with it, proxy the port by running:

```
$ ./kubectl proxy
```

Now you can visit the dashboard at `http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/`.

However, to make full use of the dashboard, you will need to authenticate your session. This requires a token from a service account whose secrets were generated when we ran `rke up`. The following command extracts a token you can use to authenticate against the dashboard:

```
$ ./kubectl -n kube-system describe secrets \
    `./kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` \
    | awk '/token:/ {print $2}'
```

Copy and paste the long string that is returned into the authentication prompt on the dashboard webpage, and explore the details of your cluster.
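The same token can also be pulled without awk by asking kubectl for the field directly. A sketch, assuming the `clusterrole-aggregation-controller` service account referenced above; the token is stored base64-encoded in its secret, hence the decode step:

```
# Find the name of the service account's token secret, then decode the token
SECRET=$(./kubectl -n kube-system get serviceaccount clusterrole-aggregation-controller \
    -o jsonpath='{.secrets[0].name}')
./kubectl -n kube-system get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
```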
![Kubernetes Dashboard Example](https://i.imgur.com/aZZ9iow.png)

## Adding Cassandra to Kubernetes

Now for the fun part. We are going to bring up [Cassandra](https://cassandra.apache.org). This will be a simple cluster that uses the local disks for storage. It will give us something to play with when installing a service and seeing what happens inside Kubernetes.

To install Cassandra, we need to specify a service configuration that will be exposed by Kubernetes, and an application definition file that specifies things like networking, storage configuration, number of replicas, and so on.

### Step 1: Cassandra Service File

First, we will start with the service file:

```
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra
```

Copy and save the service definition above as `cassandra-service.yml`, and load it:

```
$ ./kubectl create -f ./cassandra-service.yml
```

You should see it load successfully, and you can verify using kubectl:

```
$ ./kubectl get svc cassandra
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         <none>        9042/TCP   46s
```

> NOTE: For more details on Service configurations, you can read more in the [Kubernetes Service Networking Guide](https://kubernetes.io/docs/concepts/services-networking/service/).

### Cassandra StatefulSet

The [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) is a type of Kubernetes workload where the application is expected to persist some kind of state, as our Cassandra example does.

We will download the configuration from XXXXX/cassandra-statefulset.yaml and apply it:

```
$ ./kubectl create -f https://XXXX/cassandra-statefulset.yaml
statefulset.apps/cassandra created
storageclass.storage.k8s.io/fast created
```

You can check the state of the StatefulSet we are loading:

```
$ ./kubectl get statefulset
NAME        DESIRED   CURRENT   AGE
cassandra   3         3         4m
```

You can even interact with Cassandra inside its pod, for example to verify that Cassandra is up:

```
$ ./kubectl exec -it cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens   Owns (effective)   Host ID                                Rack
UN  10.42.2.2  104.55 KiB  32       59.1%              ac30ba66-bd59-4c8d-ab7b-525daeb85904   Rack1-K8Demo
UN  10.42.1.3  84.81 KiB   32       75.0%              92469b85-eeae-434f-a27d-aa003531cff7   Rack1-K8Demo
UN  10.42.3.3  70.88 KiB   32       65.9%              218a69d8-52f2-4086-892d-f2c3c56b05ae   Rack1-K8Demo
```

Now suppose we want to scale the number of replicas up from 3 to 4. To do that, run:

```
$ ./kubectl edit statefulset cassandra
```

This will open your default text editor. Scroll down to the replicas line, change the value from 3 to 4, then save and exit.
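If you'd rather not open an editor, the same change can be made non-interactively with the scale subcommand; a one-line sketch targeting the same four replicas:

```
./kubectl scale statefulset cassandra --replicas=4
```

Either way, the StatefulSet controller will bring up a fourth Cassandra pod.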
You should see the following with your next invocation of kubectl:

```
$ ./kubectl get statefulset
NAME        DESIRED   CURRENT   AGE
cassandra   4         4         10m
```

```
$ ./kubectl exec -it cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens   Owns (effective)   Host ID                                Rack
UN  10.42.2.2  104.55 KiB  32       51.0%              ac30ba66-bd59-4c8d-ab7b-525daeb85904   Rack1-K8Demo
UN  10.42.1.3  84.81 KiB   32       56.7%              92469b85-eeae-434f-a27d-aa003531cff7   Rack1-K8Demo
UN  10.42.3.3  70.88 KiB   32       47.2%              218a69d8-52f2-4086-892d-f2c3c56b05ae   Rack1-K8Demo
UN  10.42.0.6  65.86 KiB   32       45.2%              275a5bca-94f4-439d-900f-4d614ba331ee   Rack1-K8Demo
```

![Looking at the Kubernetes Dashboard](https://i.imgur.com/uWHbSbu.png)

## Final Note

One final note about cluster scaling and the StatefulSet workload: Kubernetes makes it easy to scale your cluster up to account for load, but to ensure data is preserved, it keeps all data in place when you scale the number of nodes back down. This means you are responsible for making proper backups and deleting the leftover data before the application can be considered fully cleaned up.

Gists with the YAML configurations used in this tutorial are available alongside the original post linked below.

![Chris Baumbauer](https://rancher.com/img/bio/chris-baumbauer.jpg)

**Chris Baumbauer**, Software Engineer ([GitHub](https://github.com/cab105))

Chris Baumbauer is a freelance engineer who has dabbled in every piece of the stack, from operating systems to mobile and web development, with recent projects focused on Kubernetes such as Kompose and Kmachine.

[Source](https://rancher.com/blog/2018/2018-09-17-rke-k8s-cluster-exoscale/)