{"id":273,"date":"2018-10-16T05:46:29","date_gmt":"2018-10-16T05:46:29","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=273"},"modified":"2018-10-16T05:47:36","modified_gmt":"2018-10-16T05:47:36","slug":"setup-a-basic-kubernetes-cluster-with-ease-using-rke","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/setup-a-basic-kubernetes-cluster-with-ease-using-rke\/","title":{"rendered":"Setup a basic Kubernetes cluster with ease using RKE"},"content":{"rendered":"<h5>Expert Training in Kubernetes and Rancher<\/h5>\n<p>Join our free online training sessions to learn how to manage Kubernetes workloads with Rancher.<\/p>\n<p><a href=\"https:\/\/rancher.com\/training\/\" target=\"_blank\">Sign up here<\/a><\/p>\n<p>In this post, you will go from 3 Ubuntu 16.04 nodes to a basic Kubernetes cluster in a few simple steps. To accomplish this, you will be using Rancher Kubernetes Engine (RKE). To be able to use RKE, you will need 3 Linux nodes with Docker installed (see Requirements below).<\/p>\n<p>This won\u2019t be a production-ready cluster, but it is enough to get you familiar with RKE and some Kubernetes, and to play around with the cluster. Keep an eye out for a follow-up post on building a production-ready cluster.<\/p>\n<h3>Requirements<\/h3>\n<ul>\n<li>RKE<\/li>\n<\/ul>\n<p>You will be using RKE from your workstation. 
Download the latest version for your platform at:<br \/>\n<a href=\"https:\/\/github.com\/rancher\/rke\/releases\/latest\">https:\/\/github.com\/rancher\/rke\/releases\/latest<\/a><\/p>\n<ul>\n<li>kubectl<\/li>\n<\/ul>\n<p>After creating the cluster, we will use the default Kubernetes command-line tool, kubectl, to interact with the cluster.<br \/>\nGet the latest version for your platform at:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/\">https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/<\/a><\/p>\n<ul>\n<li>3 Ubuntu 16.04 nodes with 2 (v)CPUs, 4GB of memory, and swap disabled<\/li>\n<\/ul>\n<p>The most commonly used Linux distribution is Ubuntu 16.04, so that is what will be used in this post. Make sure swap is disabled by running swapoff -a and removing any swap entry from \/etc\/fstab. You must be able to access the nodes using SSH. As this is a multi-node cluster, <a href=\"https:\/\/rancher.com\/docs\/rke\/v0.1.x\/en\/os\/#ports\">the required ports<\/a> need to be opened before proceeding.<\/p>\n<ul>\n<li>Docker installed on each Linux node<\/li>\n<\/ul>\n<p>Kubernetes only validates Docker up to version 17.03.2 (see <a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/CHANGELOG-1.11.md#external-dependencies\">https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/CHANGELOG-1.11.md#external-dependencies<\/a>).<br \/>\nYou can follow <a href=\"https:\/\/docs.docker.com\/install\/linux\/docker-ce\/ubuntu\/\">https:\/\/docs.docker.com\/install\/linux\/docker-ce\/ubuntu\/<\/a> to install Docker (make sure you install 17.03.2), or use this one-liner to install the correct version:<br \/>\ncurl <a href=\"https:\/\/releases.rancher.com\/install-docker\/17.03.sh\">https:\/\/releases.rancher.com\/install-docker\/17.03.sh<\/a> | sh<\/p>\n<p>Make sure the requirements listed above are fulfilled before you proceed.<\/p>\n<h3>How RKE works<\/h3>\n<p>RKE can be run from any platform (the binary is available for 
MacOS\/Linux\/Windows); in this example it will run on your workstation\/laptop\/computer. The examples in this post are based on MacOS\/Linux.<\/p>\n<p>RKE will connect to the nodes using a configured SSH private key (the nodes should have the matching SSH public key installed for the SSH user) and set up a tunnel to access the Docker socket (\/var\/run\/docker.sock by default, but configurable). This means that the configured SSH user must have access to the Docker socket on the machine; we will go over this in Creating the Linux user account.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/2018-09-26-setup-k8s-rkeworks.png\" alt=\"\" \/><\/p>\n<h3>Creating the Linux user account<\/h3>\n<p><em>Note: Make sure Docker is installed following the instructions in the Requirements section above.<\/em><\/p>\n<p>The following steps need to be executed on every node. If you need to use sudo, prefix each command with sudo. If you already have a user that can access the machine using an SSH key and can access the Docker socket, you can skip this step.<\/p>\n<p># Login to the node (replace hostname with the IP\/hostname of the node)<br \/>\n$ ssh user@hostname<br \/>\n# Create a Linux user called rke, create home directory, and add to docker group<br \/>\n$ useradd -m -G docker rke<br \/>\n# Switch user to rke and create SSH directories<br \/>\n$ su - rke<br \/>\n$ mkdir $HOME\/.ssh<br \/>\n$ chmod 700 $HOME\/.ssh<br \/>\n$ touch $HOME\/.ssh\/authorized_keys<br \/>\n# Test Docker socket access<br \/>\n$ docker version<br \/>\nClient:<br \/>\nVersion: 17.03.2-ce<br \/>\nAPI version: 1.27<br \/>\nGo version: go1.7.5<br \/>\nGit commit: f5ec1e2<br \/>\nBuilt: Tue Jun 27 03:35:14 2017<br \/>\nOS\/Arch: linux\/amd64<\/p>\n<p>Server:<br \/>\nVersion: 17.03.2-ce<br \/>\nAPI version: 1.27 (minimum version 1.12)<br \/>\nGo version: go1.7.5<br \/>\nGit commit: f5ec1e2<br \/>\nBuilt: Tue Jun 27 03:35:14 2017<br \/>\nOS\/Arch: linux\/amd64<br \/>\nExperimental: false<\/p>\n<h3>Configuring SSH keys<\/h3>\n<p>In this post we will create new keys, but feel free to use your existing keys. Just make sure you specify them correctly when we configure the keys in RKE.<\/p>\n<p><em>Note: If you want to use SSH keys with a passphrase, you will need to have ssh-agent running with the key added, and specify --ssh-agent-auth when running RKE.<\/em><\/p>\n<p>Creating SSH key pair<br \/>\nCreating an SSH key pair can be done using ssh-keygen; you can execute this on your workstation\/laptop\/computer. It is highly recommended to put a passphrase on your SSH private key: if you lose a private key that has no passphrase, anyone who obtains it can use it to access your nodes.<\/p>\n<p>$ ssh-keygen<br \/>\nGenerating public\/private rsa key pair.<br \/>\nEnter file in which to save the key ($HOME\/.ssh\/id_rsa):<br \/>\nEnter passphrase (empty for no passphrase):<br \/>\nEnter same passphrase again:<br \/>\nYour identification has been saved in $HOME\/.ssh\/id_rsa.<br \/>\nYour public key has been saved in $HOME\/.ssh\/id_rsa.pub.<br \/>\nThe key fingerprint is:<br \/>\nxxx<\/p>\n<p>After creating the SSH key pair, you should have the following files:<\/p>\n<ul>\n<li>$HOME\/.ssh\/id_rsa (SSH private key, keep this secure)<\/li>\n<li>$HOME\/.ssh\/id_rsa.pub (SSH public key)<\/li>\n<\/ul>\n<p>Copy the SSH public key to the nodes<br \/>\nTo be able to access the nodes using the created SSH key pair, you will need to install the SSH public key onto the nodes.<\/p>\n<p>Execute this for every node (where hostname is the IP\/hostname of the node):<\/p>\n<p># Install the SSH public key on the node<br \/>\n$ cat $HOME\/.ssh\/id_rsa.pub | ssh hostname \"sudo tee -a \/home\/rke\/.ssh\/authorized_keys\"<\/p>\n<p><em>Note: This post is demonstrating how you create a separate user for RKE. 
Because of this, we can\u2019t use ssh-copy-id, as it only works for installing keys to the same user as is used for the SSH connection.<\/em><\/p>\n<p>Set up ssh-agent<\/p>\n<p><em>Note: If you chose not to put a passphrase on your SSH private key, you can skip this step.<\/em><\/p>\n<p>This needs to be executed on your workstation\/laptop\/computer:<\/p>\n<p># Run ssh-agent and configure the correct environment variables<br \/>\n$ eval $(ssh-agent)<br \/>\nAgent pid 5151<br \/>\n# Add the private key to the ssh-agent<br \/>\n$ ssh-add $HOME\/.ssh\/id_rsa<br \/>\nIdentity added: $HOME\/.ssh\/id_rsa ($HOME\/.ssh\/id_rsa)<\/p>\n<p>Test SSH connectivity<br \/>\nThe last step is to test whether we can access the node using the SSH private key. This needs to be executed on your workstation\/laptop\/computer (replacing hostname with each node\u2019s IP\/hostname):<\/p>\n<p>$ ssh -i $HOME\/.ssh\/id_rsa rke@hostname docker version<br \/>\nClient:<br \/>\nVersion: 17.03.2-ce<br \/>\nAPI version: 1.27<br \/>\nGo version: go1.7.5<br \/>\nGit commit: f5ec1e2<br \/>\nBuilt: Tue Jun 27 03:35:14 2017<br \/>\nOS\/Arch: linux\/amd64<\/p>\n<p>Server:<br \/>\nVersion: 17.03.2-ce<br \/>\nAPI version: 1.27 (minimum version 1.12)<br \/>\nGo version: go1.7.5<br \/>\nGit commit: f5ec1e2<br \/>\nBuilt: Tue Jun 27 03:35:14 2017<br \/>\nOS\/Arch: linux\/amd64<br \/>\nExperimental: false<\/p>\n<h3>Configuring and running RKE<\/h3>\n<p>Get RKE for your platform at:<br \/>\n<a href=\"https:\/\/github.com\/rancher\/rke\/releases\/latest\">https:\/\/github.com\/rancher\/rke\/releases\/latest<\/a>.<\/p>\n<p>RKE will run on your workstation\/laptop\/computer.<\/p>\n<p>For this post I\u2019ve renamed the RKE binary to rke, to make the commands generic for each platform. 
You can do the same, and verify that RKE can be successfully executed, by running the following commands:<\/p>\n<p># Download RKE for MacOS (Darwin)<br \/>\n$ wget https:\/\/github.com\/rancher\/rke\/releases\/download\/v0.1.9\/rke_darwin-amd64<br \/>\n# Rename binary to rke<br \/>\n$ mv rke_darwin-amd64 rke<br \/>\n# Make RKE binary executable<br \/>\n$ chmod +x rke<br \/>\n# Show RKE version<br \/>\n$ .\/rke --version<br \/>\nrke version v0.1.9<\/p>\n<p>The next step is to create a cluster configuration file (by default it will be cluster.yml). This contains all the information needed to build the Kubernetes cluster, like node connection info and which roles to apply to which node. All <a href=\"https:\/\/rancher.com\/docs\/rke\/v0.1.x\/en\/config-options\/\">configuration options<\/a> can be found in the documentation. You can create the cluster configuration file by running .\/rke config and answering the questions. For this post, you will create a 3 node cluster with every role on each node (answer y for every role), and we will add the Kubernetes Dashboard as an addon (using <a href=\"https:\/\/raw.githubusercontent.com\/kubernetes\/dashboard\/master\/src\/deploy\/recommended\/kubernetes-dashboard.yaml\">https:\/\/raw.githubusercontent.com\/kubernetes\/dashboard\/master\/src\/deploy\/recommended\/kubernetes-dashboard.yaml<\/a>). 
To access the Kubernetes Dashboard, you need a Service Account token, which will be created by adding <a href=\"https:\/\/gist.githubusercontent.com\/superseb\/499f2caa2637c404af41cfb7e5f4a938\/raw\/930841ac00653fdff8beca61dab9a20bb8983782\/k8s-dashboard-user.yml\">https:\/\/gist.githubusercontent.com\/superseb\/499f2caa2637c404af41cfb7e5f4a938\/raw\/930841ac00653fdff8beca61dab9a20bb8983782\/k8s-dashboard-user.yml<\/a> to the addons.<\/p>\n<p>Regarding answering the questions to create the cluster configuration file:<\/p>\n<ul>\n<li>The values in brackets, for instance [22] for SSH Port, are defaults and can be accepted by pressing the Enter key.<\/li>\n<li>The default SSH private key will do; if you use another key, change it accordingly.\n<p>$ .\/rke config<br \/>\n[+] Cluster Level SSH Private Key Path [~\/.ssh\/id_rsa]: ~\/.ssh\/id_rsa<br \/>\n[+] Number of Hosts [1]: 3<br \/>\n[+] SSH Address of host (1) [none]: ip_or_dns_host1<br \/>\n[+] SSH Port of host (1) [22]:<br \/>\n[+] SSH Private Key Path of host (ip_or_dns_host1) [none]:<br \/>\n[-] You have entered empty SSH key path, trying fetch from SSH key parameter<br \/>\n[+] SSH Private Key of host (ip_or_dns_host1) [none]:<br \/>\n[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~\/.ssh\/id_rsa<br \/>\n[+] SSH User of host (ip_or_dns_host1) [ubuntu]: rke<br \/>\n[+] Is host (ip_or_dns_host1) a Control Plane host (y\/n)? [y]: y<br \/>\n[+] Is host (ip_or_dns_host1) a Worker host (y\/n)? [n]: y<br \/>\n[+] Is host (ip_or_dns_host1) an etcd host (y\/n)? 
[n]: y<br \/>\n[+] Override Hostname of host (ip_or_dns_host1) [none]:<br \/>\n[+] Internal IP of host (ip_or_dns_host1) [none]:<br \/>\n[+] Docker socket path on host (ip_or_dns_host1) [\/var\/run\/docker.sock]:<br \/>\n[+] SSH Address of host (2) [none]: ip_or_dns_host2<br \/>\n[+] SSH Port of host (2) [22]:<br \/>\n[+] SSH Private Key Path of host (ip_or_dns_host2) [none]:<br \/>\n[-] You have entered empty SSH key path, trying fetch from SSH key parameter<br \/>\n[+] SSH Private Key of host (ip_or_dns_host2) [none]:<br \/>\n[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~\/.ssh\/id_rsa<br \/>\n[+] SSH User of host (ip_or_dns_host2) [ubuntu]: rke<br \/>\n[+] Is host (ip_or_dns_host2) a Control Plane host (y\/n)? [y]: y<br \/>\n[+] Is host (ip_or_dns_host2) a Worker host (y\/n)? [n]: y<br \/>\n[+] Is host (ip_or_dns_host2) an etcd host (y\/n)? [n]: y<br \/>\n[+] Override Hostname of host (ip_or_dns_host2) [none]:<br \/>\n[+] Internal IP of host (ip_or_dns_host2) [none]:<br \/>\n[+] Docker socket path on host (ip_or_dns_host2) [\/var\/run\/docker.sock]:<br \/>\n[+] SSH Address of host (3) [none]: ip_or_dns_host3<br \/>\n[+] SSH Port of host (3) [22]:<br \/>\n[+] SSH Private Key Path of host (ip_or_dns_host3) [none]:<br \/>\n[-] You have entered empty SSH key path, trying fetch from SSH key parameter<br \/>\n[+] SSH Private Key of host (ip_or_dns_host3) [none]:<br \/>\n[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~\/.ssh\/id_rsa<br \/>\n[+] SSH User of host (ip_or_dns_host3) [ubuntu]: rke<br \/>\n[+] Is host (ip_or_dns_host3) a Control Plane host (y\/n)? [y]: y<br \/>\n[+] Is host (ip_or_dns_host3) a Worker host (y\/n)? [n]: y<br \/>\n[+] Is host (ip_or_dns_host3) an etcd host (y\/n)? 
[n]: y<br \/>\n[+] Override Hostname of host (ip_or_dns_host3) [none]:<br \/>\n[+] Internal IP of host (ip_or_dns_host3) [none]:<br \/>\n[+] Docker socket path on host (ip_or_dns_host3) [\/var\/run\/docker.sock]:<br \/>\n[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:<br \/>\n[+] Authentication Strategy [x509]:<br \/>\n[+] Authorization Mode (rbac, none) [rbac]:<br \/>\n[+] Kubernetes Docker image [rancher\/hyperkube:v1.11.1-rancher1]:<br \/>\n[+] Cluster domain [cluster.local]:<br \/>\n[+] Service Cluster IP Range [10.43.0.0\/16]:<br \/>\n[+] Enable PodSecurityPolicy [n]:<br \/>\n[+] Cluster Network CIDR [10.42.0.0\/16]:<br \/>\n[+] Cluster DNS Service IP [10.43.0.10]:<br \/>\n[+] Add addon manifest URLs or YAML files [no]: yes<br \/>\n[+] Enter the Path or URL for the manifest [none]: https:\/\/raw.githubusercontent.com\/kubernetes\/dashboard\/master\/src\/deploy\/recommended\/kubernetes-dashboard.yaml<br \/>\n[+] Add another addon [no]: yes<br \/>\n[+] Enter the Path or URL for the manifest [none]: https:\/\/gist.githubusercontent.com\/superseb\/499f2caa2637c404af41cfb7e5f4a938\/raw\/930841ac00653fdff8beca61dab9a20bb8983782\/k8s-dashboard-user.yml<br \/>\n[+] Add another addon [no]: no<\/li>\n<\/ul>\n<p>When the last question is answered, the cluster.yml file will be created in the same directory RKE was run from:<\/p>\n<p>$ ls -la cluster.yml<br \/>\n-rw-r----- 1 user user 3688 Sep 17 12:50 cluster.yml<\/p>\n<p>You are now ready to build your Kubernetes cluster. This can be done by running rke up. Before you run the command, make sure the <a href=\"https:\/\/rancher.com\/docs\/rke\/v0.1.x\/en\/os\/#ports\">required ports<\/a> are opened between your workstation\/laptop\/computer and the nodes, and between each of the nodes. 
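<\/p>\n<p>Before running rke up, it can help to probe whether a node\u2019s ports are reachable from your workstation. The sketch below is an assumption-laden helper: it only checks a subset of the required ports (SSH, etcd, the Kubernetes API server, and the kubelet), and 127.0.0.1 is a placeholder you should replace with your own node addresses; consult the RKE ports documentation for the full list.<\/p>

```shell
# Hedged sketch: probe a subset of the ports an RKE-built cluster needs.
# The port list is partial and the target host (127.0.0.1) is a
# placeholder; substitute your node IPs and check the RKE docs.
check_ports() {
  host="$1"
  for port in 22 2379 2380 6443 10250; do
    # /dev/tcp is a bash feature, so run the probe via bash explicitly.
    if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
      echo "${host}:${port} open"
    else
      echo "${host}:${port} closed"
    fi
  done
}

check_ports 127.0.0.1
```

A port that reports closed from your workstation may still be open between the nodes themselves; what matters is that the result matches the port matrix in the RKE documentation.
<p>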
You can now build your cluster using the following command:<\/p>\n<p>$ .\/rke up<br \/>\nINFO[0000] Building Kubernetes cluster<br \/>\n&#8230;<br \/>\nINFO[0151] Finished building Kubernetes cluster successfully<\/p>\n<p>If all went well, you will get a lot of output from the command, but it should end with Finished building Kubernetes cluster successfully. RKE will also write a kubeconfig file called kube_config_cluster.yml. You can use that file to connect to your Kubernetes cluster.<\/p>\n<h3>Exploring your Kubernetes cluster<\/h3>\n<p>Make sure you have kubectl installed; see <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/\">https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/<\/a> for how to get it for your platform.<\/p>\n<p><em>Note: When running kubectl, it automatically tries to use a kubeconfig from the default location, $HOME\/.kube\/config. In the examples, we explicitly specify the kubeconfig file using --kubeconfig kube_config_cluster.yml. If you don\u2019t want to specify the kubeconfig file every time, you can copy the file kube_config_cluster.yml to $HOME\/.kube\/config (you probably need to create the directory $HOME\/.kube first).<\/em><\/p>\n<p>Start with querying the server for its version:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml version<\/p>\n<p>Client Version: version.Info<br \/>\nServer Version: version.Info<\/p>\n<p>One of the first things to check is whether all nodes are in Ready state:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml get nodes<br \/>\nNAME STATUS ROLES AGE VERSION<br \/>\nhost1 Ready controlplane,etcd,worker 11m v1.11.1<br \/>\nhost2 Ready controlplane,etcd,worker 11m v1.11.1<br \/>\nhost3 Ready controlplane,etcd,worker 11m v1.11.1<\/p>\n<p>When you generated the cluster configuration file, you added the Kubernetes dashboard addon to be deployed on the cluster. 
You can check the status of the deployment using:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml get deploy -n kube-system -l k8s-app=kubernetes-dashboard<\/p>\n<p>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE<br \/>\nkubernetes-dashboard 1 1 1 1 17m<\/p>\n<p>By default, deployments are not exposed to the outside. If you want to visit the Kubernetes Dashboard in your browser, you will need to expose the deployment externally (which we will do for our demo application later) or use the built-in proxy functionality of kubectl. The proxy listens on 127.0.0.1:8001 (your local machine on port 8001) and tunnels requests to the Kubernetes cluster.<\/p>\n<p>Before you can visit the Kubernetes Dashboard, you need to retrieve the token to log in to the dashboard. By default, it runs under a very limited account and will not be able to show you all the resources in your cluster. The second addon we added when creating the cluster configuration file created the account and token we need (this is based upon <a href=\"https:\/\/github.com\/kubernetes\/dashboard\/wiki\/Creating-sample-user\">https:\/\/github.com\/kubernetes\/dashboard\/wiki\/Creating-sample-user<\/a>).<\/p>\n<p>You can retrieve the token by running:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml -n kube-system describe secret $(kubectl --kubeconfig kube_config_cluster.yml -n kube-system get secret | grep admin-user | awk '{ print $1 }') | grep ^token: | awk '{ print $2 }'<\/p>\n<p>eyJhbGciOiJSUzI1NiIs&#8230;.&lt;more_characters&gt;<\/p>\n<p>The string that is returned is the token you need to log in to the dashboard. 
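<\/p>\n<p>That one-liner is dense: the inner kubectl call lists the secrets in kube-system, the grep\/awk pipeline picks out the name of the admin-user secret, and the outer kubectl call describes that secret so the token line can be extracted. The name-picking step can be seen in isolation on sample output (the secret names below are made up for illustration):<\/p>

```shell
# Sample lines in the shape 'kubectl get secret' prints; the names are
# made-up placeholders, not real cluster output.
sample='admin-user-token-abcde   kubernetes.io/service-account-token   3   5m
default-token-xyz12      kubernetes.io/service-account-token   3   10m'

# Same extraction step as in the one-liner: keep the admin-user line,
# print its first column (the secret name).
echo "$sample" | grep admin-user | awk '{ print $1 }'
# prints: admin-user-token-abcde
```

<p>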
Copy the whole string.<\/p>\n<p>Set up the kubectl proxy as follows:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml proxy<\/p>\n<p>Starting to serve on 127.0.0.1:8001<\/p>\n<p>And open the following URL:<\/p>\n<p><a href=\"http:\/\/localhost:8001\/api\/v1\/namespaces\/kube-system\/services\/https:kubernetes-dashboard:\/proxy\/\">http:\/\/localhost:8001\/api\/v1\/namespaces\/kube-system\/services\/https:kubernetes-dashboard:\/proxy\/<\/a><\/p>\n<p>When prompted for login, choose Token, paste the token, and click Sign In.<\/p>\n<p><em>Note: If you don\u2019t get a login screen, open it manually by clicking Sign In at the top right.<\/em><\/p>\n<h3>Run a demo application<\/h3>\n<p>The last step of this post is running and exposing a demo application. For this example you will run superseb\/rancher-demo, a web UI that shows the scale of a deployment. It will be exposed using an Ingress, which is handled by the NGINX Ingress controller that is deployed by default. If you want to know more about Ingress, please see <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/<\/a><\/p>\n<p>Start by deploying and exposing the demo application (which runs on port 8080):<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml run --image=superseb\/rancher-demo rancher-demo --port 8080 --expose<br \/>\nservice\/rancher-demo created<br \/>\ndeployment.apps\/rancher-demo created<\/p>\n<p>Check the status of your deployment:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml rollout status deployment\/rancher-demo<br \/>\n&#8230;<br \/>\ndeployment \"rancher-demo\" successfully rolled out<\/p>\n<p>The command kubectl run is the easiest way to get a container running on your cluster. At minimum, it takes an image parameter to specify the Docker image, and a name. 
In this case, we also want to configure the port that this container exposes (internally), and expose it. Behind the scenes, a Deployment (and a ReplicaSet) was created with a scale of 1 (the default), and a Service was created to abstract access to the pods (which can contain one or more containers; in this case, 1). For more information on these subjects, check the following links:<\/p>\n<ul>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/deploy-app\/deploy-intro\/\">https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/deploy-app\/deploy-intro\/<\/a><\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/expose\/expose-intro\/\">https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/expose\/expose-intro\/<\/a><\/li>\n<\/ul>\n<p>RKE deploys the NGINX Ingress controller by default on every node. This opens up port 80 and port 443, which serve as the main entrypoint for any created Ingress. An Ingress can contain a single host or multiple hosts, multiple paths, and you can configure SSL certificates. In this post you will configure a basic Ingress, making our demo application accessible on a certain hostname. In the example we will use rancher-demo.domain.test as the hostname to access the demo application.<\/p>\n<p><em>Note: As rancher-demo.domain.test is not a valid DNS name, you have to add it to \/etc\/hosts to visit the UI. If you have access to your own domain, you can add a DNS A record pointing to each of the nodes.<\/em><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/blog\/2018\/2018-09-26-setup-k8s-ingress.png\" alt=\"\" \/><\/p>\n<p>The only part that is not created yet is the Ingress. Let\u2019s create an Ingress called rancher-demo-ingress, with a host specification matching requests to our test domain (rancher-demo.domain.test), pointing to our Service called rancher-demo on port 8080. 
Save the following content to a file called ingress.yml:<\/p>\n<p>apiVersion: extensions\/v1beta1<br \/>\nkind: Ingress<br \/>\nmetadata:<br \/>\n&nbsp;&nbsp;name: rancher-demo-ingress<br \/>\nspec:<br \/>\n&nbsp;&nbsp;rules:<br \/>\n&nbsp;&nbsp;- host: rancher-demo.domain.test<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;http:<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;paths:<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;- path: \/<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;backend:<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;serviceName: rancher-demo<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;servicePort: 8080<\/p>\n<p>Create this Ingress using kubectl:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml apply -f ingress.yml<br \/>\ningress.extensions\/rancher-demo-ingress created<\/p>\n<p>It is time to test accessing the demo application. You can try it on the command line first, instructing curl to resolve the test domain to each of the nodes:<\/p>\n<p># Get node IP addresses<br \/>\n$ kubectl --kubeconfig kube_config_cluster.yml get nodes<br \/>\nNAME STATUS ROLES AGE VERSION<br \/>\n10.0.0.1 Ready controlplane,etcd,worker 3h v1.11.1<br \/>\n10.0.0.2 Ready controlplane,etcd,worker 3h v1.11.1<br \/>\n10.0.0.3 Ready controlplane,etcd,worker 3h v1.11.1<br \/>\n# Test accessing the demo application<br \/>\n$ curl --resolve rancher-demo.domain.test:80:10.0.0.1 http:\/\/rancher-demo.domain.test\/ping<br \/>\n{\"instance\":\"rancher-demo-5cbfb4b4-thmbh\",\"version\":\"0.1\"}<br \/>\n$ curl --resolve rancher-demo.domain.test:80:10.0.0.2 http:\/\/rancher-demo.domain.test\/ping<br \/>\n{\"instance\":\"rancher-demo-5cbfb4b4-thmbh\",\"version\":\"0.1\"}<br \/>\n$ curl --resolve rancher-demo.domain.test:80:10.0.0.3 http:\/\/rancher-demo.domain.test\/ping<br \/>\n{\"instance\":\"rancher-demo-5cbfb4b4-thmbh\",\"version\":\"0.1\"}<\/p>\n<p>If you use the test domain, you will need to add it to your machine\u2019s \/etc\/hosts file 
to be able to reach it properly:<\/p>\n<p>$ echo \"10.0.0.1 rancher-demo.domain.test\" | sudo tee -a \/etc\/hosts<\/p>\n<p>Now visit <a href=\"http:\/\/rancher-demo.domain.test\">http:\/\/rancher-demo.domain.test<\/a> in your browser.<\/p>\n<p>If this has all worked out, you can fill up the demo application a bit more by scaling up your Deployment:<\/p>\n<p>$ kubectl --kubeconfig kube_config_cluster.yml scale deploy\/rancher-demo --replicas=10<br \/>\ndeployment.extensions\/rancher-demo scaled<\/p>\n<p><em>Note: Make sure to clean up the \/etc\/hosts entry when you are done.<\/em><\/p>\n<h3>Closing words<\/h3>\n<p>This started as a post about how to create a Kubernetes cluster in under 10 minutes, but along the way I tried to add some useful information about how certain parts work. To avoid a post that takes a day to read, other posts will describe certain parts in more detail. For now, I\u2019ve linked as many resources as possible to existing documentation where you can learn more.<\/p>\n<ul>\n<li><a href=\"https:\/\/rancher.com\/docs\/rke\/v0.1.x\/en\/\">RKE documentation<\/a><\/li>\n<li><a href=\"https:\/\/kubernetes.github.io\/ingress-nginx\/\">NGINX Ingress controller<\/a><\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/tutorials\/kubernetes-basics\/\">Learn Kubernetes basics<\/a><\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/rancher.com\/img\/bio\/sebastiaan-van-steenis.jpg\" alt=\"Sebastiaan van Steenis\" width=\"100\" height=\"100\" \/><\/p>\n<p>Sebastiaan van Steenis<\/p>\n<p>Support Engineer<\/p>\n<p><a href=\"https:\/\/github.com\/superseb\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" src=\"https:\/\/rancher.com\/img\/icon-github.svg\" alt=\"github\" width=\"30\" \/><\/a><\/p>\n<p><a href=\"https:\/\/rancher.com\/blog\/2018\/2018-09-26-setup-basic-kubernetes-cluster-with-ease-using-rke\/\" target=\"_blank\" 
rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Expert Training in Kubernetes and Rancher Join our free online training sessions to learn how to manage Kubernetes workloads with Rancher. Sign up here In this post, you will go from 3 Ubuntu 16.04 nodes to a basic Kubernetes cluster in a few simple steps. To accomplish this, you will be using Rancher Kubernetes &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/16\/setup-a-basic-kubernetes-cluster-with-ease-using-rke\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Setup a basic Kubernetes cluster with ease using RKE&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-273","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/273","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/comments?post=273"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/273\/revisions"}],"predecessor-version":[{"id":275,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/posts\/273\/revisions\/275"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/media?parent=273"}],"wp:term"
:[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/categories?post=273"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw93\/index.php\/wp-json\/wp\/v2\/tags?post=273"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}