{"id":671,"date":"2018-10-22T22:16:21","date_gmt":"2018-10-22T22:16:21","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw93\/?p=671"},"modified":"2018-10-22T22:20:20","modified_gmt":"2018-10-22T22:20:20","slug":"running-highly-available-wordpress-with-mysql-on-kubernetes","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw93\/index.php\/2018\/10\/22\/running-highly-available-wordpress-with-mysql-on-kubernetes\/","title":{"rendered":"Running Highly Available WordPress with MySQL on Kubernetes"},"content":{"rendered":"<h5>Take a deep dive into Best Practices in Kubernetes Networking<\/h5>\n<p>From overlay networking and SSL to ingress controllers and network security policies, we&#8217;ve seen many users get hung up on Kubernetes networking challenges. In this video recording, we dive into Kubernetes networking, and discuss best practices for a wide variety of deployment options.<\/p>\n<p><a href=\"https:\/\/rancher.com\/events\/2018\/kubernetes-networking-masterclass-june-online-meetup\/\" target=\"blank\">Watch the video<\/a><\/p>\n<p>WordPress is a popular platform for editing and publishing content for<br \/>\nthe web. In this tutorial, I\u2019m going to walk you through how to build<br \/>\nout a highly available (HA) WordPress deployment using Kubernetes.<br \/>\nWordPress consists of two major components: the WordPress PHP server,<br \/>\nand a database to store user information, posts, and site data. We need<br \/>\nto make both of these HA for the entire application to be fault<br \/>\ntolerant. Running HA services can be difficult when hardware and<br \/>\naddresses are changing; keeping up is tough. With Kubernetes and its<br \/>\npowerful networking components, we can deploy an HA WordPress site and<br \/>\nMySQL database without typing a single IP address (almost). In this<br \/>\ntutorial, I\u2019ll be showing you how to create storage classes, services,<br \/>\nconfiguration maps, and sets in Kubernetes; run HA MySQL; and hook up an<br \/>\nHA WordPress cluster to the database service. If you don\u2019t already have<br \/>\na Kubernetes cluster, you can spin one up easily on Amazon, Google, or<br \/>\nAzure, or by using <a href=\"https:\/\/rancher.com\/an-introduction-to-rke\/\">Rancher Kubernetes Engine<br \/>\n(RKE)<\/a> on any servers.<\/p>\n<h2>Architecture Overview<\/h2>\n<p>I\u2019ll now present an overview of the technologies we\u2019ll use and their<br \/>\nfunctions:<\/p>\n<ul>\n<li>Storage for WordPress Application Files: NFS with a GCE Persistent<br \/>\nDisk Backing<\/li>\n<li>Database Cluster: MySQL with xtrabackup for parity<\/li>\n<li>Application Level: A WordPress DockerHub image mounted to NFS<br \/>\nStorage<\/li>\n<li>Load Balancing and Networking: Kubernetes-based load balancers and<br \/>\nservice networking<\/li>\n<\/ul>\n<p>The architecture is organized as shown below:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/i.imgur.com\/Urk4o79.png\" alt=\"Diagram\" \/><\/p>\n<h2>Creating Storage Classes, Services, and Configuration Maps in Kubernetes<\/h2>\n<p>In Kubernetes, stateful sets offer a way to define the order of pod<br \/>\ninitialization. We\u2019ll use a stateful set for MySQL, because it ensures<br \/>\nour data nodes have enough time to replicate records from previous pods<br \/>\nwhen spinning up. The way we configure this stateful set will allow the<br \/>\nMySQL master to spin up before any of the slaves, so cloning can happen<br \/>\ndirectly from master to slave when we scale up. 
To start, we'll need to create a persistent volume storage class and a configuration map to apply master and slave configurations as needed. We're using persistent volumes so that the data in our databases isn't tied to any specific pods in the cluster. This method protects the database from data loss if the MySQL master pod is lost. When a master pod is lost, its replacement can reconnect to the xtrabackup slaves on the slave nodes and replicate data from slave to master. MySQL's replication handles master-to-slave replication, while xtrabackup handles slave-to-master backward replication.

To dynamically allocate persistent volumes, we create the following storage class utilizing GCE Persistent Disks. (Kubernetes offers a variety of other persistent volume storage providers as well.)

```yaml
# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a
```

Create the class and deploy with this command: `$ kubectl create -f storage-class.yaml`.

Next, we'll create the configmap, which specifies a few variables to set in the MySQL configuration files. These different configurations are selected by the pods themselves, but they give us a handy way to manage potential configuration variables. Create a YAML file named `mysql-configmap.yaml` to handle this configuration as follows:

```yaml
# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    skip-host-cache
    skip-name-resolve
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    skip-host-cache
    skip-name-resolve
```

Create the configmap and deploy with this command: `$ kubectl create -f mysql-configmap.yaml`.

Next, we want to set up the service such that MySQL pods can talk to one another and our WordPress pods can talk to MySQL, using `mysql-services.yaml`. Because this is a headless service backing a stateful set, it also gives each MySQL pod a stable DNS entry.

```yaml
# mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
```

With this service declaration, we lay the groundwork for a multiple-write, multiple-read cluster of MySQL instances. This configuration is necessary because each WordPress instance can potentially write to the database, so each node must be ready to read and write. To create the service above, execute the following command: `$ kubectl create -f mysql-services.yaml`.
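Because the service is headless and fronts a stateful set, each database pod also gets a stable per-pod DNS name of the form `mysql-<ordinal>.mysql`. As a rough sanity check, assuming standard cluster DNS and that the StatefulSet from the next section is already up, you can resolve those names from a throwaway pod:

```bash
# Resolve the headless service's first member from a temporary busybox pod.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup mysql-0.mysql
```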
At this point, we've created the volume claim storage class, which will hand persistent disks to all pods that request them; the configmap that sets a few variables in the MySQL configuration files; and a network-level service that will route requests to the MySQL servers. This is all just framework for the stateful set, where the MySQL servers actually operate, and which we'll explore next.
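Before moving on, it's worth a quick check that all three building blocks exist; a minimal sketch, using the object names from the manifests above, looks like this:

```bash
# The storage class, configmap, and headless service created in this section.
kubectl get storageclass slow
kubectl get configmap mysql
kubectl get service mysql
```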
## Configuring MySQL with Stateful Sets

In this section, we'll write the YAML configuration for a MySQL instance using a stateful set. Let's define our stateful set:

- Create three pods and register them to the MySQL service.
- Define the following template for each pod:
  - Create an initialization container named init-mysql.
    - Use the mysql:5.7 image for this container.
    - Run a bash script to set up the MySQL configuration.
    - Mount two new volumes for the configuration and configmap.
  - Create a second initialization container named clone-mysql.
    - Use the Google Cloud Registry's xtrabackup:1.0 image for this container.
    - Run a bash script to clone existing xtrabackups from the previous peer.
    - Mount two new volumes for data and configuration.
    - This container effectively hosts the cloned data so the new slave containers can pick it up.
  - Create the primary containers for the MySQL servers.
    - Create a MySQL container and configure the slaves to connect to the MySQL master.
    - Create an xtrabackup sidecar container and configure it to connect to the xtrabackup master.
- Create a volume claim template to describe each volume to be created as a 10GB persistent disk.

The following configuration defines behavior for the masters and slaves of our MySQL cluster, offering a bash configuration that runs the slave client and ensures proper operation of a master before cloning. Slaves and masters each get their own 10GB volume, which they request from the persistent volume storage class we defined earlier.

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}', MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Save this file as `mysql-statefulset.yaml`. Type `kubectl create -f mysql-statefulset.yaml` and let Kubernetes deploy your database. Now, when you call `$ kubectl get pods`, you should see three pods spinning up or ready, each with two containers on them. The master pod is denoted as mysql-0, and the slaves follow as mysql-1 and mysql-2. Give the pods a few minutes to make sure the xtrabackup service is synced properly between pods, then move on to the WordPress deployment. You can check the logs of the individual containers to confirm that no error messages are being thrown. To do this, run `$ kubectl logs -f <pod_name> -c <container_name>`. The master's xtrabackup container should show the two connections from the slaves, and no errors should be visible in the logs.
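As a sketch of what that verification can look like (pod and container names come from the StatefulSet above, and the empty root password is permitted by MYSQL_ALLOW_EMPTY_PASSWORD), you might tail the master's xtrabackup sidecar and ask a slave for its replication status:

```bash
# Stream the xtrabackup sidecar logs on the master pod.
kubectl logs -f mysql-0 -c xtrabackup

# Ask a slave whether replication from mysql-0 is running.
kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G"
```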
## Deploying Highly Available WordPress

The final step in this procedure is to deploy our WordPress pods onto the cluster. To do this, we want to define a service for WordPress and a deployment. For WordPress to be HA, we want every container running the server to be fully replaceable, meaning we can terminate one and spin up another with no change to data or service availability. We also want to tolerate at least one failed container, having a redundant container there to pick up the slack.

WordPress stores important site-relevant data in the application directory /var/www/html. For two instances of WordPress to serve the same site, that folder has to contain identical data. When running WordPress in HA, we need to share the /var/www/html folders between instances, so we'll define an NFS service that will be the mount point for these volumes. The following configuration sets up the NFS services. I've provided the plain-English version below:

- Define a persistent volume claim to create our shared NFS disk as a GCE persistent disk at size 200GB.
- Define a replication controller for the NFS server, which will ensure at least one instance of the NFS server is running at all times.
- Open ports 2049, 20048, and 111 in the container to make the NFS share accessible.
- Use the Google Cloud Registry's volume-nfs:0.8 image for the NFS server.
- Define a service for the NFS server to handle IP address routing.
- Allow the necessary ports through that service firewall.

```yaml
# nfs.yaml
# Define the persistent volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
  labels:
    demo: nfs
  annotations:
    volume.alpha.kubernetes.io/storage-class: any
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
---
# Define the Replication Controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: nfs-pvc
      volumes:
      - name: nfs-pvc
        persistentVolumeClaim:
          claimName: nfs
---
# Define the Service
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server
```

Deploy the NFS server using `$ kubectl create -f nfs.yaml`. Now, we need to run `$ kubectl describe services nfs-server` to get the IP address to use below. Note: in the future, we'll be able to tie these together using the service names, but for now, you have to hardcode the IP address.
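If you'd rather not pick the address out of the describe output by hand, kubectl's jsonpath output can print just the ClusterIP; this is the value that replaces the placeholder in the WordPress manifest below (the service name comes from nfs.yaml above):

```bash
# Print only the ClusterIP of the NFS service.
kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'
```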
```yaml
# wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20G
  accessModes:
  - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: <IP of the NFS Service>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 20G
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.9-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: nfs
```

We've now created a persistent volume claim that maps to the NFS service we created earlier. It then attaches the volume to the WordPress pod at the /var/www/html root, where WordPress is installed. This preserves all installation and environment data across WordPress pods in the cluster. With this configuration, we can spin up and tear down any WordPress node and the data will remain. Because the NFS service is constantly using the physical volume, it will retain the volume and won't recycle or misallocate it.

Deploy the WordPress instances using `$ kubectl create -f wordpress.yaml`. The default deployment only runs a single instance of WordPress, so feel free to scale up the number of WordPress instances using `$ kubectl scale --replicas=<number of replicas> deployment/wordpress`. To obtain the address of the WordPress service load balancer, type `$ kubectl get services wordpress` and grab the EXTERNAL-IP field from the result to navigate to WordPress.
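One informal way to convince yourself that the pods really do share the same NFS-backed directory is to write a file from one WordPress pod and read it from another. This assumes you've scaled to at least two replicas; the pod names below are placeholders for whatever `kubectl get pods` reports:

```bash
# Create a marker file from one WordPress pod...
kubectl exec <wordpress-pod-a> -- touch /var/www/html/nfs-share-test

# ...and confirm it is visible from a different pod.
kubectl exec <wordpress-pod-b> -- ls /var/www/html/nfs-share-test
```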
## Resilience Testing

OK, now that we've deployed our services, let's start tearing them down to see how well our HA architecture handles some chaos. In this approach, the only single point of failure left is the NFS service (for reasons explained in the Conclusion). You should be able to test the failure of any of the other services and see how the application responds. I've started with three replicas of the WordPress service and one master and two slaves on the MySQL service.

First, let's kill all but one WordPress node and see how the application reacts: `$ kubectl scale --replicas=1 deployment/wordpress`. Now, we should see a drop in pod count for the WordPress deployment: `$ kubectl get pods`. We should see that only one WordPress pod is still running (1/1). When hitting the WordPress service IP, we'll see the same site and same database as before. To scale back up, we can use `$ kubectl scale --replicas=3 deployment/wordpress`. We'll again see that data is preserved across all three instances.

To test the MySQL StatefulSet, we can scale down the number of replicas using the following: `$ kubectl scale statefulsets mysql --replicas=1`. We'll lose both slaves in this instance, and if the master were lost at this moment, its data would be preserved on the GCE Persistent Disk; however, we'd have to manually recover the data from the disk. If all three MySQL nodes go down, you won't be able to replicate when new nodes come up. However, if only the master node goes down, a new master will be spun up and, via xtrabackup, it will be repopulated with the data from a slave. Therefore, I don't recommend ever running with a replication factor of less than three for production databases. To conclude, let's talk about some better solutions for your stateful data, as Kubernetes isn't really designed for state.

## Conclusions and Caveats

You've now built and deployed an HA WordPress and MySQL installation on Kubernetes! Despite this great achievement, your journey may be far from over. If you haven't noticed, our installation still has a single point of failure: the NFS server sharing the /var/www/html directory between WordPress pods. This service represents a single point of failure because, without it running, the html folder disappears on the pods using it. The image I've selected for the server is incredibly stable and production ready, but for a true production deployment, you may consider using [GlusterFS](https://github.com/gluster/gluster-kubernetes) to enable multi-read, multi-write access to the directory shared by WordPress instances. This process involves running a distributed storage cluster on Kubernetes, which isn't really what Kubernetes is built for, so despite it *working*, it isn't a great option for long-term deployments.

For the database, I'd personally recommend using a managed relational database service to host the MySQL instance, be it Google's CloudSQL or AWS's RDS, as they provide HA and redundancy at a more sensible price and keep you from worrying about data integrity. Kubernetes isn't really designed around stateful applications, and any state built into it is more of an afterthought. Plenty of solutions exist that offer much more of the assurances one would look for when picking a database service.
That being said, the configuration presented above is a labor of love, a hodgepodge of Kubernetes tutorials and examples found across the web, stitched together into a cohesive, realistic use case for Kubernetes and all the new features in Kubernetes 1.8.x. I hope your experiences deploying WordPress and MySQL using the guide I've prepared for you are a bit less exciting than the ones I had ironing out bugs in the configurations, and of course, I wish you eternal uptime. That's all for now. Tune in next time, when I teach you to drive a boat using only a Myo gesture band and a cluster of Linode instances running Tails Linux.

### About the Author

![Eric Volpert](http://cdn.rancher.com/wp-content/uploads/2017/10/02185842/Eric-300x300.png)

Eric Volpert is a student at the University of Chicago and works as an evangelist, growth hacker, and writer for Rancher Labs. He enjoys any engineering challenge. He's spent the last three summers as an internal tools engineer at Bloomberg and a year building DNS services for the Secure Domain Foundation with CrowdStrike. Eric enjoys many forms of music ranging from EDM to High Baroque, playing MOBAs and other action-packed games on his PC, and late-night hacking sessions, duct-taping APIs together so he can make coffee with a voice command.

[Source](https://rancher.com/running-highly-available-wordpress-mysql-kubernetes/)