{"id":505,"date":"2018-10-17T15:18:24","date_gmt":"2018-10-17T15:18:24","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw92\/?p=505"},"modified":"2018-10-17T16:03:08","modified_gmt":"2018-10-17T16:03:08","slug":"configure-active-passive-nfs-server-on-a-pacemaker-cluster-with-puppet-lisenet-com-linux-security","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw92\/index.php\/2018\/10\/17\/configure-active-passive-nfs-server-on-a-pacemaker-cluster-with-puppet-lisenet-com-linux-security\/","title":{"rendered":"Configure Active\/Passive NFS Server on a Pacemaker Cluster with Puppet | Lisenet.com :: Linux | Security"},"content":{"rendered":"<p>We\u2019re going to use Puppet to install Pacemaker\/Corosync and configure an NFS cluster.<\/p>\n<p>For instructions on how to compile fence_pve on CentOS 7, scroll to the bottom of the page.<\/p>\n<p>This article is part of the <a href=\"https:\/\/www.lisenet.com\/2018\/homelab-project-with-kvm-katello-and-puppet\/\" target=\"_blank\" rel=\"noopener\">Homelab Project with KVM, Katello and Puppet<\/a> series.<\/p>\n<h2>Homelab<\/h2>\n<p>We have two CentOS 7 servers installed which we want to configure as follows:<\/p>\n<p>storage1.hl.local (10.11.1.15) \u2013 Pacemaker cluster node<br \/>\nstorage2.hl.local (10.11.1.16) \u2013 Pacemaker cluster node<\/p>\n<p>SELinux set to enforcing mode.<\/p>\n<p>See the image below to identify the homelab part this article applies to.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.lisenet.com\/wp-content\/uploads\/2018\/04\/lisenet-homelab-diagram_nfs.png\" alt=\"\" width=\"1200\" height=\"793\" \/><\/p>\n<h2>Cluster Requirements<\/h2>\n<p>To configure the cluster, we are going to need the following:<\/p>\n<ol>\n<li>A virtual IP address, required for the NFS server.<\/li>\n<li>Shared storage for the NFS nodes in the cluster.<\/li>\n<li>A power fencing device for each node of the cluster.<\/li>\n<\/ol>\n<p>The virtual IP is 10.11.1.31 (with the 
DNS name of nfsvip.hl.local).<\/p>\n<p>With regards to shared storage, while I agree that iSCSI would be ideal, the truth is that \u201c<a href=\"https:\/\/www.lisenet.com\/wp-content\/uploads\/2018\/04\/no-money.jpg\" target=\"_blank\" rel=\"noopener\">we don\u2019t have that kind of money<\/a>\u201d. We will have to make do with a disk shared among different VMs on the same Proxmox host.<\/p>\n<p>In terms of fencing, as mentioned earlier, Proxmox does not use libvirt, therefore Pacemaker clusters cannot be fenced by using fence-agents-virsh. There is fence_pve available, but we won\u2019t find it in CentOS\/RHEL. We\u2019ll need to compile it from source.<\/p>\n<h2>Proxmox and Disk Sharing<\/h2>\n<p>I was unable to find a WebUI way to add an existing disk to another VM. The Proxmox <a href=\"https:\/\/forum.proxmox.com\/threads\/shared-virtual-disks-among-different-vms-on-same-host.11088\/\" target=\"_blank\" rel=\"noopener\">forum<\/a> was somewhat helpful, and I ended up manually editing the VM\u2019s config file, since the WebUI would not let me assign the same disk to two VMs.<\/p>\n<p>Take a look at the following image, showing two disks attached to the storage1.hl.local node:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.lisenet.com\/wp-content\/uploads\/2018\/04\/lisenet-homelab-shared-disk.png\" alt=\"\" width=\"679\" height=\"265\" \/><\/p>\n<p>We want to use the smaller (2GB) disk for NFS.<\/p>\n<p>The VM ID of the storage2.hl.local node is 208 (see <a href=\"https:\/\/www.lisenet.com\/2018\/homelab-project-with-kvm-katello-and-puppet\/\" target=\"_blank\" rel=\"noopener\">here<\/a>), therefore we can add the disk by editing the node\u2019s configuration file.<\/p>\n<p># cat \/etc\/pve\/qemu-server\/208.conf<br \/>\nboot: cn<br \/>\nbootdisk: scsi0<br \/>\ncores: 1<br \/>\nhotplug: disk,cpu<br \/>\nmemory: 768<br \/>\nname: storage2.hl.local<br \/>\nnet0: virtio=00:22:FF:00:00:16,bridge=vmbr0<br \/>\nonboot: 1<br \/>\nostype: l26<br 
\/>\nscsi0: data_ssd:208\/vm-208-disk-1.qcow2,size=32G<br \/>\nscsi1: data_ssd:207\/vm-207-disk-3.qcow2,size=2G<br \/>\nscsihw: virtio-scsi-pci<br \/>\nsmbios1: uuid=030e28da-72e6-412d-be77-a79f06862351<br \/>\nsockets: 1<br \/>\nstartup: order=208<\/p>\n<p>The disk that we\u2019ve added is scsi1. Note how it references the VM ID 207.<\/p>\n<p>The disk will be visible on both nodes as \/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.<\/p>\n<h2>Configuration with Puppet<\/h2>\n<p>The Puppet master runs on the <a href=\"https:\/\/www.lisenet.com\/2016\/install-katello-on-centos-7\/\" target=\"_blank\" rel=\"noopener\">Katello<\/a> server.<\/p>\n<h3>Puppet Modules<\/h3>\n<p>We use the <a href=\"https:\/\/forge.puppet.com\/puppet\/corosync\" target=\"_blank\" rel=\"noopener\">puppet-corosync<\/a> module to configure the cluster. We also use the <a href=\"https:\/\/forge.puppet.com\/puppetlabs\/accounts\" target=\"_blank\" rel=\"noopener\">puppetlabs-accounts<\/a> module for Linux account creation.<\/p>\n<p>Please see each module\u2019s documentation for supported features and available configuration options.<\/p>\n<h3>Configure Firewall<\/h3>\n<p>It is essential to ensure that the Pacemaker servers can talk to each other. 
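<\/p>\n<p>For reference, the ports opened below are the standard RHEL\/CentOS 7 High Availability ports: TCP 2224 (pcsd), TCP 3121 (Pacemaker Remote), TCP 5403 (corosync-qnetd), UDP 5404 and 5405 (corosync), and TCP 21064 (DLM), plus the NFS-related ports 2049 (nfsd), 20048 (mountd) and 111 (rpcbind). Once the rules are in place, a quick sanity check from the peer node might look like this (assuming the nmap-ncat package provides nc):<\/p>\n<p># nc -z -w2 storage2.hl.local 2224 &amp;&amp; echo OK<\/p>\n<p>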
The following needs applying to both cluster nodes:<\/p>\n<p>firewall { '007 accept HA cluster requests':<br \/>\ndport =&gt; ['2224', '3121', '5403', '21064'],<br \/>\nproto =&gt; 'tcp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '008 accept HA cluster requests':<br \/>\ndport =&gt; ['5404', '5405'],<br \/>\nproto =&gt; 'udp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '009 accept NFS requests':<br \/>\ndport =&gt; ['2049'],<br \/>\nproto =&gt; 'tcp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '010 accept TCP mountd requests':<br \/>\ndport =&gt; ['20048'],<br \/>\nproto =&gt; 'tcp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '011 accept UDP mountd requests':<br \/>\ndport =&gt; ['20048'],<br \/>\nproto =&gt; 'udp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '012 accept TCP rpc-bind requests':<br \/>\ndport =&gt; ['111'],<br \/>\nproto =&gt; 'tcp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}-&gt;<br \/>\nfirewall { '013 accept UDP rpc-bind requests':<br \/>\ndport =&gt; ['111'],<br \/>\nproto =&gt; 'udp',<br \/>\nsource =&gt; '10.11.1.0\/24',<br \/>\naction =&gt; 'accept',<br \/>\n}<\/p>\n<h3>Create Apache User and NFS Mountpoint<\/h3>\n<p>Before we configure the cluster, we 
need to make sure that we have the nfs-utils package installed and that the nfs-lock service is disabled \u2013 it will be managed by Pacemaker.<\/p>\n<p>The Apache user is created in order to match ownership and allow web servers to write to the NFS share.<\/p>\n<p>The following needs applying to both cluster nodes:<\/p>\n<p>package { 'nfs-utils': ensure =&gt; 'installed' }-&gt;<br \/>\nservice { 'nfs-lock': enable =&gt; false }-&gt;<br \/>\naccounts::user { 'apache':<br \/>\ncomment =&gt; 'Apache',<br \/>\nuid =&gt; '48',<br \/>\ngid =&gt; '48',<br \/>\nshell =&gt; '\/sbin\/nologin',<br \/>\npassword =&gt; '!!',<br \/>\nhome =&gt; '\/usr\/share\/httpd',<br \/>\nhome_mode =&gt; '0755',<br \/>\nlocked =&gt; false,<br \/>\n}-&gt;<br \/>\nfile { '\/nfsshare':<br \/>\nensure =&gt; 'directory',<br \/>\nowner =&gt; 'root',<br \/>\ngroup =&gt; 'root',<br \/>\nmode =&gt; '0755',<br \/>\n}<\/p>\n<h3>Configure Pacemaker\/Corosync on storage1.hl.local<\/h3>\n<p>We disable STONITH initially because the fencing agent fence_pve is simply not available yet. We will compile it later; however, it is not required in order to get the cluster into an operational state.<\/p>\n<p>We use colocation constraints to keep primitives together. While a colocation constraint defines that a set of primitives must live together on the same node, order constraints define the order in which each primitive is started. This is important, as we want to make sure that we start cluster resources in the correct order.<\/p>\n<p>Note how we configure NFS exports to be available to two specific clients only: web1.hl.local and web2.hl.local. 
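<\/p>\n<p>Once the cluster is up, a client listed in clientspec should be able to mount the share via the virtual IP. A quick sketch, run from web1.hl.local (the \/mnt mountpoint is just an example):<\/p>\n<p># mount -t nfs nfsvip.hl.local:\/nfsshare \/mnt<br \/>\n# df -h \/mnt<\/p>\n<p>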
In reality there is no need for any other homelab server to have access to the NFS share.<\/p>\n<p>We make the apache user the owner of the NFS share, and export it with no_all_squash.<\/p>\n<p>class { 'corosync':<br \/>\nauthkey =&gt; '\/etc\/puppetlabs\/puppet\/ssl\/certs\/ca.pem',<br \/>\nbind_address =&gt; $::ipaddress,<br \/>\ncluster_name =&gt; 'nfs_cluster',<br \/>\nenable_secauth =&gt; true,<br \/>\nenable_corosync_service =&gt; true,<br \/>\nenable_pacemaker_service =&gt; true,<br \/>\nset_votequorum =&gt; true,<br \/>\nquorum_members =&gt; [ 'storage1.hl.local', 'storage2.hl.local' ],<br \/>\n}<br \/>\ncorosync::service { 'pacemaker':<br \/>\n<em>## See: https:\/\/wiki.clusterlabs.org\/wiki\/Pacemaker<\/em><br \/>\nversion =&gt; '1.1',<br \/>\n}-&gt;<br \/>\ncs_property { 'stonith-enabled':<br \/>\nvalue =&gt; 'false',<br \/>\n}-&gt;<br \/>\ncs_property { 'no-quorum-policy':<br \/>\nvalue =&gt; 'ignore',<br \/>\n}-&gt;<br \/>\ncs_primitive { 'nfsshare':<br \/>\nprimitive_class =&gt; 'ocf',<br \/>\nprimitive_type =&gt; 'Filesystem',<br \/>\nprovided_by =&gt; 'heartbeat',<br \/>\nparameters =&gt; { 'device' =&gt; '\/dev\/disk\/by-id\/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1', 'directory' =&gt; '\/nfsshare', 'fstype' =&gt; 'ext4' },<br \/>\n}-&gt;<br \/>\ncs_primitive { 'nfsd':<br \/>\nprimitive_class =&gt; 'ocf',<br \/>\nprimitive_type =&gt; 'nfsserver',<br \/>\nprovided_by =&gt; 'heartbeat',<br \/>\nparameters =&gt; { 'nfs_shared_infodir' =&gt; '\/nfsshare\/nfsinfo' },<br \/>\nrequire =&gt; Cs_primitive['nfsshare'],<br \/>\n}-&gt;<br \/>\ncs_primitive { 'nfsroot1':<br \/>\nprimitive_class =&gt; 'ocf',<br 
\/>\nprimitive_type =&gt; 'exportfs',<br \/>\nprovided_by =&gt; 'heartbeat',<br \/>\nparameters =&gt; { 'clientspec' =&gt; 'web1.hl.local', 'options' =&gt; 'rw,async,no_root_squash,no_all_squash', 'directory' =&gt; '\/nfsshare', 'fsid' =&gt; '0' },<br \/>\nrequire =&gt; Cs_primitive['nfsd'],<br \/>\n}-&gt;<br \/>\ncs_primitive { 'nfsroot2':<br \/>\nprimitive_class =&gt; 'ocf',<br \/>\nprimitive_type =&gt; 'exportfs',<br \/>\nprovided_by =&gt; 'heartbeat',<br \/>\nparameters =&gt; { 'clientspec' =&gt; 'web2.hl.local', 'options' =&gt; 'rw,async,no_root_squash,no_all_squash', 'directory' =&gt; '\/nfsshare', 'fsid' =&gt; '0' },<br \/>\nrequire =&gt; Cs_primitive['nfsd'],<br \/>\n}-&gt;<br \/>\ncs_primitive { 'nfsvip':<br \/>\nprimitive_class =&gt; 'ocf',<br \/>\nprimitive_type =&gt; 'IPaddr2',<br \/>\nprovided_by =&gt; 'heartbeat',<br \/>\nparameters =&gt; { 'ip' =&gt; '10.11.1.31', 'cidr_netmask' =&gt; '24' },<br \/>\nrequire =&gt; Cs_primitive['nfsroot1', 'nfsroot2'],<br \/>\n}-&gt;<br \/>\ncs_colocation { 'nfsshare_nfsd_nfsroot_nfsvip':<br \/>\nprimitives =&gt; [<br \/>\n[ 'nfsshare', 'nfsd', 'nfsroot1', 'nfsroot2', 'nfsvip' ],<br \/>\n],<br \/>\n}-&gt;<br \/>\ncs_order { 'nfsshare_before_nfsd':<br \/>\nfirst =&gt; 'nfsshare',<br \/>\nsecond =&gt; 'nfsd',<br \/>\nrequire =&gt; Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],<br \/>\n}-&gt;<br \/>\ncs_order { 'nfsd_before_nfsroot1':<br \/>\nfirst =&gt; 'nfsd',<br \/>\nsecond =&gt; 
'nfsroot1',<br \/>\nrequire =&gt; Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],<br \/>\n}-&gt;<br \/>\ncs_order { 'nfsroot1_before_nfsroot2':<br \/>\nfirst =&gt; 'nfsroot1',<br \/>\nsecond =&gt; 'nfsroot2',<br \/>\nrequire =&gt; Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],<br \/>\n}-&gt;<br \/>\ncs_order { 'nfsroot2_before_nfsvip':<br \/>\nfirst =&gt; 'nfsroot2',<br \/>\nsecond =&gt; 'nfsvip',<br \/>\nrequire =&gt; Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],<br \/>\n}-&gt;<br \/>\nfile { '\/nfsshare\/uploads':<br \/>\nensure =&gt; 'directory',<br \/>\nowner =&gt; 'apache',<br \/>\ngroup =&gt; 'root',<br \/>\nmode =&gt; '0755',<br \/>\n}<\/p>\n<h3>Configure Pacemaker\/Corosync on storage2.hl.local<\/h3>\n<p>class { 'corosync':<br \/>\nauthkey =&gt; '\/etc\/puppetlabs\/puppet\/ssl\/certs\/ca.pem',<br \/>\nbind_address =&gt; $::ipaddress,<br \/>\ncluster_name =&gt; 'nfs_cluster',<br \/>\nenable_secauth =&gt; true,<br \/>\nenable_corosync_service =&gt; true,<br \/>\nenable_pacemaker_service =&gt; true,<br \/>\nset_votequorum =&gt; true,<br \/>\nquorum_members =&gt; [ 'storage1.hl.local', 'storage2.hl.local' ],<br \/>\n}<br \/>\ncorosync::service { 'pacemaker':<br \/>\nversion =&gt; '1.1',<br \/>\n}-&gt;<br \/>\ncs_property { 'stonith-enabled':<br \/>\nvalue =&gt; 'false',<br \/>\n}<\/p>\n<h3>Cluster Status<\/h3>\n<p>If all went well, we should have our cluster up and running at this point.<\/p>\n<p>[root@storage1 ~]# pcs status<br \/>\nCluster name: nfs_cluster<br \/>\nStack: corosync<br \/>\nCurrent DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum<br \/>\nLast updated: Sun Apr 29 17:04:50 
2018<br \/>\nLast change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local<\/p>\n<p>2 nodes configured<br \/>\n5 resources configured<\/p>\n<p>Online: [ storage1.hl.local storage2.hl.local ]<\/p>\n<p>Full list of resources:<\/p>\n<p>nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local<br \/>\nnfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local<br \/>\nnfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local<\/p>\n<p>Daemon Status:<br \/>\ncorosync: active\/enabled<br \/>\npacemaker: active\/enabled<br \/>\npcsd: inactive\/disabled<br \/>\n[root@storage2 ~]# pcs status<br \/>\nCluster name: nfs_cluster<br \/>\nStack: corosync<br \/>\nCurrent DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum<br \/>\nLast updated: Sun Apr 29 17:05:04 2018<br \/>\nLast change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local<\/p>\n<p>2 nodes configured<br \/>\n5 resources configured<\/p>\n<p>Online: [ storage1.hl.local storage2.hl.local ]<\/p>\n<p>Full list of resources:<\/p>\n<p>nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local<br \/>\nnfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local<br \/>\nnfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local<\/p>\n<p>Daemon Status:<br \/>\ncorosync: active\/enabled<br \/>\npacemaker: active\/enabled<br \/>\npcsd: inactive\/disabled<\/p>\n<p>Test cluster failover by putting the active node into standby:<\/p>\n<p>[root@storage1 ~]# pcs node standby<\/p>\n<p>Services should become available on the other cluster node:<\/p>\n<p>[root@storage2 ~]# pcs status<br \/>\nCluster name: nfs_cluster<br \/>\nStack: corosync<br \/>\nCurrent DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum<br \/>\nLast updated: Sun Apr 29 17:06:36 2018<br \/>\nLast change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local<\/p>\n<p>2 nodes configured<br \/>\n5 resources configured<\/p>\n<p>Node storage1.hl.local: standby<br \/>\nOnline: [ storage2.hl.local ]<\/p>\n<p>Full list of resources:<\/p>\n<p>nfsshare (ocf::heartbeat:Filesystem): Started storage2.hl.local<br \/>\nnfsd (ocf::heartbeat:nfsserver): Started storage2.hl.local<br \/>\nnfsroot1 (ocf::heartbeat:exportfs): Started storage2.hl.local<br \/>\nnfsroot2 (ocf::heartbeat:exportfs): Started storage2.hl.local<br \/>\nnfsvip (ocf::heartbeat:IPaddr2): Started storage2.hl.local<\/p>\n<p>Daemon Status:<br \/>\ncorosync: active\/enabled<br \/>\npacemaker: active\/enabled<br \/>\npcsd: inactive\/disabled<\/p>\n<p>Do showmount on the virtual IP address:<\/p>\n<p>[root@storage2 ~]# showmount -e 10.11.1.31<br \/>\nExport list for 10.11.1.31:<br \/>\n\/nfsshare web1.hl.local,web2.hl.local<\/p>\n<h2>Compile fence_pve on CentOS 7<\/h2>\n<p>This is where the automated part ends, I\u2019m afraid; however, there is nothing stopping you from putting the manual steps below into a Puppet manifest.<\/p>\n<h3>Install Packages<\/h3>\n<p># yum install git gcc make automake autoconf libtool pexpect python-requests<\/p>\n<h3>Download Source and Compile<\/h3>\n<p># git clone https:\/\/github.com\/ClusterLabs\/fence-agents.git<\/p>\n<p>Note the configure step: we are interested in compiling one fencing agent only, fence_pve.<\/p>\n<p># cd fence-agents\/<br \/>\n# .\/autogen.sh<br \/>\n# .\/configure --with-agents=pve<br \/>\n# make &amp;&amp; make install<\/p>\n<p>Verify:<\/p>\n<p># fence_pve --version<br 
\/>\n4.1.1.51-6e6d<\/p>\n<h3>Configure Pacemaker to Use fence_pve<\/h3>\n<p>Big thanks to Igor Cicimov\u2019s blog <a href=\"https:\/\/icicimov.github.io\/blog\/virtualization\/Pacemaker-VM-cluster-fencing-in-Proxmox-with-fence-pve\/\" target=\"_blank\" rel=\"noopener\">post<\/a>, which helped me to get it working with minimal effort.<\/p>\n<p>To test the fencing agent, do the following:<\/p>\n<p>[root@storage1 ~]# fence_pve --ip=10.11.1.5 --nodename=pve<br \/>\n--username=root@pam --password=passwd<br \/>\n--plug=208 --action=off<\/p>\n<p>Where 10.11.1.5 is the IP of the Proxmox hypervisor, pve is the name of the Proxmox node, and the plug is the VM ID. In this case we fenced the storage2.hl.local node.<\/p>\n<p>To configure Pacemaker, we can create two STONITH configurations, one for each node that we want to be able to fence.<\/p>\n<p>[root@storage1 ~]# pcs stonith create my_proxmox_fence207 fence_pve<br \/>\nipaddr=\"10.11.1.5\" inet4_only=\"true\" vmtype=\"qemu\"<br \/>\nlogin=\"root@pam\" passwd=\"passwd\"<br \/>\nnode_name=\"pve\" delay=\"15\"<br \/>\nport=\"207\"<br \/>\npcmk_host_check=static-list<br \/>\npcmk_host_list=\"storage1.hl.local\"<br \/>\n[root@storage1 ~]# pcs stonith create my_proxmox_fence208 fence_pve<br \/>\nipaddr=\"10.11.1.5\" inet4_only=\"true\" vmtype=\"qemu\"<br \/>\nlogin=\"root@pam\" passwd=\"passwd\"<br \/>\nnode_name=\"pve\" delay=\"15\"<br \/>\nport=\"208\"<br \/>\npcmk_host_check=static-list<br 
\/>\npcmk_host_list=\"storage2.hl.local\"<\/p>\n<p>Verify:<\/p>\n<p>[root@storage1 ~]# stonith_admin -L<br \/>\nmy_proxmox_fence207<br \/>\nmy_proxmox_fence208<br \/>\n2 devices found<br \/>\n[root@storage1 ~]# pcs status<br \/>\nCluster name: nfs_cluster<br \/>\nStack: corosync<br \/>\nCurrent DC: storage1.hl.local (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum<br \/>\nLast updated: Sun Apr 29 17:50:59 2018<br \/>\nLast change: Sun Apr 29 17:50:55 2018 by root via cibadmin on storage1.hl.local<\/p>\n<p>2 nodes configured<br \/>\n7 resources configured<\/p>\n<p>Online: [ storage1.hl.local ]<br \/>\nOFFLINE: [ storage2.hl.local ]<\/p>\n<p>Full list of resources:<\/p>\n<p>nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local<br \/>\nnfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local<br \/>\nnfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local<br \/>\nnfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local<br \/>\nmy_proxmox_fence207 (stonith:fence_pve): Started storage1.hl.local<br \/>\nmy_proxmox_fence208 (stonith:fence_pve): Started storage1.hl.local<\/p>\n<p>Daemon Status:<br \/>\ncorosync: active\/enabled<br \/>\npacemaker: active\/enabled<br \/>\npcsd: inactive\/disabled<\/p>\n<p>Note how the storage2.hl.local node is down, because we\u2019ve fenced it.<\/p>\n<p>If you decide to use this test configuration, do not forget to stop the Puppet agent on the cluster nodes, as a Puppet run will disable STONITH (we set stonith-enabled to false in the manifest).<\/p>\n<p>For more info, do the following:<\/p>\n<p># pcs stonith describe fence_pve<\/p>\n<p>This will give you a list of other STONITH options available.<\/p>\n<p><a href=\"https:\/\/www.lisenet.com\/2018\/configure-active-passive-nfs-server-on-a-pacemaker-cluster-with-puppet\/\" target=\"_blank\" 
rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We\u2019re going to use Puppet to install Pacemaker\/Corosync and configure an NFS cluster. For instructions on how to compile fence_pve on CentOS 7, scroll to the bottom of the page. This article is part of the Homelab Project with KVM, Katello and Puppet series. Homelab We have two CentOS 7 servers installed which we want &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw92\/index.php\/2018\/10\/17\/configure-active-passive-nfs-server-on-a-pacemaker-cluster-with-puppet-lisenet-com-linux-security\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Configure Active\/Passive NFS Server on a Pacemaker Cluster with Puppet | Lisenet.com :: Linux | Security&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-505","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/505","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/comments?post=505"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/505\/revisions"}],"predecessor-version":[{"id":550,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/505\/revisions\/550"}],"wp:attachment":[{"href
":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/media?parent=505"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/categories?post=505"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/tags?post=505"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}