{"id":8147,"date":"2019-01-14T19:51:28","date_gmt":"2019-01-14T19:51:28","guid":{"rendered":"https:\/\/www.appservgrid.com\/paw92\/?p=8147"},"modified":"2019-01-24T03:23:29","modified_gmt":"2019-01-24T03:23:29","slug":"how-to-setup-drbd-to-replicate-storage-on-two-centos-7-servers","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/01\/14\/how-to-setup-drbd-to-replicate-storage-on-two-centos-7-servers\/","title":{"rendered":"How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers"},"content":{"rendered":"<p><strong>DRBD<\/strong>\u00a0(short for\u00a0<strong>Distributed Replicated Block Device<\/strong>) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the content of block devices such as hard disks, partitions and logical volumes between servers. It keeps a copy of the data on two storage devices, so that if one fails, the data on the other can still be used.<\/p>\n<p>You can think of it somewhat like a\u00a0<a href=\"https:\/\/www.tecmint.com\/create-raid1-in-linux\/\" target=\"_blank\" rel=\"noopener\">network RAID 1 configuration<\/a>\u00a0with the disks mirrored across servers. However, it operates in a very different way from RAID and even network RAID.<\/p>\n<p>Originally,\u00a0<strong>DRBD<\/strong>\u00a0was mainly used in high availability (HA) computer clusters; however, starting with version 9, it can also be used to deploy cloud storage solutions.<\/p>\n<p>In this article, we will show how to install DRBD on CentOS and briefly demonstrate how to use it to replicate storage (a partition) across two servers. 
This is the perfect article to get you started with using DRBD in Linux.<\/p>\n<h4>Testing Environment<\/h4>\n<p>For the purpose of this article, we are using a two-node cluster for this setup.<\/p>\n<ul>\n<li><strong>Node1<\/strong>: 192.168.56.101 \u2013 tecmint.tecmint.lan<\/li>\n<li><strong>Node2<\/strong>: 192.168.56.102 \u2013 server1.tecmint.lan<\/li>\n<\/ul>\n<h3>Step 1: Installing DRBD Packages<\/h3>\n<p><strong>DRBD<\/strong>\u00a0is implemented as a Linux kernel module. It constitutes a driver for a virtual block device, so it sits right near the bottom of the system\u2019s I\/O stack.<\/p>\n<p><strong>DRBD<\/strong>\u00a0can be installed from the\u00a0<strong>ELRepo<\/strong>\u00a0or\u00a0<strong>EPEL<\/strong>\u00a0repositories. Let\u2019s start by importing the ELRepo package signing key and enabling the repository, as shown, on both nodes.<\/p>\n<pre># rpm --import https:\/\/www.elrepo.org\/RPM-GPG-KEY-elrepo.org\r\n# rpm -Uvh http:\/\/www.elrepo.org\/elrepo-release-7.0-3.el7.elrepo.noarch.rpm\r\n<\/pre>\n<p>Then we can install the DRBD kernel module and utilities on both nodes by running:<\/p>\n<pre># yum install -y kmod-drbd84 drbd84-utils\r\n<\/pre>\n<p>If you have\u00a0<strong>SELinux<\/strong>\u00a0enabled, you need to modify the policies to exempt DRBD processes from SELinux control.<\/p>\n<pre># semanage permissive -a drbd_t\r\n<\/pre>\n<p>In addition, if your system has a firewall enabled (firewalld), you need to open the\u00a0<strong>DRBD<\/strong>\u00a0port\u00a0<strong>7789<\/strong>\u00a0in the firewall to allow synchronization of data between the two nodes.<\/p>\n<p>Run these commands on the first node:<\/p>\n<pre># firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.56.102\" port port=\"7789\" protocol=\"tcp\" accept'\r\n# firewall-cmd --reload\r\n<\/pre>\n<p>Then run these commands on the second node:<\/p>\n<pre># firewall-cmd --permanent --add-rich-rule='rule family=\"ipv4\" source 
address=\"192.168.56.101\" port port=\"7789\" protocol=\"tcp\" accept'\r\n# firewall-cmd --reload\r\n<\/pre>\n<h3>Step 2: Preparing Lower-level Storage<\/h3>\n<p>Now that we have\u00a0<strong>DRBD<\/strong>\u00a0installed on the two cluster nodes, we must prepare a roughly identically sized storage area on both nodes. This can be a hard drive partition (or a full physical hard drive), a software RAID device, an\u00a0<a href=\"https:\/\/www.tecmint.com\/create-lvm-storage-in-linux\/\" target=\"_blank\" rel=\"noopener\">LVM Logical Volume<\/a>\u00a0or any other block device type found on your system.<\/p>\n<p>For the purpose of this article, we will zero-fill a\u00a0<strong>2GB<\/strong>\u00a0area on the partition using the\u00a0<strong>dd command<\/strong>, so that we start with an empty, identically prepared partition on both nodes.<\/p>\n<pre># dd if=\/dev\/zero of=\/dev\/sdb1 bs=2048k count=1024\r\n<\/pre>\n<p>We will assume that this is an unused partition (<strong>\/dev\/sdb1<\/strong>) on a second block device (<strong>\/dev\/sdb<\/strong>) attached to both nodes.<\/p>\n<h3>Step 3: Configuring DRBD<\/h3>\n<p>DRBD\u2019s main configuration file is located at\u00a0<strong>\/etc\/drbd.conf<\/strong>\u00a0and additional config files can be found in the\u00a0<strong>\/etc\/drbd.d<\/strong>\u00a0directory.<\/p>\n<p>To replicate storage, we need to add the necessary configurations in the\u00a0<strong>\/etc\/drbd.d\/global_common.conf<\/strong>\u00a0file, which contains the global and common sections of the DRBD configuration, and we can define resources in\u00a0<strong>.res<\/strong>\u00a0files.<\/p>\n<p>Let\u2019s make a backup of the original file on both nodes, then open a new file for editing (use a text editor of your liking).<\/p>\n<pre># mv \/etc\/drbd.d\/global_common.conf \/etc\/drbd.d\/global_common.conf.orig\r\n# vim \/etc\/drbd.d\/global_common.conf \r\n<\/pre>\n<p>Add the following lines to the file, on both nodes:<\/p>\n<pre>global {\r\n usage-count yes;\r\n}\r\ncommon {\r\n net {\r\n  protocol C;\r\n }\r\n}\r\n<\/pre>\n<p>Save the 
file, and then close the editor.<\/p>\n<p>Let\u2019s briefly shed some more light on the line\u00a0<strong>protocol C<\/strong>. DRBD supports three distinct replication modes (thus three degrees of replication synchronicity), which are:<\/p>\n<ul>\n<li><strong>protocol A<\/strong>: asynchronous replication protocol; it\u2019s most often used in long-distance replication scenarios.<\/li>\n<li><strong>protocol B<\/strong>: semi-synchronous replication protocol, also known as memory synchronous protocol.<\/li>\n<li><strong>protocol C<\/strong>: synchronous replication protocol, commonly used for nodes in short-distance networks; it\u2019s by far the most commonly used replication protocol in DRBD setups.<\/li>\n<\/ul>\n<p><strong>Important<\/strong>: The choice of replication protocol influences two factors of your deployment:\u00a0<strong>protection<\/strong>\u00a0and\u00a0<strong>latency<\/strong>.\u00a0<strong>Throughput<\/strong>, by contrast, is largely independent of the replication protocol selected.<\/p>\n<h3>Step 4: Adding a Resource<\/h3>\n<p>A\u00a0<strong>resource<\/strong>\u00a0is the collective term that refers to all aspects of a particular replicated data set. 
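For reference, resource-level settings override those in the common section, so a different replication protocol can also be chosen per resource. A minimal sketch (the resource name r0 here is only illustrative, not part of this setup):

```
resource r0 {
        net {
                protocol A;   # asynchronous; e.g. for a long-distance peer
        }
}
```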
We will define our resource in a file called\u00a0<strong>\/etc\/drbd.d\/test.res<\/strong>.<\/p>\n<p>Add the following content to the file, on both nodes (remember to replace the variables in the content with the actual values for your environment).<\/p>\n<p>Take note of the\u00a0<strong>hostnames<\/strong>; we need to specify the network hostname, which can be obtained by running the command\u00a0<strong>uname -n<\/strong>.<\/p>\n<pre>resource test {\r\n        on tecmint.tecmint.lan {\r\n                device \/dev\/drbd0;\r\n                disk \/dev\/sdb1;\r\n                meta-disk internal;\r\n                address 192.168.56.101:7789;\r\n        }\r\n        on server1.tecmint.lan {\r\n                device \/dev\/drbd0;\r\n                disk \/dev\/sdb1;\r\n                meta-disk internal;\r\n                address 192.168.56.102:7789;\r\n        }\r\n}\r\n<\/pre>\n<p>where:<\/p>\n<ul>\n<li><strong>test<\/strong>: the name of the new resource.<\/li>\n<li><strong>on hostname<\/strong>: the on section states which host the enclosed configuration statements apply to.<\/li>\n<li><strong>device \/dev\/drbd0<\/strong>: specifies the new virtual block device managed by DRBD.<\/li>\n<li><strong>disk \/dev\/sdb1<\/strong>: the block device partition that serves as the backing device for the DRBD device.<\/li>\n<li><strong>meta-disk<\/strong>: defines where DRBD stores its metadata. 
Using\u00a0<strong>internal<\/strong>\u00a0means that DRBD stores its metadata on the same physical lower-level device as the actual production data.<\/li>\n<li><strong>address<\/strong>: specifies the IP address and port number of the respective node.<\/li>\n<\/ul>\n<p>Also note that if the options have equal values on both hosts, you can specify them directly in the resource section.<\/p>\n<p>For example, the above configuration can be restructured to:<\/p>\n<pre>resource test {\r\n        device \/dev\/drbd0;\r\n        disk \/dev\/sdb1;\r\n        meta-disk internal;\r\n        on tecmint.tecmint.lan {\r\n                address 192.168.56.101:7789;\r\n        }\r\n        on server1.tecmint.lan {\r\n                address 192.168.56.102:7789;\r\n        }\r\n}\r\n<\/pre>\n<h3>Step 5: Initializing and Enabling the Resource<\/h3>\n<p>To interact with\u00a0<strong>DRBD<\/strong>, we will use the following administration tools, which communicate with the kernel module in order to configure and administer DRBD resources:<\/p>\n<ul>\n<li><strong>drbdadm<\/strong>: the high-level administration tool of DRBD.<\/li>\n<li><strong>drbdsetup<\/strong>: a lower-level administration tool used to attach DRBD devices to their backing block devices, to set up DRBD device pairs to mirror their backing block devices, and to inspect the configuration of running DRBD devices.<\/li>\n<li><strong>drbdmeta<\/strong>: the metadata management tool.<\/li>\n<\/ul>\n<p>After adding all the initial resource configurations, we must create the device metadata, on both nodes:<\/p>\n<pre># drbdadm create-md test\r\n<\/pre>\n<div id=\"attachment_31535\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/initialize-meta-data-storage-.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31535\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/initialize-meta-data-storage-.png\" alt=\"Initialize Meta Data Storage\" width=\"574\" height=\"291\" 
data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Initialize Meta Data Storage<\/p>\n<\/div>\n<p>Next, we should enable the\u00a0<strong>resource<\/strong>. This attaches the resource to its backing device, sets the replication parameters, and connects the resource to its peer:<\/p>\n<pre># drbdadm up test\r\n<\/pre>\n<p>Now if you run the\u00a0<a href=\"https:\/\/www.tecmint.com\/commands-to-collect-system-and-hardware-information-in-linux\/\" target=\"_blank\" rel=\"noopener\">lsblk command<\/a>, you will notice that the DRBD device\/volume\u00a0<strong>drbd0<\/strong>\u00a0is associated with the backing device\u00a0<strong>\/dev\/sdb1<\/strong>:<\/p>\n<pre># lsblk\r\n<\/pre>\n<div id=\"attachment_31536\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-block-devices.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31536\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-block-devices.png\" alt=\"List Block Devices\" width=\"652\" height=\"230\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">List Block Devices<\/p>\n<\/div>\n<p>To disable the resource, run:<\/p>\n<pre># drbdadm down test\r\n<\/pre>\n<p>To check the resource status, run the following command (note that the\u00a0<strong>Inconsistent\/Inconsistent<\/strong>\u00a0disk state is expected at this point):<\/p>\n<pre># drbdadm status test\r\nOR\r\n# drbdsetup status test --verbose --statistics \t# for a more detailed status\r\n<\/pre>\n<div id=\"attachment_31537\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/check-resource-status-on-both-nodes.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31537\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/check-resource-status-on-both-nodes.png\" sizes=\"auto, (max-width: 802px) 100vw, 802px\" 
srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/check-resource-status-on-both-nodes.png 802w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/check-resource-status-on-both-nodes-768x336.png 768w\" alt=\"Check Resource Status on Nodes\" width=\"802\" height=\"351\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Resource Status on Nodes<\/p>\n<\/div>\n<h3>Step 6: Setting the Primary Resource\/Source of Initial Device Synchronization<\/h3>\n<p>At this stage,\u00a0<strong>DRBD<\/strong>\u00a0is ready for operation. We now need to tell it which node should be used as the source of the initial device synchronization.<\/p>\n<p>Run the following command on only one node to start the initial full synchronization:<\/p>\n<pre># drbdadm primary --force test\r\n# drbdadm status test\r\n<\/pre>\n<div id=\"attachment_31539\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/set-primary-node-for-initial-device-sync.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31539\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/set-primary-node-for-initial-device-sync.png\" alt=\"Set Primary Node for Initial Device\" width=\"662\" height=\"173\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Set Primary Node for Initial Device<\/p>\n<\/div>\n<p>Once the synchronization is complete, the status of both disks should be\u00a0<strong>UpToDate<\/strong>.<\/p>\n<h3>Step 7: Testing DRBD Setup<\/h3>\n<p>Finally, we need to test if the DRBD device will work well for replicated data storage. 
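Rather than reading the status by eye, the UpToDate check can be scripted. A sketch, run here against a captured sample of status output (the sample text is an assumption standing in for real drbdadm status test output on your nodes):

```shell
# Check drbdadm-style status output for the UpToDate disk states.
# "sample_status" stands in for the output of `drbdadm status test`;
# on a real node you would pipe that command into the same grep, e.g.:
#   drbdadm status test | grep -q 'disk:UpToDate'
sample_status='test role:Primary
  disk:UpToDate
  server1.tecmint.lan role:Secondary
    peer-disk:UpToDate'

if printf '%s\n' "$sample_status" | grep -q 'peer-disk:UpToDate'; then
    echo "peer disk is UpToDate"
else
    echo "peer disk is still syncing" >&2
fi
```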
Remember, we used an empty disk volume; therefore we must create a filesystem on the device, and mount it, to test whether we can use it for replicated data storage.<\/p>\n<p>We can create a filesystem on the device with the following command, on the node where we started the initial full synchronization (i.e. the node where the resource holds the primary role):<\/p>\n<pre># mkfs -t ext4 \/dev\/drbd0 \r\n<\/pre>\n<div id=\"attachment_31540\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/make-filesystem-type-on-drbd-volume.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31540\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/make-filesystem-type-on-drbd-volume.png\" sizes=\"auto, (max-width: 822px) 100vw, 822px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/make-filesystem-type-on-drbd-volume.png 822w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/make-filesystem-type-on-drbd-volume-768x428.png 768w\" alt=\"Make Filesystem on Drbd Volume\" width=\"822\" height=\"458\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Make Filesystem on Drbd Volume<\/p>\n<\/div>\n<p>Then mount it as shown (you can give the mount point an appropriate name):<\/p>\n<pre># mkdir -p \/mnt\/DRDB_PRI\/\r\n# mount \/dev\/drbd0 \/mnt\/DRDB_PRI\/\r\n<\/pre>\n<p>Now copy or create some files in the above mount point and do a long listing using the\u00a0<a href=\"https:\/\/www.tecmint.com\/tag\/linux-ls-command\/\" target=\"_blank\" rel=\"noopener\">ls command<\/a>:<\/p>\n<pre># cd \/mnt\/DRDB_PRI\/\r\n# ls -l \r\n<\/pre>\n<div id=\"attachment_31541\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-contents-of-drbd-volume-on-primary-node.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31541\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-contents-of-drbd-volume-on-primary-node.png\" sizes=\"auto, (max-width: 852px) 100vw, 852px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-contents-of-drbd-volume-on-primary-node.png 852w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/list-contents-of-drbd-volume-on-primary-node-768x105.png 768w\" alt=\"List Contents of Drbd Primary Volume\" width=\"852\" height=\"116\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">List Contents of Drbd Primary Volume<\/p>\n<\/div>\n<p>Next,\u00a0<strong>unmount<\/strong>\u00a0the device (ensure that the mount point is no longer in use, and change out of the directory after unmounting it to prevent any errors), then change the role of the node from\u00a0<strong>primary<\/strong>\u00a0to\u00a0<strong>secondary<\/strong>:<\/p>\n<pre># umount \/mnt\/DRDB_PRI\/\r\n# cd\r\n# drbdadm secondary test\r\n<\/pre>\n<p>On the other node (which has the resource with a secondary role), make it primary, then mount the device on it and perform a long listing of the mount point. 
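Beyond eyeballing the listing, checksums make this verification rigorous: record them on the primary before the switchover, then verify them on the peer after mounting. A sketch, using a temporary directory as a stand-in for the \/mnt\/DRDB_PRI\/ and \/mnt\/DRDB_SEC\/ mount points used in this article:

```shell
# Record checksums of the test files (on the primary, inside the mounted
# DRBD volume, before unmounting; a temporary directory stands in for
# /mnt/DRDB_PRI/ here so the sketch can run anywhere).
DIR="$(mktemp -d)"
echo "hello drbd" > "$DIR/file1"
echo "more data"  > "$DIR/file2"
( cd "$DIR" && sha256sum file1 file2 > SHA256SUMS )

# After promoting and mounting on the peer, the same check should pass
# (run inside the mount point, /mnt/DRDB_SEC/ in this article):
( cd "$DIR" && sha256sum -c SHA256SUMS )   # prints "file1: OK" and "file2: OK"
```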
If the setup is working fine, all the files stored in the volume should be there:<\/p>\n<pre># drbdadm primary test\r\n# mkdir -p \/mnt\/DRDB_SEC\/\r\n# mount \/dev\/drbd0 \/mnt\/DRDB_SEC\/\r\n# cd \/mnt\/DRDB_SEC\/\r\n# ls  -l \r\n<\/pre>\n<div id=\"attachment_31542\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/test-drbd-setup-is-working-on-secondary-node.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-31542\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/test-drbd-setup-is-working-on-secondary-node.png\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/test-drbd-setup-is-working-on-secondary-node.png 862w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2019\/01\/test-drbd-setup-is-working-on-secondary-node-768x273.png 768w\" alt=\"Test DRBD Setup Working on Secondary Node\" width=\"862\" height=\"306\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Test DRBD Setup Working on Secondary Node<\/p>\n<\/div>\n<p>For more information, see the man pages of the user space administration tools:<\/p>\n<pre># man drbdadm\r\n# man drbdsetup\r\n# man drbdmeta\r\n<\/pre>\n<p><strong>Reference<\/strong>:\u00a0<a href=\"https:\/\/docs.linbit.com\/docs\/users-guide-8.4\/\" target=\"_blank\" rel=\"nofollow noopener\">The DRBD User\u2019s Guide<\/a>.<\/p>\n<h5>Summary<\/h5>\n<p><strong>DRBD<\/strong>\u00a0is extremely flexible and versatile, which makes it a storage replication solution suitable for adding HA to just about any application. 
In this article, we have shown how to install\u00a0<strong>DRBD<\/strong>\u00a0in\u00a0<strong>CentOS 7<\/strong>\u00a0and briefly demonstrated how to use it to replicate storage.<\/p>\n<p><a href=\"https:\/\/www.tecmint.com\/setup-drbd-storage-replication-on-centos-7\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The\u00a0DRBD\u00a0(stands for\u00a0Distributed Replicated Block Device) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the content of block devices such as hard disks, partitions, logical volumes etc. between servers. It involves a copy of data on two storage devices, such that if one fails, the data on the other can be &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/01\/14\/how-to-setup-drbd-to-replicate-storage-on-two-centos-7-servers\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;How to Setup DRBD to Replicate Storage on Two CentOS 7 
Servers&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-8147","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/8147","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/comments?post=8147"}],"version-history":[{"count":2,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/8147\/revisions"}],"predecessor-version":[{"id":8654,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/8147\/revisions\/8654"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/media?parent=8147"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/categories?post=8147"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/tags?post=8147"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}