{"id":11850,"date":"2019-03-17T14:44:00","date_gmt":"2019-03-17T14:44:00","guid":{"rendered":"http:\/\/www.appservgrid.com\/paw92\/?p=11850"},"modified":"2019-03-17T14:44:00","modified_gmt":"2019-03-17T14:44:00","slug":"introduction-to-raid-concepts-of-raid-and-raid-levels-in-linux","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/03\/17\/introduction-to-raid-concepts-of-raid-and-raid-levels-in-linux\/","title":{"rendered":"Introduction to RAID, Concepts of RAID and RAID Levels in Linux"},"content":{"rendered":"<p><b>RAID<\/b>\u00a0is a Redundant Array of Inexpensive disks, but nowadays it is called Redundant Array of Independent drives. Earlier it is used to be very costly to buy even a smaller size of disk, but nowadays we can buy a large size of disk with the same amount like before. Raid is just a collection of disks in a pool to become a logical volume.<\/p>\n<div id=\"attachment_9223\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9223\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID.jpg\" alt=\"RAID in Linux\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Understanding RAID Setups in Linux<\/p>\n<\/div>\n<p>Raid contains groups or sets or Arrays. A combine of drivers make a group of disks to form a RAID Array or RAID set. It can be a minimum of 2 number of disk connected to a raid controller and make a logical volume or more drives can be in a group. Only one Raid level can be applied in a group of disks. Raid are used when we need excellent performance. According to our selected raid level, performance will differ. 
It also saves our data through fault tolerance &amp; high availability.<\/p>\n<p>This series, titled \u201cPreparation for Setting up RAID\u201d, runs through Parts 1-9 and covers the following topics.<\/p>\n<div id=\"exam_announcement\"><b>Part 1<\/b>:\u00a0<b>Introduction to RAID, Concepts of RAID and RAID Levels<\/b><\/div>\n<div id=\"exam_announcement\"><b>Part 2<\/b>:\u00a0How to setup RAID0 (Stripe) in Linux<\/div>\n<div id=\"exam_announcement\"><b>Part 3<\/b>:\u00a0How to setup RAID1 (Mirror) in Linux<\/div>\n<div id=\"exam_announcement\"><b>Part 4<\/b>:\u00a0How to setup RAID5 (Striping with Distributed Parity) in Linux<\/div>\n<div id=\"exam_announcement\"><b>Part 5<\/b>:\u00a0How to setup RAID6 (Striping with Double Distributed Parity) in Linux<\/div>\n<div id=\"exam_announcement\"><b>Part 6<\/b>:\u00a0Setting Up RAID 10 or 1+0 (Nested) in Linux<\/div>\n<div id=\"exam_announcement\"><b>Part 7<\/b>:\u00a0Growing an Existing RAID Array and Removing Failed Disks in Raid<\/div>\n<div id=\"exam_announcement\"><b>Part 8<\/b>:\u00a0How to Recover Data and Rebuild Failed Software RAID\u2019s<\/div>\n<div id=\"exam_announcement\"><b>Part 9<\/b>:\u00a0How to Manage Software RAID\u2019s in Linux with \u2018Mdadm\u2019 Tool<\/div>\n<p>This is Part 1 of the 9-tutorial series; here we will cover the introduction to RAID, the concepts of RAID, and the RAID levels required for setting up RAID in Linux.<\/p>\n<h3>Software RAID and Hardware RAID<\/h3>\n<p><b>Software RAID<\/b>\u00a0has lower performance, because it consumes resources from the host. The RAID software has to be loaded in order to read data from software RAID volumes, and the OS must boot before the RAID software can load. Software RAID needs no physical hardware, so it is a zero-cost investment.<\/p>\n<p><b>Hardware RAID<\/b>\u00a0has high performance. A dedicated RAID controller is physically built using a PCI Express card, so it won\u2019t use the host\u2019s resources. 
Hardware controllers have NVRAM cache for reads and writes. The cache is preserved during a rebuild even on power failure, held up by a battery backup. At large scale, a very costly investment is needed.<\/p>\n<p>A hardware RAID card looks like the one below:<\/p>\n<div id=\"attachment_9172\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Hardware-RAID.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9172\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Hardware-RAID.jpg\" alt=\"Hardware RAID\" width=\"500\" height=\"329\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Hardware RAID<\/p>\n<\/div>\n<h4>Featured Concepts of RAID<\/h4>\n<ol>\n<li><b>Parity<\/b>\u00a0regenerates lost content from saved parity information. RAID 5 and RAID 6 are based on parity.<\/li>\n<li><b>Stripe<\/b>\u00a0spreads data across multiple disks, so no single disk holds the full data. If we use two disks, half of our data will be on each disk.<\/li>\n<li><b>Mirroring<\/b>\u00a0is used in RAID 1 and RAID 10. Mirroring keeps a copy of the same data; in RAID 1 the same content is saved to the other disk too.<\/li>\n<li><b>Hot spare<\/b>\u00a0is a spare drive in our server which can automatically replace a failed drive. If any one of the drives in our array fails, the hot spare takes over and the array rebuilds automatically.<\/li>\n<li><b>Chunks<\/b>\u00a0are the units of data striped to each disk, starting from a minimum of 4KB. Choosing a suitable chunk size can improve I\/O performance.<\/li>\n<\/ol>\n<p>RAID comes in various levels. 
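The stripe concept from the list above can be illustrated with a few lines of shell that round-robin the characters of a string across two files standing in for member disks. This is purely an illustration under assumed names (`disk0.txt`, `disk1.txt` are hypothetical); real RAID 0 stripes fixed-size chunks, not characters:

```shell
# Illustration only: round-robin "striping" of a string across two files
# that stand in for two member disks. Real RAID 0 works on chunks.
rm -f disk0.txt disk1.txt
data="TECMINT"
i=0
while [ "$i" -lt "${#data}" ]; do
  c=$(printf '%s' "$data" | cut -c $((i + 1)))   # i-th character (cut is 1-based)
  printf '%s' "$c" >> "disk$((i % 2)).txt"       # alternate between the two "disks"
  i=$((i + 1))
done
cat disk0.txt   # TCIT
echo
cat disk1.txt   # EMN
echo
```

Neither file alone can reconstruct the original string, which is exactly why a single-disk failure destroys a RAID 0 array.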
Here we will look only at the RAID levels most used in real environments.<\/p>\n<ol>\n<li><b>RAID0<\/b>\u00a0= Striping<\/li>\n<li><b>RAID1<\/b>\u00a0= Mirroring<\/li>\n<li><b>RAID5<\/b>\u00a0= Single Disk Distributed Parity<\/li>\n<li><b>RAID6<\/b>\u00a0= Double Disk Distributed Parity<\/li>\n<li><b>RAID10<\/b>\u00a0= Combination of Mirror &amp; Stripe (Nested RAID)<\/li>\n<\/ol>\n<p>RAID is managed using the\u00a0<b>mdadm<\/b>\u00a0package in most Linux distributions. Let us take a brief look at each RAID level.<\/p>\n<h4>RAID 0 (or) Striping<\/h4>\n<p>Striping has excellent performance. In RAID 0 (striping), data is written to the disks in a shared fashion: half of the content ends up on one disk and the other half on the second disk.<\/p>\n<p>Let us assume we have two disk drives. If we write the data \u201c<b>TECMINT<\/b>\u201d to the logical volume, \u2018<b>T<\/b>\u2018 is saved on the first disk, \u2018<b>E<\/b>\u2018 on the second disk, \u2018<b>C<\/b>\u2018 on the first disk again, \u2018<b>M<\/b>\u2018 on the second disk, and so on in a round-robin process.<\/p>\n<p>In this situation, if any one of the drives fails we lose all our data, because the half of the data on the surviving disk cannot be used to rebuild the array. In terms of write speed and performance, however, RAID 0 is excellent. We need a minimum of two disks to create a RAID 0 (striping) array. If your data is valuable, don\u2019t use this RAID level.<\/p>\n<ol>\n<li>High Performance.<\/li>\n<li>Zero capacity loss in RAID 0.<\/li>\n<li>Zero fault tolerance.<\/li>\n<li>Both write and read performance are good.<\/li>\n<\/ol>\n<h4>RAID 1 (or) Mirroring<\/h4>\n<p>Mirroring has good performance. Mirroring keeps a copy of the same data we have. 
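The capacity trade-off of mirroring can be checked with quick shell arithmetic. As a hedged sketch, take two 2TB drives (the sizes are just an example): every block is written twice, so the logical drive exposes only the size of a single member.

```shell
# RAID 1 capacity: raw space doubles, but the logical drive exposes only
# the size of one member, because every block is written to both disks.
disks=2
size_tb=2                       # size of each member drive, in TB (example value)
raw=$(( disks * size_tb ))      # total physical disk bought
usable=$size_tb                 # what the logical mirror presents
echo "raw=${raw}TB usable=${usable}TB"
```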
Assume we have two 2TB hard drives; in total we have 4TB, but with the drives behind the RAID controller forming a logical drive, we can see only a 2TB logical drive.<\/p>\n<p>Whenever we save data, it is written to both 2TB drives. A minimum of two drives is needed to create a RAID 1 or mirror. If a disk failure occurs, we can rebuild the RAID set by replacing the failed disk with a new one. If any one disk fails in RAID 1, we can get the data from the other one, since it holds a copy of the same content. So there is zero data loss.<\/p>\n<ol>\n<li>Good Performance.<\/li>\n<li>Half of the total capacity is lost.<\/li>\n<li>Full Fault Tolerance.<\/li>\n<li>Rebuilds are fast.<\/li>\n<li>Write performance is slow.<\/li>\n<li>Read performance is good.<\/li>\n<li>Can be used for operating systems and small-scale databases.<\/li>\n<\/ol>\n<h4>RAID 5 (or) Distributed Parity<\/h4>\n<p>RAID 5 is mostly used at the enterprise level. RAID 5 works by the distributed parity method: parity information is used to rebuild lost data from the information left on the remaining good drives. This protects our data against drive failure.<\/p>\n<p>Assume we have four 1TB drives. If one drive fails, we can rebuild the replacement drive from the parity information, which is stored across all four drives: 256GB on each drive holds parity, and the remaining 768GB on each drive is available for user data. 
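The parity split just described can be reproduced with shell arithmetic (taking 1TB = 1024GB, as the 256GB/768GB figures imply). In RAID 5, one drive's worth of space goes to parity, spread evenly across all members:

```shell
# RAID 5 parity accounting for N drives: one drive's worth of space is
# consumed by parity, distributed evenly across all members.
disks=4
size_gb=1024                                      # each drive, in GB (1TB = 1024GB)
parity_per_drive=$(( size_gb / disks ))           # parity share per drive
data_per_drive=$(( size_gb - parity_per_drive ))  # user data per drive
usable_total=$(( data_per_drive * disks ))        # usable space in the array
echo "parity/drive=${parity_per_drive}GB data/drive=${data_per_drive}GB usable=${usable_total}GB"
```

This prints 256GB of parity and 768GB of data per drive, matching the figures above, for 3TB usable out of 4TB raw.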
RAID 5 can survive a single drive failure; if more than one drive fails, data is lost.<\/p>\n<ol>\n<li>Excellent Performance.<\/li>\n<li>Read speed is extremely good.<\/li>\n<li>Write speed is average, and slow if we don\u2019t use a hardware RAID controller.<\/li>\n<li>Rebuilds from the parity information on all drives.<\/li>\n<li>Full Fault Tolerance.<\/li>\n<li>One disk\u2019s worth of space is used for parity.<\/li>\n<li>Can be used for file servers, web servers, and important backups.<\/li>\n<\/ol>\n<h4>RAID 6 Two Parity Distributed Disk<\/h4>\n<p>RAID 6 is the same as RAID 5, but with two distributed parity blocks. It is mostly used in large arrays. We need a minimum of four drives; even if two drives fail, we can rebuild the data after replacing them with new drives.<\/p>\n<p>It is slower than RAID 5, because it has to write two sets of parity at the same time; speed is average when using a hardware RAID controller. If we have six 1TB hard drives, four drives\u2019 worth of space is used for data and two drives\u2019 worth for parity.<\/p>\n<ol>\n<li>Poor Performance.<\/li>\n<li>Read performance is good.<\/li>\n<li>Write performance is poor if we are not using a hardware RAID controller.<\/li>\n<li>Rebuilds from two parity drives.<\/li>\n<li>Full Fault Tolerance.<\/li>\n<li>Two disks\u2019 worth of space is used for parity.<\/li>\n<li>Can be used in large arrays.<\/li>\n<li>Can be used for backups and video streaming at large scale.<\/li>\n<\/ol>\n<h4>RAID 10 (or) Mirror &amp; Stripe<\/h4>\n<p>RAID 10 can be called 1+0 or 0+1. It does the work of both mirroring &amp; striping. In RAID 10 the mirror comes first and the stripe second; in RAID 01 the stripe comes first and the mirror second. RAID 10 is better than RAID 01.<\/p>\n<p>Assume we have four drives. 
When I write data to the logical volume, it is saved across all four drives using the mirror and stripe methods together.<\/p>\n<p>If I write the data \u201c<b>TECMINT<\/b>\u201d in RAID 10, it is saved as follows. First \u201c<b>T<\/b>\u201d is written to both disks of one mirror pair, then \u201c<b>E<\/b>\u201d to both disks of the other pair, and this repeats for all data written. Every piece of data gets a copy on another disk.<\/p>\n<p>At the same time it uses the RAID 0 method: \u201c<b>T<\/b>\u201d goes to the first pair and \u201c<b>E<\/b>\u201d to the second pair, then again \u201c<b>C<\/b>\u201d to the first pair and \u201c<b>M<\/b>\u201d to the second.<\/p>\n<ol>\n<li>Good read and write performance.<\/li>\n<li>Half of the total capacity is lost.<\/li>\n<li>Fault Tolerance.<\/li>\n<li>Fast rebuild by copying data from the mirror.<\/li>\n<li>Can be used as database storage for high performance and availability.<\/li>\n<\/ol>\n<h3>Conclusion<\/h3>\n<p>In this article we have seen what RAID is and which RAID levels are used most in real environments. I hope this write-up has given you the basic knowledge of RAID that is required before setting one up.<\/p>\n<p>In the upcoming articles I\u2019m going to cover how to set up and create RAID arrays at various levels, how to grow a RAID group (array), how to troubleshoot failed drives, and much more.<\/p>\n<h1 class=\"post-title\">Creating Software RAID0 (Stripe) on \u2018Two Devices\u2019 Using \u2018mdadm\u2019 Tool in Linux \u2013 Part 2<\/h1>\n<p><strong>RAID<\/strong>\u00a0is a Redundant Array of Inexpensive Disks, used for high availability and reliability in large-scale environments, where data needs more protection than in normal use. RAID is just a collection of disks in a pool forming a logical volume, and it contains an array. 
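The combined mirror-plus-stripe behaviour of RAID 10 can be sketched in shell, with four files standing in for the four disks (the names `d0`..`d3` are hypothetical, and real RAID 10 operates on chunks, not characters):

```shell
# Illustration only: RAID 10 as a stripe across two mirrored pairs.
# Even-positioned characters go to pair A (d0+d1), odd ones to pair B (d2+d3).
rm -f d0 d1 d2 d3
data="TECMINT"
i=0
while [ "$i" -lt "${#data}" ]; do
  c=$(printf '%s' "$data" | cut -c $((i + 1)))
  if [ $(( i % 2 )) -eq 0 ]; then
    printf '%s' "$c" >> d0; printf '%s' "$c" >> d1   # mirror pair A
  else
    printf '%s' "$c" >> d2; printf '%s' "$c" >> d3   # mirror pair B
  fi
  i=$((i + 1))
done
cat d0; echo   # TCIT  (identical to d1)
cat d2; echo   # EMN   (identical to d3)
```

Losing one disk of a pair costs nothing, because its twin holds the same half of the stripe; that is the fault tolerance striping alone lacks.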
A combination of drives makes up an array, also called a set (or group).<\/p>\n<p>RAID can be created with a minimum of two disks connected to a RAID controller to make a logical volume, and more drives can be added to an array according to the chosen RAID level. RAID set up without physical hardware is called software RAID; software RAID is often nicknamed poor man\u2019s RAID.<\/p>\n<div id=\"attachment_9289\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid0-in-Linux.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9289\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid0-in-Linux.jpg\" alt=\"Setup RAID0 in Linux\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Setup RAID0 in Linux<\/p>\n<\/div>\n<p>The main purpose of RAID is to protect data from a single point of failure: if we store data on a single disk and it fails, there is no chance of getting our data back. To prevent such data loss we need a fault-tolerance method, so we use a collection of disks to form a RAID set.<\/p>\n<h4>What is Stripe in RAID 0?<\/h4>\n<p>Striping writes data across multiple disks at the same time by dividing the content. Assume we have two disks: if we save content to the logical volume, it is saved on both physical disks, with the content divided between them.\u00a0<strong>RAID 0<\/strong>\u00a0is used for better performance, but we can\u2019t get the data back if one of the drives fails. So it isn\u2019t good practice to keep important files only on RAID 0; if you use RAID 0 logical volumes, keep your important files backed up elsewhere.<\/p>\n<ol>\n<li>RAID 0 has High Performance.<\/li>\n<li>Zero capacity loss in RAID 0. 
No space is wasted.<\/li>\n<li>Zero fault tolerance (the data can\u2019t be recovered if any one disk fails).<\/li>\n<li>Both write and read performance are excellent.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<p>The minimum number of disks needed to create RAID 0 is\u00a0<strong>2<\/strong>; you can add more disks, commonly in even counts such as 2, 4, 6 or 8. If you have a physical RAID card with enough ports, you can add more disks.<\/p>\n<p>Here we are not using hardware RAID; this setup depends only on software RAID. If we have a physical hardware RAID card, we can access it from its utility\u00a0<strong>UI<\/strong>. Some motherboards have a RAID feature built in by default, whose\u00a0<strong>UI<\/strong>\u00a0can be reached using the\u00a0<b>Ctrl+I<\/b>\u00a0keys.<\/p>\n<p>If you\u2019re new to RAID setups, please read our earlier article, where we\u2019ve covered a basic introduction to RAID.<\/p>\n<ol>\n<li><a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Introduction to RAID and RAID Concepts<\/a><\/li>\n<\/ol>\n<h5>My Server Setup<\/h5>\n<pre>Operating System :\tCentOS 6.5 Final\r\nIP Address\t :\t192.168.0.225\r\nTwo Disks\t :\t20 GB each\r\n<\/pre>\n<p>This article is Part 2 of a 9-tutorial RAID series. In this part, we are going to see how to create and set up software\u00a0<strong>RAID0<\/strong>\u00a0(striping) on Linux systems or servers using two\u00a0<strong>20GB<\/strong>\u00a0disks named\u00a0<b>sdb<\/b>\u00a0and\u00a0<b>sdc<\/b>.<\/p>\n<h3>Step 1: Updating System and Installing mdadm for Managing RAID<\/h3>\n<p><strong>1.<\/strong>\u00a0Before setting up RAID0 in Linux, let\u2019s do a system update and then install the \u2018<strong>mdadm<\/strong>\u2018 package. 
mdadm is a small program that allows us to configure and manage RAID devices in Linux.<\/p>\n<pre># yum clean all &amp;&amp; yum update\r\n# yum install mdadm -y\r\n<\/pre>\n<div id=\"attachment_9275\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/install-mdadm-in-linux.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9275\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/install-mdadm-in-linux.png\" alt=\"install mdadm in linux\" width=\"497\" height=\"222\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Install mdadm Tool<\/p>\n<\/div>\n<h3>Step 2: Verify Attached Two 20GB Drives<\/h3>\n<p><strong>2.<\/strong>\u00a0Before creating RAID 0, verify that the two attached hard drives are detected, using the following command.<\/p>\n<pre># ls -l \/dev | grep sd\r\n<\/pre>\n<div id=\"attachment_9276\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Hard-Drives.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9276\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Hard-Drives.png\" alt=\"Check Hard Drives in Linux\" width=\"453\" height=\"164\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Hard Drives<\/p>\n<\/div>\n<p><strong>3.<\/strong>\u00a0Once the new hard drives are detected, check whether the attached drives are already part of any existing RAID, with the help of the following \u2018mdadm\u2019 command.<\/p>\n<pre># mdadm --examine \/dev\/sd[b-c]\r\n<\/pre>\n<div id=\"attachment_9277\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Drives-using-RAID.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9277\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Drives-using-RAID.png\" alt=\"Check RAID Devices in Linux\" width=\"421\" height=\"127\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check RAID Devices<\/p>\n<\/div>\n<p>In the above output, we come to know that none of the RAID have been applied to these two\u00a0<b>sdb<\/b>\u00a0and\u00a0<b>sdc<\/b>\u00a0drives.<\/p>\n<h3>Step 3: Creating Partitions for RAID<\/h3>\n<p><strong>4.<\/strong>\u00a0Now create\u00a0<b>sdb<\/b>\u00a0and\u00a0<b>sdc<\/b>\u00a0partitions for raid, with the help of following fdisk command. Here, I will show how to create partition on\u00a0<b>sdb<\/b>\u00a0drive.<\/p>\n<pre># fdisk \/dev\/sdb\r\n<\/pre>\n<p>Follow below instructions for creating partitions.<\/p>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 for creating new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for Primary partition.<\/li>\n<li>Next select the partition number as\u00a0<strong>1<\/strong>.<\/li>\n<li>Give the default value by just pressing two times\u00a0<strong>Enter<\/strong>\u00a0key.<\/li>\n<li>Next press \u2018<strong>P<\/strong>\u2018 to print the defined partition.<\/li>\n<\/ol>\n<div id=\"attachment_9278\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Partitions-in-Linux.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9278\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Partitions-in-Linux-361x450.png\" sizes=\"auto, (max-width: 361px) 100vw, 361px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Partitions-in-Linux-361x450.png 361w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Partitions-in-Linux.png 542w\" alt=\"Create Partitions in Linux\" width=\"361\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create 
Partitions<\/p>\n<\/div>\n<p>Follow the instructions below to set the partition type to Linux raid auto.<\/p>\n<ol>\n<li>Type \u2018<strong>t<\/strong>\u2018 to change the partition type.<\/li>\n<li>Press \u2018<strong>L<\/strong>\u2018 to list all available type codes.<\/li>\n<li>Choose \u2018<strong>fd<\/strong>\u2018 for Linux raid auto and press Enter to apply.<\/li>\n<li>Then again use \u2018<strong>P<\/strong>\u2018 to print the changes we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9279\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9279\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Partitions-373x450.png\" sizes=\"auto, (max-width: 373px) 100vw, 373px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Partitions-373x450.png 373w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Partitions.png 664w\" alt=\"Create RAID Partitions\" width=\"373\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create RAID Partitions in Linux<\/p>\n<\/div>\n<p><strong>Note<\/strong>: Please follow the same instructions above to create a partition on the\u00a0<b>sdc<\/b>\u00a0drive now.<\/p>\n<p><strong>5.<\/strong>\u00a0After creating the partitions, verify that both drives are correctly defined for RAID using the following commands.<\/p>\n<pre># mdadm --examine \/dev\/sd[b-c]\r\n# mdadm --examine \/dev\/sd[b-c]1\r\n<\/pre>\n<div id=\"attachment_9280\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9280\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Partitions.png\" alt=\"Verify 
RAID Partitions\" width=\"502\" height=\"289\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify RAID Partitions<\/p>\n<\/div>\n<h3>Step 4: Creating RAID md Devices<\/h3>\n<p><strong>6.<\/strong>\u00a0Now create md device (i.e.\u00a0<strong>\/dev\/md0<\/strong>) and apply raid level using below command.<\/p>\n<pre># mdadm -C \/dev\/md0 -l raid0 -n 2 \/dev\/sd[b-c]1\r\n# mdadm --create \/dev\/md0 --level=stripe --raid-devices=2 \/dev\/sd[b-c]1\r\n<\/pre>\n<ol>\n<li><b>-C<\/b>\u00a0\u2013 create<\/li>\n<li><b>-l<\/b>\u00a0\u2013 level<\/li>\n<li><b>-n<\/b>\u00a0\u2013 No of raid-devices<\/li>\n<\/ol>\n<p><strong>7.<\/strong>\u00a0Once md device has been created, now verify the status of\u00a0<strong>RAID Level<\/strong>,\u00a0<strong>Devices<\/strong>\u00a0and\u00a0<strong>Array<\/strong>\u00a0used, with the help of following series of commands as shown.<\/p>\n<pre># cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9281\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Level.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9281\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Level.png\" alt=\"Verify RAID Level\" width=\"508\" height=\"266\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify RAID Level<\/p>\n<\/div>\n<pre># mdadm -E \/dev\/sd[b-c]1\r\n<\/pre>\n<div id=\"attachment_9282\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9282\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Device-332x450.png\" sizes=\"auto, (max-width: 332px) 100vw, 332px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Device-332x450.png 332w, 
https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Device.png 643w\" alt=\"Verify RAID Device\" width=\"332\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify RAID Device<\/p>\n<\/div>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9283\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9283\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Array-593x450.png\" sizes=\"auto, (max-width: 593px) 100vw, 593px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Array-593x450.png 593w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Array.png 663w\" alt=\"Verify RAID Array\" width=\"593\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify RAID Array<\/p>\n<\/div>\n<h3>Step 5: Assigning RAID Devices to Filesystem<\/h3>\n<p><strong>8.<\/strong>\u00a0Create an ext4 filesystem for the RAID device\u00a0<strong>\/dev\/md0<\/strong>\u00a0and mount it under\u00a0<strong>\/mnt\/raid0<\/strong>.<\/p>\n<pre># mkfs.ext4 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9284\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-ext4-Filesystem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9284\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-ext4-Filesystem-572x450.png\" sizes=\"auto, (max-width: 572px) 100vw, 572px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-ext4-Filesystem-572x450.png 572w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-ext4-Filesystem.png 636w\" alt=\"Create ext4 Filesystem in Linux\" width=\"572\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p 
class=\"wp-caption-text\">Create ext4 Filesystem<\/p>\n<\/div>\n<p><strong>9.<\/strong>\u00a0Once ext4 filesystem has been created for Raid device, now create a mount point directory (i.e.<strong>\u00a0\/mnt\/raid0<\/strong>) and mount the device\u00a0<strong>\/dev\/md0<\/strong>\u00a0under it.<\/p>\n<pre># mkdir \/mnt\/raid0\r\n# mount \/dev\/md0 \/mnt\/raid0\/\r\n<\/pre>\n<p><strong>10.<\/strong>\u00a0Next, verify that the device\u00a0<strong>\/dev\/md0<\/strong>\u00a0is mounted under\u00a0<strong>\/mnt\/raid0<\/strong>\u00a0directory using\u00a0<strong>df<\/strong>\u00a0command.<\/p>\n<pre># df -h\r\n<\/pre>\n<p><strong>11.<\/strong>\u00a0Next, create a file called \u2018<strong>tecmint.txt<\/strong>\u2018 under the mount point\u00a0<strong>\/mnt\/raid0<\/strong>, add some content to the created file and view the content of a file and directory.<\/p>\n<pre># touch \/mnt\/raid0\/tecmint.txt\r\n# echo \"Hi everyone how you doing ?\" &gt; \/mnt\/raid0\/tecmint.txt\r\n# cat \/mnt\/raid0\/tecmint.txt\r\n# ls -l \/mnt\/raid0\/\r\n<\/pre>\n<div id=\"attachment_9285\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Mount-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9285\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Mount-Device-588x450.png\" sizes=\"auto, (max-width: 588px) 100vw, 588px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Mount-Device-588x450.png 588w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Mount-Device.png 602w\" alt=\"Verify Mount Device\" width=\"588\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Mount Device<\/p>\n<\/div>\n<p><strong>12.<\/strong>\u00a0Once you\u2019ve verified mount points, it\u2019s time to create an fstab entry in\u00a0<strong>\/etc\/fstab<\/strong>\u00a0file.<\/p>\n<pre># vim \/etc\/fstab\r\n<\/pre>\n<p>Add 
the following entry as described; it may vary according to your mount location and the filesystem you are using.<\/p>\n<pre>\/dev\/md0                \/mnt\/raid0              ext4    defaults         0 0\r\n<\/pre>\n<div id=\"attachment_9286\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Add-Device-to-Fstab.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9286\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Add-Device-to-Fstab-620x312.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Add-Device-to-Fstab-620x312.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Add-Device-to-Fstab.png 668w\" alt=\"Add Device to Fstab\" width=\"620\" height=\"312\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Add Device to Fstab<\/p>\n<\/div>\n<p><strong>13.<\/strong>\u00a0Run \u2018<strong>mount -a<\/strong>\u2018 to check whether there is any error in the fstab entry.<\/p>\n<pre># mount -av\r\n<\/pre>\n<div id=\"attachment_9287\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Errors-in-Fstab.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9287\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Errors-in-Fstab.png\" alt=\"Check Errors in Fstab\" width=\"594\" height=\"214\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Errors in Fstab<\/p>\n<\/div>\n<h3>Step 6: Saving RAID Configurations<\/h3>\n<p><strong>14.<\/strong>\u00a0Finally, save the RAID configuration to a file so that the configuration is kept for future use. 
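As a quick scripted sanity check of the array before saving its configuration, the device name and level can be pulled out of /proc/mdstat with awk. The sample text below is a hedged illustration of typical mdstat output for a RAID0 array (the block count is representative only); on a real server you would point awk at /proc/mdstat itself rather than at a sample file:

```shell
# Hedged illustration: mdstat.sample mimics typical /proc/mdstat output.
cat > mdstat.sample <<'EOF'
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      41910272 blocks super 1.2 512k chunks
EOF
# Field 1 of the "md" line is the device name, field 4 the RAID level.
awk '/^md/ {print $1, $4}' mdstat.sample   # md0 raid0
```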
Again we use the \u2018mdadm\u2019 command with the \u2018<strong>-s<\/strong>\u2018 (scan) and \u2018<strong>-v<\/strong>\u2018 (verbose) options as shown.<\/p>\n<pre># mdadm -E -s -v &gt;&gt; \/etc\/mdadm.conf\r\n# mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n# cat \/etc\/mdadm.conf\r\n<\/pre>\n<div id=\"attachment_9288\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-RAID-Configurations.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9288\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-RAID-Configurations-620x148.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-RAID-Configurations-620x148.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-RAID-Configurations.png 739w\" alt=\"Save RAID Configurations\" width=\"620\" height=\"148\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Save RAID Configurations<\/p>\n<\/div>\n<p>That\u2019s it. We have seen here how to configure RAID0 striping using two hard disks. In the next article, we will see how to set up\u00a0<strong>RAID1<\/strong>.<\/p>\n<h1 class=\"post-title\">Setting up RAID 1 (Mirroring) using \u2018Two Disks\u2019 in Linux \u2013 Part 3<\/h1>\n<p><strong>RAID Mirroring<\/strong>\u00a0means an exact clone (or mirror) of the same data written to two drives. 
A minimum of two disks is required in an array to create RAID1, and it is useful mainly when read performance or reliability matters more than storage capacity.<\/p>\n<div id=\"attachment_9336\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID1-in-Linux.jpeg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9336\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID1-in-Linux.jpeg\" alt=\"Create Raid1 in Linux\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Setup Raid1 in Linux<\/p>\n<\/div>\n<p>Mirrors are created to protect against data loss due to disk failure. Each disk in a mirror holds an exact copy of the data. When one disk fails, the same data can be retrieved from the other functioning disk. Moreover, the failed drive can often be replaced while the computer is running, without interrupting users.<\/p>\n<h3>Features of RAID 1<\/h3>\n<ol>\n<li>Mirroring has good performance.<\/li>\n<li>50% of the space is lost: if we have two 500GB disks, 1TB in total, mirroring will show us only 500GB.<\/li>\n<li>No data loss in mirroring if one disk fails, because we have the same content on both disks.<\/li>\n<li>Reading is faster than writing to the drive.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<p>A minimum of two disks is needed to create RAID 1, and more disks can be added in even multiples such as 2, 4, 6, 8. 
To add more disks, your system must have a physical RAID adapter (hardware card).<\/p>\n<p>Here we\u2019re using software RAID, not hardware RAID; if your system has a built-in hardware RAID card, you can access it from its utility UI or by pressing the\u00a0<strong>Ctrl+I<\/strong>\u00a0key.<\/p>\n<p><center><b>Read Also<\/b>:\u00a0<a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Basic Concepts of RAID in Linux<\/a><\/center><\/p>\n<h5>My Server Setup<\/h5>\n<pre>Operating System :\tCentOS 6.5 Final\r\nIP Address\t :\t192.168.0.226\r\nHostname\t :\trd1.tecmintlocal.com\r\nDisk 1 [20GB]\t :\t\/dev\/sdb\r\nDisk 2 [20GB]\t :\t\/dev\/sdc\r\n<\/pre>\n<p>This article will guide you through step-by-step instructions on how to set up a software\u00a0<strong>RAID 1<\/strong>\u00a0or\u00a0<strong>Mirror<\/strong>\u00a0using\u00a0<strong>mdadm<\/strong>\u00a0(which creates and manages raid) on the Linux platform. The same instructions also work on other Linux distributions such as RedHat, CentOS, Fedora, etc.<\/p>\n<h3>Step 1: Installing Prerequisites and Examining Drives<\/h3>\n<p><strong>1.<\/strong>\u00a0As said above, we\u2019re using the mdadm utility for creating and managing RAID in Linux. 
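You can first check whether mdadm is already available before installing anything; a small hedged sketch (the distribution-specific installation commands follow):

```shell
# Report whether the mdadm binary is already on PATH.
if command -v mdadm >/dev/null 2>&1; then
  status="present"
else
  status="missing"
fi
echo "mdadm is $status"
```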
So, let\u2019s install the\u00a0<strong>mdadm<\/strong>\u00a0software package on Linux using the yum or apt-get package manager tool.<\/p>\n<pre># yum install mdadm\t\t[on RedHat systems]\r\n# apt-get install mdadm \t[on Debian systems]\r\n<\/pre>\n<p><strong>2.<\/strong>\u00a0Once the \u2018<strong>mdadm<\/strong>\u2018 package has been installed, we need to examine our disk drives for any already configured raid using the following command.<\/p>\n<pre># mdadm -E \/dev\/sd[b-c]\r\n<\/pre>\n<div id=\"attachment_9320\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-on-Disks.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9320\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-on-Disks.png\" alt=\"Check RAID on Disks\" width=\"417\" height=\"118\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check RAID on Disks<\/p>\n<\/div>\n<p>As you can see from the above screen, no\u00a0<strong>super-block<\/strong>\u00a0has been detected yet, which means no RAID is defined.<\/p>\n<h3>Step 2: Drive Partitioning for RAID<\/h3>\n<p><strong>3.<\/strong>\u00a0As mentioned above, we\u2019re using the two partitions\u00a0<strong>\/dev\/sdb<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc<\/strong>\u00a0for creating RAID 1. 
Let\u2019s create partitions on these two drives using the \u2018<strong>fdisk<\/strong>\u2018 command and change the type to raid during partition creation.<\/p>\n<pre># fdisk \/dev\/sdb\r\n<\/pre>\n<h6>Follow the below instructions<\/h6>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 to create a new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for a Primary partition.<\/li>\n<li>Next select the partition number as\u00a0<strong>1<\/strong>.<\/li>\n<li>Accept the default full size by pressing the\u00a0<strong>Enter<\/strong>\u00a0key twice.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 to print the defined partition.<\/li>\n<li>Type \u2018<strong>t<\/strong>\u2018 to change the partition type.<\/li>\n<li>Press \u2018<strong>L<\/strong>\u2018 to list all available types.<\/li>\n<li>Choose \u2018<strong>fd<\/strong>\u2018 for Linux raid auto and press Enter to apply.<\/li>\n<li>Then again use \u2018<strong>p<\/strong>\u2018 to print the changes we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9321\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Disk-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9321\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Disk-Partitions-333x450.png\" sizes=\"auto, (max-width: 333px) 100vw, 333px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Disk-Partitions-333x450.png 333w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Disk-Partitions.png 621w\" alt=\"Create Disk Partitions\" width=\"333\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create Disk Partitions<\/p>\n<\/div>\n<p>After the \u2018<strong>\/dev\/sdb<\/strong>\u2018 partition has been created, follow the same instructions to create a new partition 
on\u00a0<strong>\/dev\/sdc<\/strong>\u00a0drive.<\/p>\n<pre># fdisk \/dev\/sdc\r\n<\/pre>\n<div id=\"attachment_9322\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Second-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9322\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Second-Partitions-333x450.png\" sizes=\"auto, (max-width: 333px) 100vw, 333px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Second-Partitions-333x450.png 333w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-Second-Partitions.png 622w\" alt=\"Create Second Partitions\" width=\"333\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create Second Partitions<\/p>\n<\/div>\n<p><strong>4.<\/strong>\u00a0Once both the partitions are created successfully, verify the changes on both\u00a0<strong>sdb<\/strong>\u00a0&amp;\u00a0<strong>sdc<\/strong>\u00a0drive using the same \u2018<strong>mdadm<\/strong>\u2018 command and also confirm the RAID type as shown in the following screen grabs.<\/p>\n<pre># mdadm -E \/dev\/sd[b-c]\r\n<\/pre>\n<div id=\"attachment_9323\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Partitions-Changes.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9323\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-Partitions-Changes.png\" alt=\"Verify Partitions Changes\" width=\"509\" height=\"202\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Partitions Changes<\/p>\n<\/div>\n<div id=\"attachment_9324\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Type.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9324\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Type.png\" alt=\"Check RAID Type\" width=\"411\" height=\"131\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check RAID Type<\/p>\n<\/div>\n<p><strong>Note<\/strong>: As you see in the above picture, there is no any defined RAID on the\u00a0<strong>sdb1<\/strong>\u00a0and\u00a0<strong>sdc1<\/strong>\u00a0drives so far, that\u2019s the reason we are getting as no\u00a0<b>super-blocks<\/b>\u00a0detected.<\/p>\n<h3>Step 3: Creating RAID1 Devices<\/h3>\n<p><strong>5.<\/strong>\u00a0Next create RAID1 Device called \u2018<strong>\/dev\/md0<\/strong>\u2018 using the following command and verity it.<\/p>\n<pre># mdadm --create \/dev\/md0 --level=mirror --raid-devices=2 \/dev\/sd[b-c]1\r\n# cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9325\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9325\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device-620x331.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device-620x331.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device.png 691w\" alt=\"Create RAID Device\" width=\"620\" height=\"331\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create RAID Device<\/p>\n<\/div>\n<p><strong>6.<\/strong>\u00a0Next check the raid devices type and raid array using following commands.<\/p>\n<pre># mdadm -E \/dev\/sd[b-c]1\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9326\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-type.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9326\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-type-504x450.png\" sizes=\"auto, (max-width: 504px) 100vw, 504px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-type-504x450.png 504w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-type.png 641w\" alt=\"Check RAID Device type\" width=\"504\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check RAID Device type<\/p>\n<\/div>\n<div id=\"attachment_9327\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9327\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-Array-535x450.png\" sizes=\"auto, (max-width: 535px) 100vw, 535px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-Array-535x450.png 535w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-RAID-Device-Array.png 633w\" alt=\"Check RAID Device Array\" width=\"535\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check RAID Device Array<\/p>\n<\/div>\n<p>From the above pictures, one can easily understand that raid1 have been created and using\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc1<\/strong>\u00a0partitions and also you can see the status as resyncing.<\/p>\n<h3>Step 4: Creating File System on RAID Device<\/h3>\n<p><strong>7.<\/strong>\u00a0Create file system using ext4 for\u00a0<strong>md0<\/strong>\u00a0and mount under\u00a0<strong>\/mnt\/raid1<\/strong>.<\/p>\n<pre># mkfs.ext4 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9328\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device-Filesystem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full 
wp-image-9328\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Create-RAID-Device-Filesystem.png\" alt=\"Create RAID Device Filesystem\" width=\"365\" height=\"162\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create RAID Device Filesystem<\/p>\n<\/div>\n<p><strong>8.<\/strong>\u00a0Next, mount the newly created filesystem under \u2018<strong>\/mnt\/raid1<\/strong>\u2018 and create some files and verify the contents under mount point.<\/p>\n<pre># mkdir \/mnt\/raid1\r\n# mount \/dev\/md0 \/mnt\/raid1\/\r\n# touch \/mnt\/raid1\/tecmint.txt\r\n# echo \"tecmint raid setups\" &gt; \/mnt\/raid1\/tecmint.txt\r\n<\/pre>\n<div id=\"attachment_9329\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Mount-RAID-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9329\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Mount-RAID-Device-432x450.png\" sizes=\"auto, (max-width: 432px) 100vw, 432px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Mount-RAID-Device-432x450.png 432w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Mount-RAID-Device.png 536w\" alt=\"Mount Raid Device\" width=\"432\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Mount Raid Device<\/p>\n<\/div>\n<p><strong>9.<\/strong>\u00a0To auto-mount RAID1 on system reboot, you need to make an entry in fstab file. 
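For reference, each fstab entry consists of six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick sketch splitting the entry used in this guide into those fields:

```shell
# Split the fstab line used in this guide into its six fields via word splitting.
entry='/dev/md0 /mnt/raid1 ext4 defaults 0 0'
set -- $entry
fields="device=$1 mount=$2 type=$3 options=$4 dump=$5 pass=$6"
echo "$fields"
```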
Open \u2018<strong>\/etc\/fstab<\/strong>\u2018 file and add the following line at the bottom of the file.<\/p>\n<pre>\/dev\/md0                \/mnt\/raid1              ext4    defaults        0 0\r\n<\/pre>\n<div id=\"attachment_9330\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID-Automount-Filesystem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9330\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID-Automount-Filesystem-620x288.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID-Automount-Filesystem-620x288.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/RAID-Automount-Filesystem.png 709w\" alt=\"Raid Automount Device\" width=\"620\" height=\"288\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid Automount Device<\/p>\n<\/div>\n<p><strong>10.<\/strong>\u00a0Run \u2018<strong>mount -a<\/strong>\u2018 to check whether there are any errors in fstab entry.<\/p>\n<pre># mount -av\r\n<\/pre>\n<div id=\"attachment_9331\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Errors-in-fstab.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9331\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Check-Errors-in-fstab.png\" alt=\"Check Errors in fstab\" width=\"598\" height=\"207\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Errors in fstab<\/p>\n<\/div>\n<p><strong>11.<\/strong>\u00a0Next, save the raid configuration manually to \u2018<strong>mdadm.conf<\/strong>\u2018 file using the below command.<\/p>\n<pre># mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n<\/pre>\n<div id=\"attachment_9332\" class=\"wp-caption aligncenter\">\n<p><a 
href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-Raid-Configuration.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9332\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-Raid-Configuration-620x140.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-Raid-Configuration-620x140.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Save-Raid-Configuration.png 715w\" alt=\"Save Raid Configuration\" width=\"620\" height=\"140\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Save Raid Configuration<\/p>\n<\/div>\n<p>The above configuration file is read by the system at the reboots and load the RAID devices.<\/p>\n<h3>Step 5: Verify Data After Disk Failure<\/h3>\n<p><strong>12.<\/strong>\u00a0Our main purpose is, even after any of hard disk fail or crash our data needs to be available. Let\u2019s see what will happen when any of disk disk is unavailable in array.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9333\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid-Device-Verify.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9333\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid-Device-Verify-587x450.png\" sizes=\"auto, (max-width: 587px) 100vw, 587px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid-Device-Verify-587x450.png 587w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Raid-Device-Verify.png 636w\" alt=\"Raid Device Verify\" width=\"587\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid Device Verify<\/p>\n<\/div>\n<p>In the above image, we can see there are 2 devices available in our RAID and Active Devices are 2. 
Now let us see what happens when a disk is unplugged (the\u00a0<strong>sdc<\/strong>\u00a0disk removed) or fails.<\/p>\n<pre># ls -l \/dev | grep sd\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9334\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Test-RAID-Devices.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9334\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Test-RAID-Devices-451x450.png\" sizes=\"auto, (max-width: 451px) 100vw, 451px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Test-RAID-Devices-451x450.png 451w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Test-RAID-Devices-150x150.png 150w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Test-RAID-Devices.png 629w\" alt=\"Test RAID Devices\" width=\"451\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Test RAID Devices<\/p>\n<\/div>\n<p>In the above image, you can see that one of our drives is missing; I unplugged one of the drives from my virtual machine. Now let us check our precious data.<\/p>\n<pre># cd \/mnt\/raid1\/\r\n# cat tecmint.txt\r\n<\/pre>\n<div id=\"attachment_9335\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Data.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9335\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/10\/Verify-RAID-Data.png\" alt=\"Verify RAID Data\" width=\"543\" height=\"300\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify RAID Data<\/p>\n<\/div>\n<p>As you can see, our data is still available. This demonstrates the advantage of RAID 1 (mirror). In the next article, we will see how to set up\u00a0<strong>RAID 5<\/strong>\u00a0striping with distributed parity. 
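As a preview of that next part: RAID 5 parity is the bitwise XOR of the data blocks, so any single missing block can be recomputed from the remaining ones. A toy sketch with two one-byte data "blocks" (the values 170 and 85 are arbitrary examples):

```shell
# Toy RAID 5 parity: parity = d1 XOR d2; a lost block is rebuilt the same way.
d1=170                      # data block on disk 1 (0xAA)
d2=85                       # data block on disk 2 (0x55)
parity=$(( d1 ^ d2 ))       # parity block stored on disk 3
rebuilt=$(( parity ^ d2 ))  # disk 1 fails: XOR parity with the surviving block
echo "parity=$parity rebuilt=$rebuilt"
```

Since 170 XOR 85 = 255, XORing that parity with the surviving block yields 170 again, recovering the lost data.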
Hopefully this helps you understand how RAID 1 (Mirror) works.<\/p>\n<h1 class=\"post-title\">Creating RAID 5 (Striping with Distributed Parity) in Linux \u2013 Part 4<\/h1>\n<p>In RAID 5, data is striped across multiple drives with distributed parity. Striping with distributed parity means both the parity information and the data are spread over multiple disks, which gives good data redundancy.<\/p>\n<div id=\"attachment_9760\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/setup-raid-5-in-linux.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9760\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/setup-raid-5-in-linux.jpg\" alt=\"Setup Raid 5 in CentOS\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Setup Raid 5 in Linux<\/p>\n<\/div>\n<p>This RAID level requires at least three hard drives. RAID 5 is used in large-scale production environments because it is cost effective and provides performance as well as redundancy.<\/p>\n<h4>What is Parity?<\/h4>\n<p><strong>Parity<\/strong>\u00a0is the simplest common method of detecting errors in data storage. Parity information is distributed across all disks: with 4 disks, the equivalent of one disk\u2019s space, spread over all of them, is used to store parity. If any one of the disks fails, we can still get the data back by rebuilding from the parity information after replacing the failed disk.<\/p>\n<h4>Pros and Cons of RAID 5<\/h4>\n<ol>\n<li>Gives better performance.<\/li>\n<li>Supports redundancy and fault tolerance.<\/li>\n<li>Supports hot spare options.<\/li>\n<li>Loses a single disk\u2019s capacity to parity information.<\/li>\n<li>No data loss if a single disk fails. 
We can rebuild from parity after replacing the failed disk.<\/li>\n<li>Suits transaction-oriented environments, as reads are faster.<\/li>\n<li>Due to parity overhead, writes are slower.<\/li>\n<li>Rebuilds take a long time.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<p>A minimum of 3 hard drives is required to create RAID 5, but you can add more disks only if you have a dedicated hardware raid controller with multiple ports. Here, we are using software RAID and the \u2018<strong>mdadm<\/strong>\u2018 package to create the raid.<\/p>\n<p><strong>mdadm<\/strong>\u00a0is a package which allows us to configure and manage RAID devices in Linux. By default there is no configuration file available for RAID; we must save the configuration in a separate file called\u00a0<strong>mdadm.conf<\/strong>\u00a0after creating and configuring the RAID setup.<\/p>\n<p>Before moving further, I suggest you go through the following articles to understand the basics of RAID in Linux.<\/p>\n<ol>\n<li><a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Basic Concepts of RAID in Linux \u2013 Part 1<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid0-in-linux\/\" target=\"_blank\" rel=\"noopener\">Creating RAID 0 (Stripe) in Linux \u2013 Part 2<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid1-in-linux\/\" target=\"_blank\" rel=\"noopener\">Setting up RAID 1 (Mirroring) in Linux \u2013 Part 3<\/a><\/li>\n<\/ol>\n<h5>My Server Setup<\/h5>\n<pre>Operating System :\tCentOS 6.5 Final\r\nIP Address\t :\t192.168.0.227\r\nHostname\t :\trd5.tecmintlocal.com\r\nDisk 1 [20GB]\t :\t\/dev\/sdb\r\nDisk 2 [20GB]\t :\t\/dev\/sdc\r\nDisk 3 [20GB]\t :\t\/dev\/sdd\r\n<\/pre>\n<p>This article is\u00a0<strong>Part 4<\/strong>\u00a0of a 9-tutorial RAID series; here we are going to set up a software\u00a0<strong>RAID 5<\/strong>\u00a0with distributed parity on Linux systems or servers using three 20GB disks named 
\/dev\/sdb, \/dev\/sdc and \/dev\/sdd.<\/p>\n<h3>Step 1: Installing mdadm and Verify Drives<\/h3>\n<p><strong>1.<\/strong>\u00a0As we said earlier, that we\u2019re using CentOS 6.5 Final release for this raid setup, but same steps can be followed for RAID setup in any Linux based distributions.<\/p>\n<pre># lsb_release -a\r\n# ifconfig | grep inet\r\n<\/pre>\n<div id=\"attachment_9740\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/CentOS-6.5-Summary.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9740\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/CentOS-6.5-Summary-620x346.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/CentOS-6.5-Summary-620x346.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/CentOS-6.5-Summary.png 628w\" alt=\"Setup Raid 5 in CentOS\" width=\"620\" height=\"346\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">CentOS 6.5 Summary<\/p>\n<\/div>\n<p><strong>2.<\/strong>\u00a0If you\u2019re following our raid series, we assume that you\u2019ve already installed \u2018<strong>mdadm<\/strong>\u2018 package, if not, use the following command according to your Linux distribution to install the package.<\/p>\n<pre># yum install mdadm\t\t[on RedHat systems]\r\n# apt-get install mdadm \t[on Debain systems]\r\n<\/pre>\n<p><strong>3.<\/strong>\u00a0After the \u2018<strong>mdadm<\/strong>\u2018 package installation, let\u2019s list the three 20GB disks which we have added in our system using \u2018<strong>fdisk<\/strong>\u2018 command.<\/p>\n<pre># fdisk -l | grep sd\r\n<\/pre>\n<div id=\"attachment_9742\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Install-mdadm-Tool.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9742\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Install-mdadm-Tool.png\" alt=\"Install mdadm Tool in CentOS\" width=\"533\" height=\"324\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Install mdadm Tool<\/p>\n<\/div>\n<p><strong>4.<\/strong>\u00a0Now it\u2019s time to examine the attached three drives for any existing RAID blocks on these drives using following command.<\/p>\n<pre># mdadm -E \/dev\/sd[b-d]\r\n# mdadm --examine \/dev\/sdb \/dev\/sdc \/dev\/sdd\r\n<\/pre>\n<div id=\"attachment_9743\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Examine-Drives-For-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9743\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Examine-Drives-For-Raid.png\" alt=\"Examine Drives For Raid\" width=\"458\" height=\"251\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Examine Drives For Raid<\/p>\n<\/div>\n<p><strong>Note<\/strong>: From the above image illustrated that there is no any super-block detected yet. So, there is no RAID defined in all three drives. Let us start to create one now.<\/p>\n<h3>Step 2: Partitioning the Disks for RAID<\/h3>\n<p><strong>5.<\/strong>\u00a0First and foremost, we have to partition the disks (<strong>\/dev\/sdb<\/strong>,\u00a0<strong>\/dev\/sd<\/strong>c and\u00a0<strong>\/dev\/sdd<\/strong>) before adding to a RAID, So let us define the partition using \u2018fdisk\u2019 command, before forwarding to the next steps.<\/p>\n<pre># fdisk \/dev\/sdb\r\n# fdisk \/dev\/sdc\r\n# fdisk \/dev\/sdd\r\n<\/pre>\n<h5>Create \/dev\/sdb Partition<\/h5>\n<p>Please follow the below instructions to create partition on\u00a0<strong>\/dev\/sdb<\/strong>\u00a0drive.<\/p>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 for creating new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for Primary partition. 
Here we are choosing Primary because there is no partitions defined yet.<\/li>\n<li>Then choose \u2018<strong>1<\/strong>\u2018 to be the first partition. By default it will be\u00a0<strong>1<\/strong>.<\/li>\n<li>Here for cylinder size we don\u2019t have to choose the specified size because we need the whole partition for RAID so just Press Enter two times to choose the default full size.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 to print the created partition.<\/li>\n<li>Change the Type, If we need to know the every available types Press \u2018<strong>L<\/strong>\u2018.<\/li>\n<li>Here, we are selecting \u2018<strong>fd<\/strong>\u2018 as my type is RAID.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 to print the defined partition.<\/li>\n<li>Then again use \u2018<strong>p<\/strong>\u2018 to print the changes what we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9744\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9744\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition1-393x450.png\" sizes=\"auto, (max-width: 393px) 100vw, 393px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition1-393x450.png 393w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition1.png 621w\" alt=\"Create sdb Partition\" width=\"393\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create sdb Partition<\/p>\n<\/div>\n<p><strong>Note<\/strong>: We have to follow the steps mentioned above to create partitions for\u00a0<strong>sdc<\/strong>\u00a0&amp;\u00a0<strong>sdd<\/strong>\u00a0drives too.<\/p>\n<h5>Create \/dev\/sdc Partition<\/h5>\n<p>Now partition 
the\u00a0<strong>sdc<\/strong>\u00a0and\u00a0<strong>sdd<\/strong>\u00a0drives by following the steps given in the screenshot or you can follow above steps.<\/p>\n<pre># fdisk \/dev\/sdc\r\n<\/pre>\n<div id=\"attachment_9745\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9745\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition1-387x450.png\" sizes=\"auto, (max-width: 387px) 100vw, 387px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition1-387x450.png 387w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition1.png 615w\" alt=\"Create sdc Partition\" width=\"387\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create sdc Partition<\/p>\n<\/div>\n<h5>Create \/dev\/sdd Partition<\/h5>\n<pre># fdisk \/dev\/sdd\r\n<\/pre>\n<div id=\"attachment_9746\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9746\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition1-383x450.png\" sizes=\"auto, (max-width: 383px) 100vw, 383px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition1-383x450.png 383w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition1.png 621w\" alt=\"Create sdd Partition\" width=\"383\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create sdd Partition<\/p>\n<\/div>\n<p><strong>6.<\/strong>\u00a0After creating partitions, check for changes in all three drives sdb, sdc, &amp; sdd.<\/p>\n<pre># mdadm --examine \/dev\/sdb \/dev\/sdc \/dev\/sdd\r\n\r\nor\r\n\r\n# mdadm -E 
\/dev\/sd[b-d]\r\n<\/pre>\n<div id=\"attachment_9747\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Changes-on-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9747\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Changes-on-Partitions.png\" alt=\"Check Partition Changes\" width=\"510\" height=\"244\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Partition Changes<\/p>\n<\/div>\n<p><strong>Note<\/strong>: The above picture shows the partition type is fd, i.e. Linux raid autodetect.<\/p>\n<p><strong>7.<\/strong>\u00a0Now check for RAID blocks in the newly created partitions. If no super-blocks are detected, then we can move forward to create a new RAID 5 setup on these drives.<\/p>\n<div id=\"attachment_9748\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9748\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-Partitions.png\" alt=\"Check Raid on Partition\" width=\"490\" height=\"127\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid on Partition<\/p>\n<\/div>\n<h3>Step 3: Creating md device md0<\/h3>\n<p><strong>8.<\/strong>\u00a0Now create a raid device \u2018<strong>md0<\/strong>\u2018 (i.e.\u00a0<strong>\/dev\/md0<\/strong>) with the raid level applied to all the newly created partitions (sdb1, sdc1 and sdd1) using the below command.<\/p>\n<pre># mdadm --create \/dev\/md0 --level=5 --raid-devices=3 \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1\r\n\r\nor\r\n\r\n# mdadm -C \/dev\/md0 -l=5 -n=3 \/dev\/sd[b-d]1\r\n<\/pre>\n<p><strong>9.<\/strong>\u00a0After creating the raid device, check and verify the RAID, the devices included and the RAID level from the<strong>\u00a0mdstat<\/strong>\u00a0output.<\/p>\n<pre># cat \/proc\/mdstat\r\n<\/pre>\n<div 
id=\"attachment_9749\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9749\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Device-620x263.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Device-620x263.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Device.png 656w\" alt=\"Verify Raid Device\" width=\"620\" height=\"263\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid Device<\/p>\n<\/div>\n<p>If you want to monitor the current building process, you can use \u2018<strong>watch<\/strong>\u2018 command, just pass through the \u2018<strong>cat \/proc\/mdstat<\/strong>\u2018 with watch command which will refresh screen every\u00a0<strong>1<\/strong>\u00a0second.<\/p>\n<pre># watch -n1 cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9750\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Monitor-Raid-Process.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9750\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Monitor-Raid-Process.png\" alt=\"Monitor Raid Process\" width=\"386\" height=\"86\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Monitor Raid 5 Process<\/p>\n<\/div>\n<div id=\"attachment_9751\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Process-Summary.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9751\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Process-Summary-620x187.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" 
srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Process-Summary-620x187.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Process-Summary.png 659w\" alt=\"Raid 5 Process Summary\" width=\"620\" height=\"187\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid 5 Process Summary<\/p>\n<\/div>\n<p><strong>10.<\/strong>\u00a0After creation of raid, Verify the raid devices using the following command.<\/p>\n<pre># mdadm -E \/dev\/sd[b-d]1\r\n<\/pre>\n<div id=\"attachment_9752\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Level.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9752\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Level-529x450.png\" sizes=\"auto, (max-width: 529px) 100vw, 529px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Level-529x450.png 529w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Level.png 629w\" alt=\"Verify Raid Level\" width=\"529\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid Level<\/p>\n<\/div>\n<p><strong>Note<\/strong>: The Output of the above command will be little long as it prints the information of all three drives.<\/p>\n<p><strong>11.<\/strong>\u00a0Next, verify the RAID array to assume that the devices which we\u2019ve included in the RAID level are running and started to re-sync.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9753\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9753\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Array-528x450.png\" sizes=\"auto, (max-width: 528px) 100vw, 528px\" 
srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Array-528x450.png 528w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Array.png 629w\" alt=\"Verify Raid Array\" width=\"528\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid Array<\/p>\n<\/div>\n<h3>Step 4: Creating file system for md0<\/h3>\n<p><strong>12.<\/strong>\u00a0Create a file system for \u2018<strong>md0<\/strong>\u2018 device using\u00a0<strong>ext4<\/strong>\u00a0before mounting.<\/p>\n<pre># mkfs.ext4 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9754\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md0-Filesystem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9754\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md0-Filesystem-620x439.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md0-Filesystem-620x439.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md0-Filesystem.png 635w\" alt=\"Create md0 Filesystem\" width=\"620\" height=\"439\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create md0 Filesystem<\/p>\n<\/div>\n<p><strong>13.<\/strong>\u00a0Now create a directory under \u2018<strong>\/mnt<\/strong>\u2018 then mount the created filesystem under\u00a0<strong>\/mnt\/raid5<\/strong>\u00a0and check the files under mount point, you will see\u00a0<strong>lost+found<\/strong>\u00a0directory.<\/p>\n<pre># mkdir \/mnt\/raid5\r\n# mount \/dev\/md0 \/mnt\/raid5\/\r\n# ls -l \/mnt\/raid5\/\r\n<\/pre>\n<p><strong>14.<\/strong>\u00a0Create few files under mount point\u00a0<strong>\/mnt\/raid5<\/strong>\u00a0and append some text in any one of the file to verify the content.<\/p>\n<pre># touch \/mnt\/raid5\/raid5_tecmint_{1..5}\r\n# ls -l \/mnt\/raid5\/\r\n# echo 
\"tecmint raid setups\" &gt; \/mnt\/raid5\/raid5_tecmint_1\r\n# cat \/mnt\/raid5\/raid5_tecmint_1\r\n# cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9755\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-Raid-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9755\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-Raid-Device-429x450.png\" sizes=\"auto, (max-width: 429px) 100vw, 429px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-Raid-Device-429x450.png 429w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-Raid-Device.png 659w\" alt=\"Mount Raid 5 Device\" width=\"429\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Mount Raid Device<\/p>\n<\/div>\n<p><strong>15.<\/strong>\u00a0We need to add entry in\u00a0<strong>fstab<\/strong>, else will not display our mount point after system reboot. To add an entry, we should edit the fstab file and append the following line as shown below. 
The mount point will differ according to your environment.<\/p>\n<pre># vim \/etc\/fstab\r\n\r\n\/dev\/md0                \/mnt\/raid5              ext4    defaults        0 0\r\n<\/pre>\n<div id=\"attachment_9756\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Device-Automount.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9756\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Device-Automount-620x340.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Device-Automount-620x340.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-Device-Automount.png 658w\" alt=\"Raid 5 Automount\" width=\"620\" height=\"340\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid 5 Automount<\/p>\n<\/div>\n<p><strong>16.<\/strong>\u00a0Next, run the \u2018<strong>mount -av<\/strong>\u2018 command to check for any errors in the fstab entry.<\/p>\n<pre># mount -av\r\n<\/pre>\n<div id=\"attachment_9757\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Fstab-Errors.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9757\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Fstab-Errors.png\" alt=\"Check Fstab Errors\" width=\"585\" height=\"162\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Fstab Errors<\/p>\n<\/div>\n<h3>Step 5: Save Raid 5 Configuration<\/h3>\n<p><strong>17.<\/strong>\u00a0As mentioned earlier in the requirements section, by default RAID doesn\u2019t have a config file, so we have to save it manually. If this step is not followed, the RAID device will not come up as md0 after a reboot; it will be assembled under some other random name.<\/p>\n<p>So we must save the configuration before the system reboots. 
If the configuration is saved, it will be read during the system reboot and the RAID array will be assembled automatically.<\/p>\n<pre># mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n<\/pre>\n<div id=\"attachment_9758\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid-5-Configuration.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9758\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid-5-Configuration-620x180.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid-5-Configuration-620x180.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid-5-Configuration.png 645w\" alt=\"Save Raid 5 Configuration\" width=\"620\" height=\"180\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Save Raid 5 Configuration<\/p>\n<\/div>\n<p><strong>Note<\/strong>: Saving the configuration keeps the device name stable as md0 across reboots.<\/p>\n<h3>Step 6: Adding Spare Drives<\/h3>\n<p><strong>18.<\/strong>\u00a0What is the use of adding a spare drive? If any one of the disks in our array fails, the spare drive becomes active, the rebuild process starts and the data is synced from the other disks, so we get redundancy here.<\/p>\n<p>For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read\u00a0<strong>#Step 6<\/strong>\u00a0and\u00a0<strong>#Step 7<\/strong>\u00a0in the following article.<\/p>\n<ol>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid-6-in-linux\/\" target=\"_blank\" rel=\"noopener\">Add Spare Drive to Raid 5 Setup<\/a><\/li>\n<\/ol>\n<h3>Conclusion<\/h3>\n<p>Here, in this article, we have seen how to set up RAID 5 using three disks. 
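<\/p>
<p>As a quick sanity check on the finished array: RAID 5 gives up one disk\u2019s worth of capacity to parity, so the usable size is (number of disks - 1) multiplied by the disk size. A minimal sketch of the arithmetic, assuming three equal disks of a hypothetical 20GB each (the disk sizes are an assumption, not stated for this setup):<\/p>

```shell
#!/bin/sh
# RAID 5 capacity rule of thumb: one disk's worth of space goes to parity.
# The 3 x 20 GB values below are illustrative assumptions.
disks=3
size_gb=20
usable=$(( (disks - 1) * size_gb ))
echo "RAID 5 usable capacity: ${usable} GB"   # prints: RAID 5 usable capacity: 40 GB
```

<p>Compare this figure with the \u2018Array Size\u2019 reported by \u2018mdadm --detail \/dev\/md0\u2019; the two should roughly agree, minus metadata overhead.<\/p>
<p>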
In upcoming articles, we will see how to troubleshoot a failed disk in RAID 5 and how to replace it for recovery.<\/p>\n<h1 class=\"post-title\">Setup RAID Level 6 (Striping with Double Distributed Parity) in Linux \u2013 Part 5<\/h1>\n<p><b>RAID 6<\/b>\u00a0is an upgraded version of\u00a0<strong>RAID 5<\/strong>\u00a0with two distributed parity blocks, which provides fault tolerance even after two drives fail. Mission-critical systems stay operational in case of two concurrent disk failures. It is similar to\u00a0<b>RAID 5<\/b>\u00a0but more robust, because it uses one more disk for parity.<\/p>\n<p>In our earlier article, we\u2019ve seen distributed parity in\u00a0<strong>RAID 5<\/strong>; in this article we are going to see\u00a0<b>RAID 6<\/b>\u00a0with double distributed parity. Don\u2019t expect extra performance compared to other RAID levels unless you also install a dedicated RAID controller. In\u00a0<b>RAID 6<\/b>, even if we lose 2 disks we can get the data back by replacing them with spare drives and rebuilding from parity.<\/p>\n<div id=\"attachment_9589\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Setup-RAID-6-in-Linux.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9589\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Setup-RAID-6-in-Linux.jpg\" alt=\"Setup RAID 6 in CentOS\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Setup RAID 6 in Linux<\/p>\n<\/div>\n<p>To set up\u00a0<b>RAID 6<\/b>, a minimum of\u00a0<strong>4<\/strong>\u00a0disks (or more) in a set is required. While reading,\u00a0<b>RAID 6<\/b>\u00a0reads from all the drives, so reading is faster, whereas writing is poorer because it has to stripe data and parity over multiple disks.<\/p>\n<p>Now, many of us will come to the conclusion: why do we need to 
use\u00a0<b>RAID 6<\/b>\u00a0when it doesn\u2019t perform like other RAID levels? Those who raise this question need to know that if they need high fault tolerance, they should choose RAID 6. Environments that require high availability for databases use\u00a0<b>RAID 6<\/b>\u00a0because the database is the most important asset and needs to be safe at any cost; it can also be useful for video streaming environments.<\/p>\n<h4>Pros and Cons of RAID 6<\/h4>\n<ol>\n<li>Performance is good.<\/li>\n<li>RAID 6 is expensive, as it requires two independent drives for parity functions.<\/li>\n<li>You lose two disks\u2019 worth of capacity to parity information (double parity).<\/li>\n<li>No data loss, even after two disks fail. We can rebuild from parity after replacing the failed disks.<\/li>\n<li>Reading will be better than RAID 5, because it reads from multiple disks, but writing performance will be very poor without a dedicated RAID controller.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<p>A minimum of 4 disks is required to create a\u00a0<strong>RAID 6<\/strong>. If you want to add more disks, you can, but you must have a dedicated RAID controller. With software RAID alone, we won\u2019t get better performance in RAID 6. 
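<\/p>
<p>To make the two-disk parity cost concrete: RAID 6 usable capacity is (number of disks - 2) multiplied by the disk size. A minimal sketch of the arithmetic, using the four 20GB disks of this article\u2019s setup:<\/p>

```shell
#!/bin/sh
# RAID 6 capacity rule of thumb: two disks' worth of space goes to double parity.
# Values match this article's setup: 4 disks of 20 GB each.
disks=4
size_gb=20
usable=$(( (disks - 2) * size_gb ))
echo "RAID 6 usable capacity: ${usable} GB"   # prints: RAID 6 usable capacity: 40 GB
```

<p>So four 20GB disks yield a 40GB array that survives any two simultaneous disk failures.<\/p>
<p>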
So we need a physical RAID controller for full write performance.<\/p>\n<p>If you are new to RAID setup, we recommend going through the RAID articles below.<\/p>\n<ol>\n<li><a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Basic Concepts of RAID in Linux \u2013 Part 1<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid0-in-linux\/\" target=\"_blank\" rel=\"noopener\">Creating Software RAID 0 (Stripe) in Linux \u2013 Part 2<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid1-in-linux\/\" target=\"_blank\" rel=\"noopener\">Setting up RAID 1 (Mirroring) in Linux \u2013 Part 3<\/a><\/li>\n<\/ol>\n<h5>My Server Setup<\/h5>\n<pre>Operating System :\tCentOS 6.5 Final\r\nIP Address\t :\t192.168.0.228\r\nHostname\t :\trd6.tecmintlocal.com\r\nDisk 1 [20GB]\t :\t\/dev\/sdb\r\nDisk 2 [20GB]\t :\t\/dev\/sdc\r\nDisk 3 [20GB]\t :\t\/dev\/sdd\r\nDisk 4 [20GB]\t : \t\/dev\/sde\r\n<\/pre>\n<p>This article is\u00a0<strong>Part 5<\/strong>\u00a0of a 9-tutorial RAID series; here we are going to see how to create and set up software\u00a0<strong>RAID 6<\/strong>\u00a0(striping with double distributed parity) on Linux systems or servers using four 20GB disks named \/dev\/sdb, \/dev\/sdc, \/dev\/sdd and \/dev\/sde.<\/p>\n<h3>Step 1: Installing mdadm Tool and Examine Drives<\/h3>\n<p><strong>1.<\/strong>\u00a0If you\u2019re following our last two RAID articles (<strong>Part 2<\/strong>\u00a0and <strong>Part 3<\/strong>), you\u2019ve already seen how to install the \u2018<strong>mdadm<\/strong>\u2018 tool. 
If you\u2019re new to this series, \u2018<strong>mdadm<\/strong>\u2018 is a tool to create and manage RAID on Linux systems. Let\u2019s install the tool using the following command according to your Linux distribution.<\/p>\n<pre># yum install mdadm\t\t[on RedHat systems]\r\n# apt-get install mdadm \t[on Debian systems]\r\n<\/pre>\n<p><strong>2.<\/strong>\u00a0After installing the tool, it\u2019s time to verify the four attached drives that we are going to use for RAID creation, using the following \u2018<strong>fdisk<\/strong>\u2018 command.<\/p>\n<pre># fdisk -l | grep sd\r\n<\/pre>\n<div id=\"attachment_9563\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Linux-Disks.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9563\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Linux-Disks.png\" alt=\"Check Hard Disk in Linux\" width=\"539\" height=\"190\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Disks in Linux<\/p>\n<\/div>\n<p><strong>3.<\/strong>\u00a0Before creating the RAID, always examine the disk drives to check whether any RAID is already created on the disks.<\/p>\n<pre># mdadm -E \/dev\/sd[b-e]\r\n# mdadm --examine \/dev\/sdb \/dev\/sdc \/dev\/sdd \/dev\/sde\r\n<\/pre>\n<div id=\"attachment_9564\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Disk-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9564\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Disk-Raid.png\" alt=\"Check Raid on Disk\" width=\"426\" height=\"169\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid on Disk<\/p>\n<\/div>\n<p><strong>Note:<\/strong>\u00a0The above image shows that no super-block is detected, i.e. no RAID is defined on the four disk drives. 
We can move on to creating RAID 6.<\/p>\n<h3>Step 2: Drive Partitioning for RAID 6<\/h3>\n<p><strong>4.<\/strong>\u00a0Now create partitions for RAID on \u2018<strong>\/dev\/sdb<\/strong>\u2018, \u2018<strong>\/dev\/sdc<\/strong>\u2018, \u2018<strong>\/dev\/sdd<\/strong>\u2018 and \u2018<strong>\/dev\/sde<\/strong>\u2018 with the help of the following\u00a0<strong>fdisk<\/strong>\u00a0command. Here, we will show how to create a partition on the\u00a0<strong>sdb<\/strong>\u00a0drive; the same steps should later be followed for the rest of the drives.<\/p>\n<h6>Create \/dev\/sdb Partition<\/h6>\n<pre># fdisk \/dev\/sdb\r\n<\/pre>\n<p>Please follow the instructions shown below to create the partition.<\/p>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 to create a new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for Primary partition.<\/li>\n<li>Next choose the partition number as\u00a0<strong>1<\/strong>.<\/li>\n<li>Accept the default first and last sectors by pressing the\u00a0<strong>Enter<\/strong>\u00a0key twice.<\/li>\n<li>Next press \u2018<strong>P<\/strong>\u2018 to print the defined partition.<\/li>\n<li>Press \u2018<strong>L<\/strong>\u2018 to list all available types.<\/li>\n<li>Type \u2018<strong>t<\/strong>\u2018 to change the partition type.<\/li>\n<li>Choose \u2018<strong>fd<\/strong>\u2018 for Linux raid auto and press Enter to apply.<\/li>\n<li>Then use \u2018<strong>P<\/strong>\u2018 again to print the changes we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9565\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9565\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition-332x450.png\" sizes=\"auto, (max-width: 332px) 100vw, 332px\" 
srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition-332x450.png 332w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdb-Partition.png 622w\" alt=\"Create sdb Partition\" width=\"332\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create \/dev\/sdb Partition<\/p>\n<\/div>\n<h6>Create \/dev\/sdb Partition<\/h6>\n<pre># fdisk \/dev\/sdc\r\n<\/pre>\n<div id=\"attachment_9566\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9566\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition-330x450.png\" sizes=\"auto, (max-width: 330px) 100vw, 330px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition-330x450.png 330w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdc-Partition.png 617w\" alt=\"Create sdc Partition\" width=\"330\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create \/dev\/sdc Partition<\/p>\n<\/div>\n<h6>Create \/dev\/sdd Partition<\/h6>\n<pre># fdisk \/dev\/sdd\r\n<\/pre>\n<div id=\"attachment_9567\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9567\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition-331x450.png\" sizes=\"auto, (max-width: 331px) 100vw, 331px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition-331x450.png 331w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sdd-Partition.png 620w\" alt=\"Create sdd Partition\" width=\"331\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create \/dev\/sdd 
Partition<\/p>\n<\/div>\n<h6>Create \/dev\/sde Partition<\/h6>\n<pre># fdisk \/dev\/sde\r\n<\/pre>\n<div id=\"attachment_9568\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sde-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9568\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sde-Partition-332x450.png\" sizes=\"auto, (max-width: 332px) 100vw, 332px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sde-Partition-332x450.png 332w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-sde-Partition.png 624w\" alt=\"Create sde Partition\" width=\"332\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create \/dev\/sde Partition<\/p>\n<\/div>\n<p><strong>5.<\/strong>\u00a0After creating the partitions, it\u2019s always a good habit to examine the drives for super-blocks. If super-blocks do not exist, we can go ahead and create a new RAID setup.<\/p>\n<pre># mdadm -E \/dev\/sd[b-e]1\r\n\r\n\r\nor\r\n\r\n# mdadm --examine \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1 \/dev\/sde1\r\n<\/pre>\n<div id=\"attachment_9569\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-New-Partitions.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9569\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-New-Partitions.png\" alt=\"Check Raid on New Partitions\" width=\"394\" height=\"157\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid on New Partitions<\/p>\n<\/div>\n<h3>Step 3: Creating md device (RAID)<\/h3>\n<p><strong>6.<\/strong>\u00a0Now it\u2019s time to create the RAID device \u2018<strong>md0<\/strong>\u2018 (i.e.\u00a0<strong>\/dev\/md0<\/strong>), apply the RAID level to all the newly created partitions and confirm the RAID using the following 
commands.<\/p>\n<pre># mdadm --create \/dev\/md0 --level=6 --raid-devices=4 \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1 \/dev\/sde1\r\n# cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9570\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Raid-6-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9570\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Raid-6-Device-620x255.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Raid-6-Device-620x255.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Raid-6-Device.png 654w\" alt=\"Create Raid 6 Device\" width=\"620\" height=\"255\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create Raid 6 Device<\/p>\n<\/div>\n<p><strong>7.<\/strong>\u00a0You can also check the current progress of the RAID build using the\u00a0<strong>watch<\/strong>\u00a0command as shown in the screen grab below.<\/p>\n<pre># watch -n1 cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9571\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Process.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9571\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Process-620x170.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Process-620x170.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Process.png 652w\" alt=\"Check Raid 6 Process\" width=\"620\" height=\"170\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid 6 Process<\/p>\n<\/div>\n<p><strong>8.<\/strong>\u00a0Verify the RAID devices using the following command.<\/p>\n<pre># mdadm -E 
\/dev\/sd[b-e]1\r\n<\/pre>\n<p><strong>Note:<\/strong>\u00a0The above command will display the information of all four disks, which is quite long, so it is not possible to post the output or screen grab here.<\/p>\n<p><strong>9.<\/strong>\u00a0Next, verify the RAID array to confirm that the re-syncing has started.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9572\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9572\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Array-467x450.png\" sizes=\"auto, (max-width: 467px) 100vw, 467px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Array-467x450.png 467w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Array.png 632w\" alt=\"Check Raid 6 Array\" width=\"467\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid 6 Array<\/p>\n<\/div>\n<h3>Step 4: Creating FileSystem on Raid Device<\/h3>\n<p><strong>10.<\/strong>\u00a0Create a filesystem using ext4 for \u2018<strong>\/dev\/md0<\/strong>\u2018 and mount it under\u00a0<b>\/mnt\/raid6<\/b>. 
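<\/p>
<p>As an optional aside (the numbers below are assumptions about your array geometry, not a step from this guide): ext4 can be aligned to the RAID chunk size via the \u2018-E stride,stripe-width\u2019 options of \u2018mkfs.ext4\u2019. Assuming mdadm\u2019s default 512KiB chunk, 4KiB ext4 blocks and 2 data-bearing disks (4 disks minus 2 parity), the hints work out as:<\/p>

```shell
#!/bin/sh
# Compute ext4 alignment hints for a 4-disk RAID 6 (2 data-bearing disks).
# Assumes a 512 KiB mdadm chunk and 4 KiB ext4 blocks; check /proc/mdstat for
# the real chunk size of your array before using these numbers.
chunk_kib=512
block_kib=4
data_disks=2
stride=$(( chunk_kib / block_kib ))
stripe_width=$(( stride * data_disks ))
echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/md0"
```

<p>The plain \u2018mkfs.ext4 \/dev\/md0\u2019 used in this step works fine too; the hints only improve write alignment.<\/p>
<p>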
Here we\u2019ve used ext4, but you can use any filesystem type of your choice.<\/p>\n<pre># mkfs.ext4 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9573\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-File-System-on-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9573\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-File-System-on-Raid-606x450.png\" sizes=\"auto, (max-width: 606px) 100vw, 606px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-File-System-on-Raid-606x450.png 606w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-File-System-on-Raid.png 633w\" alt=\"Create File System on Raid\" width=\"606\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create File System on Raid 6<\/p>\n<\/div>\n<p><strong>11.<\/strong>\u00a0Mount the created filesystem under\u00a0<b>\/mnt\/raid6<\/b>\u00a0and verify the files under the mount point; we can see the lost+found directory.<\/p>\n<pre># mkdir \/mnt\/raid6\r\n# mount \/dev\/md0 \/mnt\/raid6\/\r\n# ls -l \/mnt\/raid6\/\r\n<\/pre>\n<p><strong>12.<\/strong>\u00a0Create some files under the mount point and append some text to one of the files to verify the content.<\/p>\n<pre># touch \/mnt\/raid6\/raid6_test.txt\r\n# ls -l \/mnt\/raid6\/\r\n# echo \"tecmint raid setups\" &gt; \/mnt\/raid6\/raid6_test.txt\r\n# cat \/mnt\/raid6\/raid6_test.txt\r\n<\/pre>\n<div id=\"attachment_9574\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Content.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9574\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-Content.png\" alt=\"Verify Raid Content\" width=\"560\" height=\"397\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid 
Content<\/p>\n<\/div>\n<p><strong>13.<\/strong>\u00a0Add an entry in\u00a0<b>\/etc\/fstab<\/b>\u00a0to auto-mount the device at system startup; append the entry below. The mount point may differ according to your environment.<\/p>\n<pre># vim \/etc\/fstab\r\n\r\n\/dev\/md0                \/mnt\/raid6              ext4    defaults        0 0\r\n<\/pre>\n<div id=\"attachment_9575\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Automount-Raid-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9575\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Automount-Raid-Device-620x340.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Automount-Raid-Device-620x340.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Automount-Raid-Device.png 653w\" alt=\"Automount Raid 6 Device\" width=\"620\" height=\"340\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Automount Raid 6 Device<\/p>\n<\/div>\n<p><strong>14.<\/strong>\u00a0Next, execute the \u2018<strong>mount -av<\/strong>\u2018 command to verify whether there is any error in the fstab entry.<\/p>\n<pre># mount -av\r\n<\/pre>\n<div id=\"attachment_9576\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Automount-Raid-Devices.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9576\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Automount-Raid-Devices.png\" alt=\"Verify Raid Automount\" width=\"598\" height=\"188\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid Automount<\/p>\n<\/div>\n<h3>Step 5: Save RAID 6 Configuration<\/h3>\n<p><strong>15.<\/strong>\u00a0Please note that by default RAID doesn\u2019t have a config file. 
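<\/p>
<p>For reference, the entry that \u2018mdadm --detail --scan --verbose\u2019 writes for this array looks roughly like the following (the UUID is a placeholder, not a value from this setup):<\/p>

```
ARRAY /dev/md0 level=raid6 num-devices=4 metadata=1.2 name=rd6.tecmintlocal.com:0 UUID=<your-array-uuid>
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1
```

<p>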
We have to save it manually using the command below and then verify the status of device \u2018<strong>\/dev\/md0<\/strong>\u2018.<\/p>\n<pre># mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9577\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-RAID6-Configuration.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9577\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-RAID6-Configuration.png\" alt=\"Save Raid 6 Configuration\" width=\"564\" height=\"172\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Save Raid 6 Configuration<\/p>\n<\/div>\n<div id=\"attachment_9578\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Status.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9578\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Status-488x450.png\" sizes=\"auto, (max-width: 488px) 100vw, 488px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Status-488x450.png 488w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Status.png 635w\" alt=\"Check Raid 6 Status\" width=\"488\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid 6 Status<\/p>\n<\/div>\n<h3>Step 6: Adding a Spare Drive<\/h3>\n<p><strong>16.<\/strong>\u00a0Our array now has<strong>\u00a04<\/strong>\u00a0disks and two sets of parity information available. Even if up to two of the disks fail, we can still get the data, because there is double parity in RAID 6.<\/p>\n<p>If a second disk fails, we can add a new one before losing a third. It is possible to define a spare drive while creating the RAID set, but I have not done so here. 
A spare drive can also be added after a drive failure. Since we have already created the RAID set, let me add a spare drive for demonstration.<\/p>\n<p>For demonstration purposes, I\u2019ve hot-plugged a new HDD (i.e.\u00a0<strong>\/dev\/sdf<\/strong>); let\u2019s verify the attached disk.<\/p>\n<pre># ls -l \/dev\/ | grep sd\r\n<\/pre>\n<div id=\"attachment_9579\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-New-Disk.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9579\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-New-Disk.png\" alt=\"Check New Disk\" width=\"448\" height=\"266\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check New Disk<\/p>\n<\/div>\n<p><strong>17.<\/strong>\u00a0Now confirm whether any RAID is already configured on the newly attached disk using the same\u00a0<strong>mdadm<\/strong>\u00a0command.<\/p>\n<pre># mdadm --examine \/dev\/sdf\r\n<\/pre>\n<div id=\"attachment_9580\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-New-Disk.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9580\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-New-Disk.png\" alt=\"Check Raid on New Disk\" width=\"404\" height=\"119\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid on New Disk<\/p>\n<\/div>\n<p><strong>Note:<\/strong>\u00a0Just as we created partitions on the four disks earlier, we have to create a new partition on the newly plugged disk using the\u00a0<strong>fdisk<\/strong>\u00a0command.<\/p>\n<pre># fdisk \/dev\/sdf\r\n<\/pre>\n<div id=\"attachment_9581\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Partition-on-sdf.png\"><img 
loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9581\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Partition-on-sdf-347x450.png\" sizes=\"auto, (max-width: 347px) 100vw, 347px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Partition-on-sdf-347x450.png 347w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-Partition-on-sdf.png 642w\" alt=\"Create sdf Partition\" width=\"347\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create \/dev\/sdf Partition<\/p>\n<\/div>\n<p><strong>18.<\/strong>\u00a0Again after creating new partition on\u00a0<strong>\/dev\/sdf<\/strong>, confirm the raid on the partition, include the spare drive to the<strong>\u00a0\/dev\/md0<\/strong>\u00a0raid device and verify the added device.<\/p>\n<pre># mdadm --examine \/dev\/sdf\r\n# mdadm --examine \/dev\/sdf1\r\n# mdadm --add \/dev\/md0 \/dev\/sdf1\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9582\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-on-sdf.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9582\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-Raid-on-sdf.png\" alt=\"Verify Raid on sdf Partition\" width=\"513\" height=\"209\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify Raid on sdf Partition<\/p>\n<\/div>\n<div id=\"attachment_9583\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Add-sdf-Partition-to-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9583\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Add-sdf-Partition-to-Raid.png\" alt=\"Add sdf Partition to Raid\" width=\"415\" height=\"128\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Add sdf Partition to 
Raid<\/p>\n<\/div>\n<div id=\"attachment_9585\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-sdf-Details.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9585\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-sdf-Details-481x450.png\" sizes=\"auto, (max-width: 481px) 100vw, 481px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-sdf-Details-481x450.png 481w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-sdf-Details.png 649w\" alt=\"Verify sdf Partition Details\" width=\"481\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify sdf Partition Details<\/p>\n<\/div>\n<h3>Step 7: Check Raid 6 Fault Tolerance<\/h3>\n<p><strong>19.<\/strong>\u00a0Now, let us check whether the spare drive takes over automatically when one of the disks in our array fails. For testing, I\u2019ve manually marked one of the drives as failed.<\/p>\n<p>Here, we\u2019re going to mark\u00a0<b>\/dev\/sdd1<\/b>\u00a0as the failed drive.<\/p>\n<pre># mdadm --manage --fail \/dev\/md0 \/dev\/sdd1\r\n<\/pre>\n<div id=\"attachment_9586\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Failover.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9586\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-6-Failover.png\" alt=\"Check Raid 6 Fault Tolerance\" width=\"461\" height=\"109\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid 6 Fault Tolerance<\/p>\n<\/div>\n<p><strong>20.<\/strong>\u00a0Let me get the details of the RAID set now and check whether our spare has started to sync.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9587\" class=\"wp-caption aligncenter\">\n<p><a 
href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Auto-Raid-Syncing.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9587\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Auto-Raid-Syncing-462x450.png\" sizes=\"auto, (max-width: 462px) 100vw, 462px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Auto-Raid-Syncing-462x450.png 462w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Auto-Raid-Syncing.png 658w\" alt=\"Check Auto Raid Syncing\" width=\"462\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Auto Raid Syncing<\/p>\n<\/div>\n<p><strong>Hurray!<\/strong>\u00a0Here, we can see the spare got activated and started rebuilding process. At the bottom we can see the faulty drive\u00a0<b>\/dev\/sdd1<\/b>\u00a0listed as faulty. We can monitor build process using following command.<\/p>\n<pre># cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9588\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-6-Auto-Syncing.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9588\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-6-Auto-Syncing-620x198.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-6-Auto-Syncing-620x198.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Raid-6-Auto-Syncing.png 671w\" alt=\"Raid 6 Auto Syncing\" width=\"620\" height=\"198\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid 6 Auto Syncing<\/p>\n<\/div>\n<h3>Conclusion:<\/h3>\n<p>Here, we have seen how to setup\u00a0<strong>RAID 6<\/strong>\u00a0using four disks. This RAID level is one of the expensive setup with high redundancy. 
We will see how to set up a\u00a0<strong>Nested RAID 10<\/strong>\u00a0and much more in the next articles.<\/p>\n<h1 class=\"post-title\">Setting Up RAID 10 or 1+0 (Nested) in Linux \u2013 Part 6<\/h1>\n<p><strong>RAID 10<\/strong>\u00a0is a combination of\u00a0<strong>RAID 0<\/strong>\u00a0and\u00a0<strong>RAID 1<\/strong>. To set up RAID 10, we need at least 4 disks. In our earlier articles, we\u2019ve seen how to set up RAID 0 and RAID 1, each with a minimum of 2 disks.<\/p>\n<p>Here we will use both RAID 0 and RAID 1 to perform a RAID 10 setup with a minimum of 4 drives. Assume that we\u2019ve some data saved to a logical volume created with RAID 10. For example, if we save the data \u201c<strong>apple<\/strong>\u201d, it will be stored across all 4 disks by the following method.<\/p>\n<div id=\"attachment_9860\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/raid10.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9860\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/raid10.jpg\" alt=\"Create Raid 10 in Linux\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create Raid 10 in Linux<\/p>\n<\/div>\n<p>Using\u00a0<strong>RAID 0<\/strong>, \u201c<b>A<\/b>\u201d is written to the first disk and \u201c<b>p<\/b>\u201d to the second disk, then the next \u201c<b>p<\/b>\u201d to the first disk and \u201c<b>l<\/b>\u201d to the second disk. Then \u201c<b>e<\/b>\u201d goes to the first disk, and the round-robin process continues to save the data. From this we learn that RAID 0 writes half of the data to the first disk and the other half to the second disk.<\/p>\n<p>In the\u00a0<strong>RAID 1<\/strong>\u00a0method, the same data is written to the other 2 disks as follows. 
\u201c<b>A<\/b>\u201d is written to both the first and second disks, \u201c<b>P<\/b>\u201d is written to both disks, and the next \u201c<b>P<\/b>\u201d again to both disks. Thus RAID 1 writes to both disks, and this continues in a round-robin process.<\/p>\n<p>Now you know how RAID 10 works by combining RAID 0 and RAID 1. If we have 4 disks of 20 GB each, that is 80 GB in total, but we will get only 40 GB of storage capacity; half of the total capacity is lost to building RAID 10.<\/p>\n<h4>Pros and Cons of RAID 10<\/h4>\n<ol>\n<li>Gives better performance.<\/li>\n<li>We will lose two disks\u2019 worth of capacity in RAID 10.<\/li>\n<li>Reading and writing are very good, because it writes to and reads from all 4 disks at the same time.<\/li>\n<li>It can be used for database solutions that need high I\/O disk writes.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<p>In RAID 10, we need a minimum of 4 disks: 2 disks for striping and 2 disks for mirroring. Like I said before, RAID 10 is just a combination of RAID 0 &amp; 1. 
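The capacity arithmetic above can be checked with plain shell arithmetic: with equal-sized disks, usable RAID 10 capacity is simply the raw total divided by two.

```shell
# 4 disks of 20 GB each: raw capacity vs usable RAID 10 capacity.
disks=4
size_gb=20
raw=$(( disks * size_gb ))   # total raw capacity in GB
usable=$(( raw / 2 ))        # mirroring halves the usable space
echo "raw=${raw}GB usable=${usable}GB"
```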
If we need to extend the RAID group, we must add a minimum of 4 more disks.<\/p>\n<h5>My Server Setup<\/h5>\n<pre>Operating System :\tCentOS 6.5 Final\r\nIP Address\t \t:\t192.168.0.229\r\nHostname\t \t:\trd10.tecmintlocal.com\r\nDisk 1 [20GB]\t \t:\t\/dev\/sdb\r\nDisk 2 [20GB]\t \t:\t\/dev\/sdc\r\nDisk 3 [20GB]\t \t:\t\/dev\/sdd\r\nDisk 4 [20GB]\t \t:\t\/dev\/sde\r\n<\/pre>\n<p>There are two ways to set up RAID 10. I\u2019m going to show you both methods here, but I suggest you follow the first one, which makes setting up a RAID 10 a lot easier.<\/p>\n<h3>Method 1: Setting Up Raid 10<\/h3>\n<p><strong>1.<\/strong>\u00a0First, verify that all 4 added disks are detected, using the following command.<\/p>\n<pre># ls -l \/dev | grep sd\r\n<\/pre>\n<p><strong>2.<\/strong>\u00a0Once the four disks are detected, check whether any RAID already exists on the drives before creating a new one.<\/p>\n<pre># mdadm -E \/dev\/sd[b-e]\r\n# mdadm --examine \/dev\/sdb \/dev\/sdc \/dev\/sdd \/dev\/sde\r\n<\/pre>\n<div id=\"attachment_9844\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-4-Added-Disks.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9844\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Verify-4-Added-Disks.png\" alt=\"Verify 4 Added Disks\" width=\"441\" height=\"332\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Verify 4 Added Disks<\/p>\n<\/div>\n<p><strong>Note<\/strong>: In the above output, you can see there isn\u2019t any super-block detected yet, which means no RAID is defined on any of the 4 drives.<\/p>\n<h4>Step 1: Drive Partitioning for RAID<\/h4>\n<p><strong>3.<\/strong>\u00a0Now create a new partition on all 4 disks (\/dev\/sdb, \/dev\/sdc, \/dev\/sdd and \/dev\/sde) using the \u2018fdisk\u2019 tool.<\/p>\n<pre># fdisk \/dev\/sdb\r\n# fdisk \/dev\/sdc\r\n# fdisk 
\/dev\/sdd\r\n# fdisk \/dev\/sde\r\n<\/pre>\n<h5>Create \/dev\/sdb Partition<\/h5>\n<p>Let me show you how to partition one of the disks (\/dev\/sdb) using fdisk; these steps are the same for all the other disks too.<\/p>\n<pre># fdisk \/dev\/sdb\r\n<\/pre>\n<p>Please use the below steps for creating a new partition on the\u00a0<strong>\/dev\/sdb<\/strong>\u00a0drive.<\/p>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 to create a new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for Primary partition.<\/li>\n<li>Then choose \u2018<strong>1<\/strong>\u2018 to be the first partition.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 to print the created partition.<\/li>\n<li>Press \u2018<strong>t<\/strong>\u2018 to change the type; if we need to see every available type, press \u2018<strong>L<\/strong>\u2018.<\/li>\n<li>Here, we are selecting \u2018<strong>fd<\/strong>\u2018 as the type, since this partition is for RAID.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 again to print the changes we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9846\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-sdb-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9846\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-sdb-Partition-305x450.png\" sizes=\"auto, (max-width: 305px) 100vw, 305px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-sdb-Partition-305x450.png 305w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-sdb-Partition.png 587w\" alt=\"Disk sdb Partition\" width=\"305\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Disk sdb Partition<\/p>\n<\/div>\n<p><strong>Note<\/strong>: Please use the same instructions above for creating partitions on the other 
disks (sdc, sdd and sde).<\/p>\n<p><strong>4.<\/strong>\u00a0After creating all 4 partitions, examine the drives again for any existing raid using the following command.<\/p>\n<pre># mdadm -E \/dev\/sd[b-e]\r\n# mdadm -E \/dev\/sd[b-e]1\r\n\r\nOR\r\n\r\n# mdadm --examine \/dev\/sdb \/dev\/sdc \/dev\/sdd \/dev\/sde\r\n# mdadm --examine \/dev\/sdb1 \/dev\/sdc1 \/dev\/sdd1 \/dev\/sde1\r\n<\/pre>\n<div id=\"attachment_9847\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-All-Disks-for-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9847\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-All-Disks-for-Raid.png\" alt=\"Check All Disks for Raid\" width=\"498\" height=\"421\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check All Disks for Raid<\/p>\n<\/div>\n<p><strong>Note<\/strong>: The above output shows that there isn\u2019t any super-block detected on the four newly created partitions, which means we can move forward and create RAID 10 on these drives.<\/p>\n<h4>Step 2: Creating \u2018md\u2019 RAID Device<\/h4>\n<p><strong>5.<\/strong>\u00a0Now it\u2019s time to create an \u2018md\u2019 device (i.e. \/dev\/md0) using the \u2018mdadm\u2019 raid management tool. 
Before creating the device, your system must have the \u2018mdadm\u2019 tool installed; if not, install it first.<\/p>\n<pre># yum install mdadm\t\t[on RedHat systems]\r\n# apt-get install mdadm \t[on Debian systems]\r\n<\/pre>\n<p>Once the \u2018mdadm\u2019 tool is installed, you can create an \u2018md\u2019 raid device using the following command.<\/p>\n<pre># mdadm --create \/dev\/md0 --level=10 --raid-devices=4 \/dev\/sd[b-e]1\r\n<\/pre>\n<p><strong>6.<\/strong>\u00a0Next, verify the newly created raid device using the \u2018cat\u2019 command.<\/p>\n<pre># cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9848\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-raid-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9848\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-raid-Device-620x214.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-raid-Device-620x214.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-raid-Device.png 753w\" alt=\"Create md raid Device\" width=\"620\" height=\"214\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create md raid Device<\/p>\n<\/div>\n<p><strong>7.<\/strong>\u00a0Next, examine all 4 drives using the command below. Its output will be long, as it displays the information of all 4 disks.<\/p>\n<pre># mdadm --examine \/dev\/sd[b-e]1\r\n<\/pre>\n<p><strong>8.<\/strong>\u00a0
Next, check the details of the RAID array with the help of the following command.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9849\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Array-Details.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9849\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Array-Details-471x450.png\" sizes=\"auto, (max-width: 471px) 100vw, 471px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Array-Details-471x450.png 471w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-Array-Details.png 649w\" alt=\"Check Raid Array Details\" width=\"471\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid Array Details<\/p>\n<\/div>\n<p><strong>Note<\/strong>: You can see in the above results that the RAID status is active and re-syncing.<\/p>\n<h4>Step 3: Creating Filesystem<\/h4>\n<p><strong>9.<\/strong>\u00a0Create an ext4 file system for \u2018md0\u2019 and mount it under \u2018<strong>\/mnt\/raid10<\/strong>\u2018. 
Here, I\u2019ve used ext4, but you can use any filesystem type if you want.<\/p>\n<pre># mkfs.ext4 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9850\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-Filesystem.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9850\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-Filesystem-611x450.png\" sizes=\"auto, (max-width: 611px) 100vw, 611px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-Filesystem-611x450.png 611w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-md-Filesystem.png 639w\" alt=\"Create md Filesystem\" width=\"611\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create md Filesystem<\/p>\n<\/div>\n<p><strong>10.<\/strong>\u00a0After creating filesystem, mount the created file-system under \u2018<strong>\/mnt\/raid10<\/strong>\u2018 and list the contents of the mount point using \u2018ls -l\u2019 command.<\/p>\n<pre># mkdir \/mnt\/raid10\r\n# mount \/dev\/md0 \/mnt\/raid10\/\r\n# ls -l \/mnt\/raid10\/\r\n<\/pre>\n<p>Next, add some files under mount point and append some text in any one of the file and check the content.<\/p>\n<pre># touch \/mnt\/raid10\/raid10_files.txt\r\n# ls -l \/mnt\/raid10\/\r\n# echo \"raid 10 setup with 4 disks\" &gt; \/mnt\/raid10\/raid10_files.txt\r\n# cat \/mnt\/raid10\/raid10_files.txt\r\n<\/pre>\n<div id=\"attachment_9851\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-md-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9851\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-md-Device-598x450.png\" sizes=\"auto, (max-width: 598px) 100vw, 598px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-md-Device-598x450.png 598w, 
https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Mount-md-Device.png 641w\" alt=\"Mount md Device\" width=\"598\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Mount md Device<\/p>\n<\/div>\n<p><strong>11.<\/strong>\u00a0For automounting, open the \u2018<strong>\/etc\/fstab<\/strong>\u2018 file and append the entry below; the mount point may differ according to your environment. Save and quit using :wq.<\/p>\n<pre># vim \/etc\/fstab\r\n\r\n\/dev\/md0                \/mnt\/raid10              ext4    defaults        0 0\r\n<\/pre>\n<div id=\"attachment_9852\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/AutoMount-md-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9852\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/AutoMount-md-Device-620x244.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/AutoMount-md-Device-620x244.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/AutoMount-md-Device.png 790w\" alt=\"AutoMount md Device\" width=\"620\" height=\"244\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">AutoMount md Device<\/p>\n<\/div>\n<p><strong>12.<\/strong>\u00a0Next, verify the \u2018<strong>\/etc\/fstab<\/strong>\u2018 file for any errors before restarting the system, using the \u2018<strong>mount -av<\/strong>\u2018 command.<\/p>\n<pre># mount -av\r\n<\/pre>\n<div id=\"attachment_9853\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Errors-in-Fstab.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9853\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Errors-in-Fstab.png\" alt=\"Check Errors in Fstab\" width=\"590\" height=\"195\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p 
class=\"wp-caption-text\">Check Errors in Fstab<\/p>\n<\/div>\n<h4>Step 4: Save RAID Configuration<\/h4>\n<p><strong>13.<\/strong>\u00a0By default RAID don\u2019t have a config file, so we need to save it manually after making all the above steps, to preserve these settings during system boot.<\/p>\n<pre># mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n<\/pre>\n<div id=\"attachment_9854\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid10-Configuration.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9854\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid10-Configuration-620x124.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid10-Configuration-620x124.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Save-Raid10-Configuration.png 791w\" alt=\"Save Raid10 Configuration\" width=\"620\" height=\"124\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Save Raid10 Configuration<\/p>\n<\/div>\n<p>That\u2019s it, we have created RAID 10 using method 1, this method is the easier one. Now let\u2019s move forward to setup RAID 10 using method 2.<\/p>\n<h3>Method 2: Creating RAID 10<\/h3>\n<p><strong>1.<\/strong>\u00a0In method 2, we have to define 2 sets of RAID 1 and then we need to define a RAID 0 using those created RAID 1 sets. 
Here, what we will do is first create 2 mirrors (RAID 1) and then stripe over them (RAID 0).<\/p>\n<p>First, list the disks which are available for creating RAID 10.<\/p>\n<pre># ls -l \/dev | grep sd\r\n<\/pre>\n<div id=\"attachment_9855\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/List-4-Devices.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9855\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/List-4-Devices.png\" alt=\"List 4 Devices\" width=\"440\" height=\"202\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">List 4 Devices<\/p>\n<\/div>\n<p><strong>2.<\/strong>\u00a0Partition all 4 disks using the \u2018fdisk\u2019 command. For partitioning, you can follow\u00a0<b>#step 3<\/b>\u00a0above.<\/p>\n<pre># fdisk \/dev\/sdb\r\n# fdisk \/dev\/sdc\r\n# fdisk \/dev\/sdd\r\n# fdisk \/dev\/sde\r\n<\/pre>\n<p><strong>3.<\/strong>\u00a0After partitioning all 4 disks, examine the disks for any existing raid blocks.<\/p>\n<pre># mdadm --examine \/dev\/sd[b-e]\r\n# mdadm --examine \/dev\/sd[b-e]1\r\n<\/pre>\n<div id=\"attachment_9856\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Examine-4-Disks.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9856\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Examine-4-Disks.png\" alt=\"Examine 4 Disks\" width=\"505\" height=\"386\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Examine 4 Disks<\/p>\n<\/div>\n<h4>Step 1: Creating RAID 1<\/h4>\n<p><strong>4.<\/strong>\u00a0First let me create 2 sets of RAID 1: one using \u2018sdb1\u2019 and \u2018sdc1\u2019, and the other using \u2018sdd1\u2019 &amp; \u2018sde1\u2019.<\/p>\n<pre># mdadm --create \/dev\/md1 --metadata=1.2 --level=1 --raid-devices=2 \/dev\/sd[b-c]1\r\n# mdadm --create \/dev\/md2 --metadata=1.2 --level=1 
--raid-devices=2 \/dev\/sd[d-e]1\r\n# cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9857\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9857\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-1-620x416.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-1-620x416.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-1.png 668w\" alt=\"Creating Raid 1\" width=\"620\" height=\"416\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Creating Raid 1<\/p>\n<\/div>\n<div id=\"attachment_9858\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Details-of-Raid-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9858\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Details-of-Raid-1.png\" alt=\"Check Details of Raid 1\" width=\"388\" height=\"220\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Details of Raid 1<\/p>\n<\/div>\n<h4>Step 2: Creating RAID 0<\/h4>\n<p><strong>5.<\/strong>\u00a0Next, create the RAID 0 using md1 and md2 devices.<\/p>\n<pre># mdadm --create \/dev\/md0 --level=0 --raid-devices=2 \/dev\/md1 \/dev\/md2\r\n# cat \/proc\/mdstat\r\n<\/pre>\n<div id=\"attachment_9859\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-0.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9859\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-0-620x304.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-0-620x304.png 
620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Creating-Raid-0.png 677w\" alt=\"Creating Raid 0\" width=\"620\" height=\"304\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Creating Raid 0<\/p>\n<\/div>\n<h4>Step 3: Save RAID Configuration<\/h4>\n<p><strong>6.<\/strong>\u00a0We need to save the configuration under \u2018<strong>\/etc\/mdadm.conf<\/strong>\u2018 so that all raid devices are loaded on every reboot.<\/p>\n<pre># mdadm --detail --scan --verbose &gt;&gt; \/etc\/mdadm.conf\r\n<\/pre>\n<p>After this, we need to follow step 3 (Creating Filesystem) of method 1.<\/p>\n<p>That\u2019s it! We have created RAID 1+0 using method 2. We lose two disks\u2019 worth of space here, but the performance is excellent compared to any other raid setup.<\/p>\n<h3>Conclusion<\/h3>\n<p>Here we have created RAID 10 using two methods. RAID 10 offers both good performance and redundancy. Hope this helps you understand the RAID 10 nested raid level. We will see how to grow an existing raid array and much more in upcoming articles.<\/p>\n<h1 class=\"post-title\">Growing an Existing RAID Array and Removing Failed Disks in Raid \u2013 Part 7<\/h1>\n<p>Newbies often get confused by the word array. An array is just a collection of disks; in other words, we can call an array a set or group, just like a carton holding 6 eggs. Likewise, a RAID array contains a number of disks: it may be 2, 4, 6, 8, 12, 16 and so on. Hope now you know what an array is.<\/p>\n<p>Here we will see how to grow (extend) an existing array or raid group. For example, if we are using 2 disks in an array to form a raid 1 set and in some situation we need more space in that group, we can extend the size of the array using the\u00a0<b>mdadm --grow<\/b>\u00a0command, just by adding a disk to the existing array. 
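The grow operation just described boils down to two mdadm calls, sketched here with illustrative array and partition names (a 2-disk RAID 1 on /dev/md0 and a freshly partitioned /dev/sdd1 are assumed):

```shell
# First attach the new partition to the array; it joins as a spare.
mdadm --manage /dev/md0 --add /dev/sdd1
# Then grow the array so the spare becomes an active member
# (here raising the active-device count from 2 to 3).
mdadm --grow /dev/md0 --raid-devices=3
```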
After growing (adding a disk to an existing array), we will see how to remove a failed disk from the array.<\/p>\n<div id=\"attachment_9810\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Growing-Raid-Array.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9810\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Growing-Raid-Array.jpg\" alt=\"Grow Raid Array in Linux\" width=\"600\" height=\"400\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Growing Raid Array and Removing Failed Disks<\/p>\n<\/div>\n<p>Assume that one of the disks is getting weak and needs to be removed. Until it fails we can leave it in use, but we should add a spare drive and grow the mirror before it fails, because we need to protect our data. Once the weak disk fails, we can remove it from the array; this is the concept we are going to see in this topic.<\/p>\n<h4>Features of RAID Growth<\/h4>\n<ol>\n<li>We can grow (extend) the size of any raid set.<\/li>\n<li>We can remove a faulty disk after growing the raid array with a new disk.<\/li>\n<li>We can grow the raid array without any downtime.<\/li>\n<\/ol>\n<h4>Requirements<\/h4>\n<ol>\n<li>To grow a RAID array, we need an existing RAID set (array).<\/li>\n<li>We need extra disks to grow the array.<\/li>\n<li>Here I\u2019m using 1 disk to grow the existing array.<\/li>\n<\/ol>\n<p>Before we learn about growing and recovering an array, we have to know the basics of RAID levels and setups. 
Follow the links below to learn about those setups.<\/p>\n<ol>\n<li><a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Understanding Basic RAID Concepts \u2013 Part 1<\/a><\/li>\n<li><a href=\"https:\/\/www.tecmint.com\/create-raid0-in-linux\/\" target=\"_blank\" rel=\"noopener\">Creating a Software Raid 0 in Linux \u2013 Part 2<\/a><\/li>\n<\/ol>\n<h5>My Server Setup<\/h5>\n<pre>Operating System \t:\tCentOS 6.5 Final\r\nIP Address\t \t:\t192.168.0.230\r\nHostname\t\t:\tgrow.tecmintlocal.com\r\n2 Existing Disks \t:\t1 GB\r\n1 Additional Disk\t:\t1 GB\r\n<\/pre>\n<p><center>Here, my existing RAID has 2 disks of 1 GB each, and we are now adding one more 1 GB disk to our existing raid array.<\/center><\/p>\n<h3>Growing an Existing RAID Array<\/h3>\n<p><strong>1.<\/strong>\u00a0Before growing an array, first list the existing raid array using the following command.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9799\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Existing-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9799\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Existing-Raid-Array-598x450.png\" sizes=\"auto, (max-width: 598px) 100vw, 598px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Existing-Raid-Array-598x450.png 598w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Existing-Raid-Array.png 646w\" alt=\"Check Existing Raid Array\" width=\"598\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Existing Raid Array<\/p>\n<\/div>\n<p><strong>Note<\/strong>: The above output shows that I already have two disks in the raid array at level raid1. 
Now we are adding one more disk to the existing array.<\/p>\n<p><strong>2.<\/strong>\u00a0Now let\u2019s add the new disk \u201c<strong>sdd<\/strong>\u201d and create a partition using the \u2018<strong>fdisk<\/strong>\u2018 command.<\/p>\n<pre># fdisk \/dev\/sdd\r\n<\/pre>\n<p>Please use the below instructions to create a partition on the\u00a0<strong>\/dev\/sdd<\/strong>\u00a0drive.<\/p>\n<ol>\n<li>Press \u2018<strong>n<\/strong>\u2018 to create a new partition.<\/li>\n<li>Then choose \u2018<strong>P<\/strong>\u2018 for Primary partition.<\/li>\n<li>Then choose \u2018<strong>1<\/strong>\u2018 to be the first partition.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 to print the created partition.<\/li>\n<li>Press \u2018<strong>t<\/strong>\u2018 to change the type; here, we are selecting \u2018<strong>fd<\/strong>\u2018 as the type, since this partition is for RAID.<\/li>\n<li>Next press \u2018<strong>p<\/strong>\u2018 again to print the changes we have made.<\/li>\n<li>Use \u2018<strong>w<\/strong>\u2018 to write the changes.<\/li>\n<\/ol>\n<div id=\"attachment_9800\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-New-sdd-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9800\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-New-sdd-Partition-383x450.png\" sizes=\"auto, (max-width: 383px) 100vw, 383px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-New-sdd-Partition-383x450.png 383w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Create-New-sdd-Partition.png 621w\" alt=\"Create New Partition in Linux\" width=\"383\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Create New sdd Partition<\/p>\n<\/div>\n<p><strong>3.<\/strong>\u00a0Once the new\u00a0<strong>sdd<\/strong>\u00a0partition is created, you can verify it using the below command.<\/p>\n<pre># ls -l \/dev\/ | 
grep sd\r\n<\/pre>\n<div id=\"attachment_9801\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-sdd-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9801\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-sdd-Partition.png\" alt=\"Confirm sdd Partition\" width=\"436\" height=\"218\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Confirm sdd Partition<\/p>\n<\/div>\n<p><strong>4.<\/strong>\u00a0Next, examine the newly created partition for any existing RAID metadata before adding it to the array.<\/p>\n<pre># mdadm --examine \/dev\/sdd1\r\n<\/pre>\n<div id=\"attachment_9802\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-sdd-Partition.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9802\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Check-Raid-on-sdd-Partition.png\" alt=\"Check Raid on sdd Partition\" width=\"413\" height=\"115\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid on sdd Partition<\/p>\n<\/div>\n<p><strong>Note<\/strong>: The above output shows that no superblock was detected on the disk, which means we can proceed to add it to the existing array.<\/p>\n<p><strong>4.<\/strong>\u00a0To add the new partition\u00a0<strong>\/dev\/sdd1<\/strong>\u00a0to the existing array\u00a0<strong>md0<\/strong>, use the following command.<\/p>\n<pre># mdadm --manage \/dev\/md0 --add \/dev\/sdd1\r\n<\/pre>\n<div id=\"attachment_9803\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Add-Disk-To-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9803\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Add-Disk-To-Raid-Array.png\" alt=\"Add Disk To Raid-Array\" width=\"451\" height=\"129\" 
data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Add Disk To Raid-Array<\/p>\n<\/div>\n<p><strong>5.<\/strong>\u00a0Once the new disk has been added, verify that it appears in our array using the following command.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9804\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Disk-Added-To-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9804\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Disk-Added-To-Raid-575x450.png\" sizes=\"auto, (max-width: 575px) 100vw, 575px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Disk-Added-To-Raid-575x450.png 575w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Disk-Added-To-Raid.png 646w\" alt=\"Confirm Disk Added to Raid\" width=\"575\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Confirm Disk Added to Raid<\/p>\n<\/div>\n<p><strong>Note<\/strong>: In the above output, you can see the drive has been added as a spare. 
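<\/p>
<p>To see each member\u2019s role without scrolling through the full report, the device table at the bottom of the <code>mdadm --detail<\/code> output can be filtered with a little awk. A minimal sketch, run here against made-up sample rows standing in for the real output:<\/p>

```shell
# Sketch: print each member device and its state from sample
# `mdadm --detail` output (the sample rows are illustrative).
detail='   Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        -      spare   /dev/sdd1'
printf '%s\n' "$detail" | awk 'NR>1 {print $NF, $(NF-1)}'
```

<p>On a real system you would pipe <code>mdadm --detail \/dev\/md0<\/code> into the same awk filter.<\/p>
<p>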
The array currently has 2 active devices, but we want 3 devices in the array, so we need to grow it.<\/p>\n<p><strong>6.<\/strong>\u00a0To grow the array, use the following command.<\/p>\n<pre># mdadm --grow --raid-devices=3 \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9805\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9805\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Raid-Array.png\" alt=\"Grow Raid Array\" width=\"441\" height=\"106\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Grow Raid Array<\/p>\n<\/div>\n<p>Now we can see that the third disk (<strong>sdd1<\/strong>) has been added to the array; once added, it syncs the data from the other two disks.<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9806\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9806\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Raid-Array-578x450.png\" sizes=\"auto, (max-width: 578px) 100vw, 578px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Raid-Array-578x450.png 578w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Confirm-Raid-Array.png 653w\" alt=\"Confirm Raid Array\" width=\"578\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Confirm Raid Array<\/p>\n<\/div>\n<p><strong>Note<\/strong>: For large disks, syncing the contents can take hours. 
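<\/p>
<p>As a rough back-of-the-envelope check, resync time is approximately the member size divided by the rebuild speed that\u00a0<strong>\/proc\/mdstat<\/strong>\u00a0reports. A hedged sketch (the figures below are illustrative, not measured):<\/p>

```shell
# Sketch: rough resync-time estimate in whole minutes,
# given member size in MiB and rebuild speed in MiB/s.
estimate_resync_minutes() {
  local size_mib=$1 speed_mib_s=$2
  echo $(( size_mib / speed_mib_s / 60 ))
}
estimate_resync_minutes 953344 100   # ~1 TB member at 100 MiB/s
```

<p>At 100 MiB\/s, a roughly 1 TB member takes about two and a half hours, which is why growing arrays of multi-terabyte disks is usually scheduled off-hours.<\/p>
<p>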
Here I have used 1GB virtual disks, so it completes within seconds.<\/p>\n<h3>Removing Disks from Array<\/h3>\n<p><strong>7.<\/strong>\u00a0After the data has been synced to the new disk \u2018<strong>sdd1<\/strong>\u2018 from the other two disks, all three disks now have the same contents.<\/p>\n<p>As mentioned earlier, let\u2019s assume that one of the disks is weak and needs to be removed before it fails. So, assume disk \u2018<strong>sdc1<\/strong>\u2018 is weak and needs to be removed from the existing array.<\/p>\n<p>Before removing a disk we have to mark it as failed; only then can we remove it.<\/p>\n<pre># mdadm --fail \/dev\/md0 \/dev\/sdc1\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9807\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-Fail-in-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9807\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-Fail-in-Raid-Array-492x450.png\" sizes=\"auto, (max-width: 492px) 100vw, 492px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-Fail-in-Raid-Array-492x450.png 492w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Disk-Fail-in-Raid-Array.png 650w\" alt=\"Disk Fail in Raid Array\" width=\"492\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Disk Fail in Raid Array<\/p>\n<\/div>\n<p>From the above output, we can clearly see that the disk is marked as faulty at the bottom. 
Even though it\u2019s faulty, we can see that the raid devices count is still\u00a0<strong>3<\/strong>, failed is\u00a0<strong>1<\/strong>\u00a0and the state is degraded.<\/p>\n<p>Now we have to remove the faulty drive from the array and reshape the array to\u00a0<strong>2<\/strong>\u00a0devices, so that the raid devices count is set back to\u00a0<strong>2<\/strong>\u00a0as before.<\/p>\n<pre># mdadm --remove \/dev\/md0 \/dev\/sdc1\r\n<\/pre>\n<div id=\"attachment_9808\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Remove-Disk-in-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-9808\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Remove-Disk-in-Raid-Array.png\" alt=\"Remove Disk in Raid Array\" width=\"420\" height=\"116\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Remove Disk in Raid Array<\/p>\n<\/div>\n<p><strong>8.<\/strong>\u00a0Once the faulty drive is removed, we have to reshape the raid array to<strong>\u00a02<\/strong>\u00a0disks.<\/p>\n<pre># mdadm --grow --raid-devices=2 \/dev\/md0\r\n# mdadm --detail \/dev\/md0\r\n<\/pre>\n<div id=\"attachment_9809\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Disks-in-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-9809\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Disks-in-Raid-Array-538x450.png\" sizes=\"auto, (max-width: 538px) 100vw, 538px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Disks-in-Raid-Array-538x450.png 538w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2014\/11\/Grow-Disks-in-Raid-Array.png 646w\" alt=\"Grow Disks in Raid Array\" width=\"538\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Grow Disks in Raid Array<\/p>\n<\/div>\n<p>From the above output, you can see that our array now has only 
2 devices. If you need to grow the array again, follow the same steps as described above. If you add a drive as a spare, then when a disk fails it will automatically become active and start rebuilding.<\/p>\n<h3>Conclusion<\/h3>\n<p>In this article, we\u2019ve seen how to grow an existing raid set and how to remove a faulty disk from an array after re-syncing the existing contents. All these steps can be done without any downtime; during data syncing, users, files and running applications are not affected.<\/p>\n<p>In the next article I will show you how to manage RAID arrays; till then, stay tuned for updates and don\u2019t forget to add your comments.<\/p>\n<h1 class=\"post-title\">How to Recover Data and Rebuild Failed Software RAID\u2019s \u2013 Part 8<\/h1>\n<p>In the previous articles of this\u00a0<a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">RAID series<\/a>\u00a0you went from zero to RAID hero. We reviewed several software RAID configurations and explained the essentials of each one, along with the reasons why you would lean towards one or the other depending on your specific scenario.<\/p>\n<div id=\"attachment_16012\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-Rebuild-Failed-Software-RAID.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16012\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-Rebuild-Failed-Software-RAID-620x297.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-Rebuild-Failed-Software-RAID-620x297.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-Rebuild-Failed-Software-RAID.png 720w\" alt=\"Recover Rebuild Failed Software RAID's\" width=\"620\" height=\"297\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Recover 
Rebuild Failed Software RAID\u2019s \u2013 Part 8<\/p>\n<\/div>\n<p>In this guide we will discuss how to rebuild a software RAID array without data loss in the event of a disk failure. For brevity, we will only consider a\u00a0<strong>RAID 1<\/strong>\u00a0setup \u2013 but the concepts and commands apply to all cases alike.<\/p>\n<h4>RAID Testing Scenario<\/h4>\n<p>Before proceeding further, please make sure you have set up a\u00a0<strong>RAID 1<\/strong>\u00a0array following the instructions provided in Part 3 of this series:\u00a0<a href=\"https:\/\/www.tecmint.com\/create-raid1-in-linux\/\" target=\"_blank\" rel=\"noopener\">How to set up RAID 1 (Mirror) in Linux<\/a>.<\/p>\n<p>The only variations in our present case will be:<\/p>\n<p><strong>1)<\/strong>\u00a0a different version of CentOS (v7) than the one used in that article (v6.5), and<br \/>\n<strong>2)<\/strong>\u00a0different disk sizes for\u00a0<strong>\/dev\/sdb<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc<\/strong>\u00a0(8 GB each).<\/p>\n<p>In addition, if\u00a0<strong>SELinux<\/strong>\u00a0is enabled in enforcing mode, you will need to add the corresponding labels to the directory where you\u2019ll mount the RAID device. 
Otherwise, you\u2019ll run into this warning message while attempting to mount it:<\/p>\n<div id=\"attachment_15999\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/SELinux-RAID-Mount-Error.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-15999\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/SELinux-RAID-Mount-Error.png\" alt=\"SELinux RAID Mount Error\" width=\"607\" height=\"173\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">SELinux RAID Mount Error<\/p>\n<\/div>\n<p>You can fix this by running:<\/p>\n<pre># restorecon -R \/mnt\/raid1\r\n<\/pre>\n<h3>Setting up RAID Monitoring<\/h3>\n<p>There is a variety of reasons why a storage device can fail (SSDs have greatly reduced the chances of this happening, though), but regardless of the cause you can be sure that issues can occur anytime and you need to be prepared to replace the failed part and to ensure the availability and integrity of your data.<\/p>\n<p>A word of advice first. 
Even when you can inspect\u00a0<strong>\/proc\/mdstat<\/strong>\u00a0in order to check the status of your RAIDs, there\u2019s a better and time-saving method that consists of running\u00a0<strong>mdadm<\/strong>\u00a0in monitor + scan mode, which will send alerts via email to a predefined recipient.<\/p>\n<p>To set this up, add the following line in\u00a0<strong>\/etc\/mdadm.conf<\/strong>:<\/p>\n<pre>MAILADDR user@&lt;domain or localhost&gt;\r\n<\/pre>\n<p>In my case:<\/p>\n<pre>MAILADDR gacanepa@localhost\r\n<\/pre>\n<div id=\"attachment_16000\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/RAID-Monitoring-Email-Alerts.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16000\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/RAID-Monitoring-Email-Alerts-620x43.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/RAID-Monitoring-Email-Alerts-620x43.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/RAID-Monitoring-Email-Alerts.png 879w\" alt=\"RAID Monitoring Email Alerts\" width=\"620\" height=\"43\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">RAID Monitoring Email Alerts<\/p>\n<\/div>\n<p>To run\u00a0<strong>mdadm<\/strong>\u00a0in monitor + scan mode, add the following crontab entry as root:<\/p>\n<pre>@reboot \/sbin\/mdadm --monitor --scan --oneshot\r\n<\/pre>\n<p>By default,\u00a0<strong>mdadm<\/strong>\u00a0will check the RAID arrays every 60 seconds and send an alert if it finds an issue. 
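<\/p>
<p>For an ad-hoc check between alerts, the bracketed status field in\u00a0<strong>\/proc\/mdstat<\/strong>\u00a0(for example\u00a0<strong>[UU]<\/strong>\u00a0for a healthy two-disk mirror) shows an underscore for every missing or failed member. A minimal sketch, run here against sample text standing in for the real file:<\/p>

```shell
# Sketch: flag a degraded array by spotting "_" in the [UU]-style
# status field. The sample text stands in for /proc/mdstat.
mdstat_sample='md0 : active raid1 sdd1[2] sdb1[0]
      1047552 blocks super 1.2 [2/1] [U_]'
if printf '%s\n' "$mdstat_sample" | grep -q '\[U*_U*\]'; then
  echo degraded
else
  echo healthy
fi
```

<p>Swap the sample for <code>cat \/proc\/mdstat<\/code> on a real system; the same grep works for any RAID level md exposes.<\/p>
<p>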
You can modify this behavior by adding the\u00a0<code>--delay<\/code>\u00a0option to the crontab entry above along with the number of seconds (for example,\u00a0<code>--delay<\/code>\u00a01800 means 30 minutes).<\/p>\n<p>Finally, make sure you have a\u00a0<strong>Mail User Agent<\/strong>\u00a0(MUA) installed, such as\u00a0<a href=\"https:\/\/www.tecmint.com\/send-mail-from-command-line-using-mutt-command\/\" target=\"_blank\" rel=\"noopener\">mutt or mailx<\/a>. Otherwise, you will not receive any alerts.<\/p>\n<p>In a minute we will see what an alert sent by\u00a0<strong>mdadm<\/strong>\u00a0looks like.<\/p>\n<h3>Simulating and Replacing a failed RAID Storage Device<\/h3>\n<p>To simulate an issue with one of the storage devices in the RAID array, we will use the\u00a0<code>--manage<\/code>\u00a0and\u00a0<code>--set-faulty<\/code>\u00a0options as follows:<\/p>\n<pre># mdadm --manage --set-faulty \/dev\/md0 \/dev\/sdc1  \r\n<\/pre>\n<p>This will result in\u00a0<strong>\/dev\/sdc1<\/strong>\u00a0being marked as faulty, as we can see in\u00a0<strong>\/proc\/mdstat<\/strong>:<\/p>\n<div id=\"attachment_16001\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Stimulate-Issue-with-RAID-Storage.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16001\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Stimulate-Issue-with-RAID-Storage.png\" alt=\"Simulate Issue with RAID Storage\" width=\"534\" height=\"160\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Simulate Issue with RAID Storage<\/p>\n<\/div>\n<p>More importantly, let\u2019s see if we received an email alert with the same warning:<\/p>\n<div id=\"attachment_16002\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Email-Alert-on-Failed-RAID-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16002\" 
src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Email-Alert-on-Failed-RAID-Device-516x450.png\" sizes=\"auto, (max-width: 516px) 100vw, 516px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Email-Alert-on-Failed-RAID-Device-516x450.png 516w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Email-Alert-on-Failed-RAID-Device.png 710w\" alt=\"Email Alert on Failed RAID Device\" width=\"516\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Email Alert on Failed RAID Device<\/p>\n<\/div>\n<p>In this case, you will need to remove the device from the software RAID array:<\/p>\n<pre># mdadm \/dev\/md0 --remove \/dev\/sdc1\r\n<\/pre>\n<p>Then you can physically remove it from the machine and replace it with a spare part (<strong>\/dev\/sdd<\/strong>, where a partition of type\u00a0<strong>fd<\/strong>\u00a0has been previously created):<\/p>\n<pre># mdadm --manage \/dev\/md0 --add \/dev\/sdd1\r\n<\/pre>\n<p>Luckily for us, the system will automatically start rebuilding the array with the part that we just added. 
We can test this by marking\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0as faulty, removing it from the array, and making sure that the file\u00a0<strong>tecmint.txt<\/strong>\u00a0is still accessible at\u00a0<strong>\/mnt\/raid1<\/strong>:<\/p>\n<pre># mdadm --detail \/dev\/md0\r\n# mount | grep raid1\r\n# ls -l \/mnt\/raid1 | grep tecmint\r\n# cat \/mnt\/raid1\/tecmint.txt\r\n<\/pre>\n<div id=\"attachment_16003\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Rebuilding-RAID-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16003\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Rebuilding-RAID-Array-461x450.png\" sizes=\"auto, (max-width: 461px) 100vw, 461px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Rebuilding-RAID-Array-461x450.png 461w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Rebuilding-RAID-Array.png 567w\" alt=\"Confirm Rebuilding RAID Array\" width=\"461\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Confirm Rebuilding RAID Array<\/p>\n<\/div>\n<p>The image above clearly shows that after adding\u00a0<strong>\/dev\/sdd1<\/strong>\u00a0to the array as a replacement for\u00a0<strong>\/dev\/sdc1<\/strong>, the rebuilding of data was automatically performed by the system without intervention on our part.<\/p>\n<p>Though not strictly required, it\u2019s a great idea to have a spare device at hand so that the process of replacing the faulty device with a good drive can be done in a snap. 
To do that, let\u2019s re-add\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc1<\/strong>:<\/p>\n<pre># mdadm --manage \/dev\/md0 --add \/dev\/sdb1\r\n# mdadm --manage \/dev\/md0 --add \/dev\/sdc1\r\n<\/pre>\n<div id=\"attachment_16004\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Failed-Raid-Device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16004\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Failed-Raid-Device-486x450.png\" sizes=\"auto, (max-width: 486px) 100vw, 486px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Failed-Raid-Device-486x450.png 486w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Failed-Raid-Device.png 514w\" alt=\"Replace Failed Raid Device\" width=\"486\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Replace Failed Raid Device<\/p>\n<\/div>\n<h3>Recovering from a Redundancy Loss<\/h3>\n<p>As explained earlier,\u00a0<strong>mdadm<\/strong>\u00a0will automatically rebuild the data when one disk fails. But what happens if 2 disks in the array fail? 
Let\u2019s simulate such a scenario by marking\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdd1<\/strong>\u00a0as faulty:<\/p>\n<pre># umount \/mnt\/raid1\r\n# mdadm --manage --set-faulty \/dev\/md0 \/dev\/sdb1\r\n# mdadm --manage --set-faulty \/dev\/md0 \/dev\/sdd1\r\n# mdadm --stop \/dev\/md0\r\n<\/pre>\n<p>Attempting to re-create the array the same way it was originally created (or using the\u00a0<code>--assume-clean<\/code>\u00a0option) may result in data loss, so it should be left as a last resort.<\/p>\n<p>Let\u2019s try to recover the data from\u00a0<strong>\/dev\/sdb1<\/strong>, for example, into a similar disk partition (<strong>\/dev\/sde1<\/strong>\u00a0\u2013 note that this requires that you create a partition of type\u00a0<strong>fd<\/strong>\u00a0in\u00a0<strong>\/dev\/sde<\/strong>\u00a0before proceeding) using\u00a0<strong>ddrescue<\/strong>:<\/p>\n<pre># ddrescue -r 2 \/dev\/sdb1 \/dev\/sde1\r\n<\/pre>\n<div id=\"attachment_16006\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recovering-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16006\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recovering-Raid-Array.png\" alt=\"Recovering Raid Array\" width=\"597\" height=\"154\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Recovering Raid Array<\/p>\n<\/div>\n<p>Please note that up to this point, we haven\u2019t touched\u00a0<strong>\/dev\/sdb<\/strong>\u00a0or\u00a0<strong>\/dev\/sdd<\/strong>, the drives whose partitions were part of the RAID array.<\/p>\n<p>Now let\u2019s rebuild the array using\u00a0<strong>\/dev\/sde1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdf1<\/strong>:<\/p>\n<pre># mdadm --create \/dev\/md0 --level=mirror --raid-devices=2 \/dev\/sd[e-f]1\r\n<\/pre>\n<p>Please note that in a real situation, you will typically use the same device names as with the original array, that 
is,\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc1<\/strong>\u00a0after the failed disks have been replaced with new ones.<\/p>\n<p>In this article I have chosen to use extra devices to re-create the array with brand new disks and to avoid confusion with the original failed drives.<\/p>\n<p>When asked whether to continue writing to the array, type\u00a0<strong>Y<\/strong>\u00a0and press\u00a0<strong>Enter<\/strong>. The array should be started and you should be able to watch its progress with:<\/p>\n<pre># watch -n 1 cat \/proc\/mdstat\r\n<\/pre>\n<p>When the process completes, you should be able to access the content of your RAID:<\/p>\n<div id=\"attachment_16007\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Raid-Content.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16007\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Raid-Content.png\" alt=\"Confirm Raid Content\" width=\"440\" height=\"126\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Confirm Raid Content<\/p>\n<\/div>\n<h3>Summary<\/h3>\n<p>In this article we have reviewed how to recover from\u00a0<strong>RAID<\/strong>\u00a0failures and redundancy losses. 
However, you need to remember that this technology is a storage solution and\u00a0<strong>DOES NOT<\/strong>\u00a0replace backups.<\/p>\n<p>The principles explained in this guide apply to all RAID setups alike, as well as the concepts that we will cover in the next and final guide of this series (RAID management).<\/p>\n<h1 class=\"post-title\">How to Manage Software RAID\u2019s in Linux with \u2018Mdadm\u2019 Tool \u2013 Part 9<\/h1>\n<p>Regardless of your previous experience with RAID arrays, and whether you followed all of the tutorials in\u00a0<a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">this RAID series<\/a>\u00a0or not, managing software RAIDs in Linux is not a very complicated task once you have become acquainted with\u00a0<code>mdadm --manage<\/code>\u00a0command.<\/p>\n<div id=\"attachment_16237\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Manage-Raid-with-Mdadm-Tool-in-Linux.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16237\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Manage-Raid-with-Mdadm-Tool-in-Linux.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Manage-Raid-with-Mdadm-Tool-in-Linux.png 716w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Manage-Raid-with-Mdadm-Tool-in-Linux-620x297.png 620w\" alt=\"Manage Raid Devices with Mdadm in Linux\" width=\"620\" height=\"297\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Manage Raid Devices with Mdadm in Linux \u2013 Part 9<\/p>\n<\/div>\n<p>In this tutorial we will review the functionality provided by this tool so that you can have it handy when you need it.<\/p>\n<h4>RAID Testing Scenario<\/h4>\n<p>As in the last article of this series, we will use for simplicity a\u00a0<strong>RAID 1<\/strong>\u00a0(mirror) array which 
consists of two\u00a0<strong>8 GB<\/strong>\u00a0disks (<strong>\/dev\/sdb<\/strong>\u00a0and\u00a0<strong>\/dev\/sdc<\/strong>) and an initial spare device (<strong>\/dev\/sdd<\/strong>) to illustrate, but the commands and concepts listed herein apply to other types of setups as well. That said, feel free to go ahead and add this page to your browser\u2019s bookmarks, and let\u2019s get started.<\/p>\n<h3>Understanding mdadm Options and Usage<\/h3>\n<p>Fortunately,\u00a0<strong>mdadm<\/strong>\u00a0provides a built-in\u00a0<code>--help<\/code>\u00a0flag that offers explanations and documentation for each of the main options.<\/p>\n<p>Thus, let\u2019s start by typing:<\/p>\n<pre># mdadm --manage --help\r\n<\/pre>\n<p>to see the tasks that\u00a0<code>mdadm --manage<\/code>\u00a0will allow us to perform, and how:<\/p>\n<div id=\"attachment_16015\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/mdadm-Usage-in-Linux.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16015\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/mdadm-Usage-in-Linux.png\" alt=\"Manage RAID with mdadm Tool\" width=\"608\" height=\"431\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Manage RAID with mdadm Tool<\/p>\n<\/div>\n<p>As we can see in the above image, managing a RAID array involves performing the following tasks at one time or another:<\/p>\n<ol>\n<li>(Re)Adding a device to the array.<\/li>\n<li>Marking a device as faulty.<\/li>\n<li>Removing a faulty device from the array.<\/li>\n<li>Replacing the faulty device with a spare one.<\/li>\n<li>Starting an array that\u2019s partially built.<\/li>\n<li>Stopping an array.<\/li>\n<li>Marking an array as ro (read-only) or rw (read-write).<\/li>\n<\/ol>\n<h3>Managing RAID Devices with mdadm Tool<\/h3>\n<p>Note that if you omit the\u00a0<code>--manage<\/code>\u00a0option, mdadm assumes management mode anyway. 
Keep this fact in mind to avoid running into trouble further down the road.<\/p>\n<p>The highlighted text in the previous image shows the basic syntax to manage RAIDs:<\/p>\n<pre># mdadm --manage RAID options devices\r\n<\/pre>\n<p>Let\u2019s illustrate with a few examples.<\/p>\n<h6>\u200b Example 1: Add a device to the RAID array<\/h6>\n<p>You will typically add a new device when replacing a faulty one, or when you have a spare part that you want to have handy in case of a failure:<\/p>\n<pre># mdadm --manage \/dev\/md0 --add \/dev\/sdd1\r\n<\/pre>\n<div id=\"attachment_16016\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Add-Device-to-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16016\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Add-Device-to-Raid-Array-620x304.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Add-Device-to-Raid-Array-620x304.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Add-Device-to-Raid-Array.png 861w\" alt=\"Add Device to Raid Array\" width=\"620\" height=\"304\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Add Device to Raid Array<\/p>\n<\/div>\n<h6>\u200bExample 2: Marking a RAID device as faulty and removing it from the array<\/h6>\n<p>This is a mandatory step before logically removing the device from the array, and later physically pulling it out from the machine \u2013 in that order (if you miss one of these steps you may end up causing actual damage to the device):<\/p>\n<pre># mdadm --manage \/dev\/md0 --fail \/dev\/sdb1\r\n<\/pre>\n<p>Note how the spare device added in the previous example is used to automatically replace the failed disk. 
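<\/p>
<p>You can confirm the spare takeover numerically from the counter lines near the top of the <code>mdadm --detail<\/code> output. A minimal sketch against sample counter lines (the values are illustrative):<\/p>

```shell
# Sketch: extract the spare count from sample `mdadm --detail`
# counter lines (the sample text is illustrative).
counters='Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1'
printf '%s\n' "$counters" | awk -F' : ' '/Spare Devices/ {print "spares:", $2}'
```

<p>After a spare takes over for a failed disk, the spare count drops by one while the active count stays the same.<\/p>
<p>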
Not only that, but the\u00a0<a href=\"https:\/\/www.tecmint.com\/recover-data-and-rebuild-failed-software-raid\/\" target=\"_blank\" rel=\"noopener\">recovery and rebuilding of raid data<\/a>\u00a0start immediately as well:<\/p>\n<div id=\"attachment_16021\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16021\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data-452x450.png\" sizes=\"auto, (max-width: 452px) 100vw, 452px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data-452x450.png 452w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data-150x150.png 150w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data-160x160.png 160w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data-320x320.png 320w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Recover-and-Rebuild-Raid-Data.png 554w\" alt=\"Recover and Rebuild Raid Data\" width=\"452\" height=\"450\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Recover and Rebuild Raid Data<\/p>\n<\/div>\n<p>Once the device has been indicated as failed manually, it can be safely removed from the array:<\/p>\n<pre># mdadm --manage \/dev\/md0 --remove \/dev\/sdb1\r\n<\/pre>\n<h6>\u200bExample 3: Re-adding a device that was part of the array which had been removed previously<\/h6>\n<p>Up to this point, we have a working\u00a0<strong>RAID 1<\/strong>\u00a0array that consists of 2 active devices:\u00a0<strong>\/dev\/sdc1<\/strong>\u00a0and\u00a0<strong>\/dev\/sdd1<\/strong>. 
If we attempt to re-add\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0to\u00a0<strong>\/dev\/md0<\/strong>\u00a0right now:<\/p>\n<pre># mdadm --manage \/dev\/md0 --re-add \/dev\/sdb1\r\n<\/pre>\n<p>we will run into an error:<\/p>\n<pre><strong>mdadm: --re-add for \/dev\/sdb1 to \/dev\/md0 is not possible<\/strong>\r\n<\/pre>\n<p>because the array is already made up of the maximum possible number of drives. So we have 2 choices: a) add\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0as a spare, as shown in Example #1, or b) remove\u00a0<strong>\/dev\/sdd1<\/strong>\u00a0from the array and then re-add\u00a0<strong>\/dev\/sdb1<\/strong>.<\/p>\n<p>We choose option\u00a0<strong>b)<\/strong>, and will start by stopping the array to later reassemble it:<\/p>\n<pre># mdadm --stop \/dev\/md0\r\n# mdadm --assemble \/dev\/md0 \/dev\/sdb1 \/dev\/sdc1\r\n<\/pre>\n<p>If the above command does not successfully add\u00a0<strong>\/dev\/sdb1<\/strong>\u00a0back to the array, use the command from\u00a0<strong>Example #1<\/strong>\u00a0to do it.<\/p>\n<p>Although\u00a0<strong>mdadm<\/strong>\u00a0will initially detect the newly added device as a spare, it will start rebuilding the data and when it\u2019s done doing so, it should recognize the device to be an active part of the RAID:<\/p>\n<div id=\"attachment_16022\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Raid-Rebuild-Status.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16022\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Raid-Rebuild-Status.png\" alt=\"Raid Rebuild Status\" width=\"556\" height=\"282\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Raid Rebuild Status<\/p>\n<\/div>\n<h6>Example 4: Replace a Raid device with a specific disk<\/h6>\n<p>Replacing a disk in the array with a spare one is as easy as:<\/p>\n<pre># mdadm --manage \/dev\/md0 --replace \/dev\/sdb1 --with \/dev\/sdd1\r\n<\/pre>\n<div 
id=\"attachment_16023\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Raid-device.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16023\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Raid-device-620x52.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Raid-device-620x52.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Replace-Raid-device.png 636w\" alt=\"Replace Raid Device\" width=\"620\" height=\"52\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Replace Raid Device<\/p>\n<\/div>\n<p>This results in the device following the\u00a0<code>--with<\/code>\u00a0switch being added to the RAID, while the disk indicated through\u00a0<code>--replace<\/code>\u00a0is marked as faulty:<\/p>\n<div id=\"attachment_16024\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Check-Raid-Rebuild-Status.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16024\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Check-Raid-Rebuild-Status.png\" alt=\"Check Raid Rebuild Status\" width=\"505\" height=\"231\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Check Raid Rebuild Status<\/p>\n<\/div>\n<h6>Example 5: Marking a RAID array as ro or rw<\/h6>\n<p>After creating the array, you most likely created a filesystem on top of it and mounted it on a directory in order to use it. 
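<\/p>\n<p>For reference, those preparation steps look something like the following. The\u00a0<code>ext4<\/code>\u00a0filesystem type is an assumption chosen for illustration (any filesystem will do), while\u00a0<code>\/mnt\/raid1<\/code>\u00a0is the mount point used in the commands that follow:<\/p>

```shell
# Create a filesystem on the array and mount it (run as root).
# ext4 is an illustrative choice, not mandated by the article.
# mkfs.ext4 /dev/md0
# mkdir -p /mnt/raid1
# mount /dev/md0 /mnt/raid1
```

<p>With the array mounted, anything written under\u00a0<code>\/mnt\/raid1<\/code>\u00a0is stored on the RAID device.<\/p>\n<p>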
What you probably didn\u2019t know then is that you can mark the RAID as\u00a0<strong>ro<\/strong>, thus allowing only read operations to be performed on it,\u00a0<strong>or rw<\/strong>, in order to write to the device as well.<\/p>\n<p>To mark the device as\u00a0<strong>ro<\/strong>, it needs to be unmounted first:<\/p>\n<pre># umount \/mnt\/raid1\r\n# mdadm --manage \/dev\/md0 --readonly\r\n# mount \/mnt\/raid1\r\n# touch \/mnt\/raid1\/test1\r\n<\/pre>\n<div id=\"attachment_16025\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Set-Permissions-on-Raid-Array.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-16025\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Set-Permissions-on-Raid-Array-620x168.png\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" srcset=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Set-Permissions-on-Raid-Array-620x168.png 620w, https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Set-Permissions-on-Raid-Array.png 750w\" alt=\"Set Permissions on Raid Array\" width=\"620\" height=\"168\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Set Permissions on Raid Array<\/p>\n<\/div>\n<p>To configure the array to allow write operations as well, use the\u00a0<code>--readwrite<\/code>\u00a0option. 
Note that you will need to unmount the device and stop it before setting the\u00a0<strong>rw<\/strong>\u00a0flag, then remount it in order to write to it:<\/p>\n<pre># umount \/mnt\/raid1\r\n# mdadm --manage \/dev\/md0 --stop\r\n# mdadm --assemble \/dev\/md0 \/dev\/sdc1 \/dev\/sdd1\r\n# mdadm --manage \/dev\/md0 --readwrite\r\n# mount \/mnt\/raid1\r\n# touch \/mnt\/raid1\/test2\r\n<\/pre>\n<div id=\"attachment_16026\" class=\"wp-caption aligncenter\">\n<p><a href=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Allow-Write-Permission-on-Raid.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-16026\" src=\"https:\/\/www.tecmint.com\/wp-content\/uploads\/2015\/10\/Allow-Write-Permission-on-Raid.png\" alt=\"Allow Read Write Permission on Raid\" width=\"513\" height=\"181\" data-lazy-loaded=\"true\" \/><\/a><\/p>\n<p class=\"wp-caption-text\">Allow Read Write Permission on Raid<\/p>\n<\/div>\n<h3>Summary<\/h3>\n<p>Throughout this series we have explained how to set up a variety of software RAID arrays used in enterprise environments. If you have followed the articles and the examples provided, you are prepared to leverage the power of software RAID in Linux.<\/p>\n<p><a href=\"https:\/\/www.tecmint.com\/understanding-raid-setup-in-linux\/\" target=\"_blank\" rel=\"noopener\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>RAID\u00a0is a Redundant Array of Inexpensive disks, but nowadays it is called Redundant Array of Independent drives. Earlier it used to be very costly to buy even a small disk, but nowadays we can buy a large disk for the same amount as before. 
Raid is just a collection of &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/03\/17\/introduction-to-raid-concepts-of-raid-and-raid-levels-in-linux\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Introduction to RAID, Concepts of RAID and RAID Levels in Linux&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-11850","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/11850","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/comments?post=11850"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/11850\/revisions"}],"predecessor-version":[{"id":11851,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/11850\/revisions\/11851"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/media?parent=11850"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/categories?post=11850"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/tags?post=11850"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templat
ed":true}]}}