{"id":9413,"date":"2019-02-10T04:09:20","date_gmt":"2019-02-10T04:09:20","guid":{"rendered":"http:\/\/www.appservgrid.com\/paw92\/?p=9413"},"modified":"2019-02-10T04:09:20","modified_gmt":"2019-02-10T04:09:20","slug":"how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04","status":"publish","type":"post","link":"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/02\/10\/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04\/","title":{"rendered":"How To Create RAID Arrays with mdadm on Ubuntu 16.04"},"content":{"rendered":"<h3 id=\"introduction\">Introduction<\/h3>\n<p>The\u00a0<code>mdadm<\/code>\u00a0utility can be used to create and manage storage arrays using Linux&#8217;s software RAID capabilities. Administrators have great flexibility in coordinating their individual storage devices and creating logical storage devices that have greater performance or redundancy characteristics.<\/p>\n<p>In this guide, we will go over a number of different RAID configurations that can be set up using an Ubuntu 16.04 server.<\/p>\n<div data-unique=\"prerequisites\"><\/div>\n<h2 id=\"prerequisites\">Prerequisites<\/h2>\n<p>In order to complete the steps in this guide, you should have:<\/p>\n<ul>\n<li><strong>A non-root user with\u00a0<code>sudo<\/code>\u00a0privileges on an Ubuntu 16.04 server<\/strong>: The steps in this guide will be completed with a\u00a0<code>sudo<\/code>\u00a0user. To learn how to set up an account with these privileges, follow our\u00a0<a href=\"https:\/\/www.digitalocean.com\/community\/tutorials\/initial-server-setup-with-ubuntu-16-04\">Ubuntu 16.04 initial server setup guide<\/a>.<\/li>\n<li><strong>A basic understanding of RAID terminology and concepts<\/strong>: While this guide will touch on some RAID terminology in passing, a more complete understanding is very useful. 
To learn more about RAID and to get a better understanding of what RAID level is right for you, read our\u00a0<a href=\"https:\/\/www.digitalocean.com\/community\/tutorials\/an-introduction-to-raid-terminology-and-concepts\">introduction to RAID article<\/a>.<\/li>\n<li><strong>Multiple raw storage devices available on your server<\/strong>: We will be demonstrating how to configure various types of arrays on the server. As such, you will need some drives to configure. If you are using DigitalOcean, you can use\u00a0<a href=\"https:\/\/www.digitalocean.com\/community\/tutorials\/how-to-use-block-storage-on-digitalocean\">Block Storage volumes<\/a>\u00a0to fill this role. Depending on the array type, you will need a minimum of between\u00a0<strong>two and four storage devices<\/strong>.<\/li>\n<\/ul>\n<div data-unique=\"resetting-existing-raid-devices\"><\/div>\n<h2 id=\"resetting-existing-raid-devices\">Resetting Existing RAID Devices<\/h2>\n<p>Throughout this guide, we will be introducing the steps to create a number of different RAID levels. If you wish to follow along, you will likely want to reuse your storage devices after each section. This section can be referenced to learn how to quickly reset your component storage devices prior to testing a new RAID level. Skip this section for now if you have not yet set up any arrays.<\/p>\n<div class=\"code-label notes-and-warnings warning\" title=\"Warning\">Warning<\/div>\n<p><span class=\"warning\">This process will completely destroy the array and any data written to it. 
Make sure that you are operating on the correct array and that you have copied off any data you need to retain prior to destroying the array.<br \/>\n<\/span><\/p>\n<p>Find the active arrays in the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10] \r\n<span class=\"highlight\">md0<\/span> : active raid0 sdc[1] sdd[0]\r\n      209584128 blocks super 1.2 512k chunks\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>Unmount the array from the filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo umount \/dev\/<span class=\"highlight\">md0<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Then, stop and remove the array by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --stop \/dev\/<span class=\"highlight\">md0<\/span><\/li>\n<li class=\"line\">sudo mdadm --remove \/dev\/<span class=\"highlight\">md0<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Find the devices that were used to build the array with the following command:<\/p>\n<div class=\"code-label notes-and-warnings note\" title=\"Note\">Note<\/div>\n<p><span class=\"note\">Keep in mind that the\u00a0<code>\/dev\/sd*<\/code>\u00a0names can change any time you reboot! 
Check them every time to make sure you are operating on the correct devices.<br \/>\n<\/span><\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE            TYPE MOUNTPOINT\r\nsda      100G                   disk \r\nsdb      100G                   disk \r\n<span class=\"highlight\">sdc      100G linux_raid_member disk <\/span>\r\n<span class=\"highlight\">sdd      100G linux_raid_member disk <\/span>\r\nvda       20G                   disk \r\n\u251c\u2500vda1    20G ext4              part \/\r\n\u2514\u2500vda15    1M                   part \r\n<\/code><\/pre>\n<p>After discovering the devices used to create an array, zero their superblock to reset them to normal:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --zero-superblock \/dev\/<span class=\"highlight\">sdc<\/span><\/li>\n<li class=\"line\">sudo mdadm --zero-superblock \/dev\/<span class=\"highlight\">sdd<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You should remove any of the persistent references to the array. Edit the\u00a0<code>\/etc\/fstab<\/code>\u00a0file and comment out or remove the reference to your array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo nano \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<div class=\"code-label \" title=\"\/etc\/fstab\">\/etc\/fstab<\/div>\n<pre class=\"code-pre \"><code>. . 
.\r\n<span class=\"highlight\">#<\/span> \/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0\r\n<\/code><\/pre>\n<p>Also, comment out or remove the array definition from the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo nano \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<div class=\"code-label \" title=\"\/etc\/mdadm\/mdadm.conf\">\/etc\/mdadm\/mdadm.conf<\/div>\n<pre class=\"code-pre \"><code>. . .\r\n<span class=\"highlight\">#<\/span> ARRAY \/dev\/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91\r\n<\/code><\/pre>\n<p>Finally, update the\u00a0<code>initramfs<\/code>\u00a0again:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>At this point, you should be ready to reuse the storage devices individually, or as components of a different array.<\/p>\n<div data-unique=\"creating-a-raid-0-array\"><\/div>\n<h2 id=\"creating-a-raid-0-array\">Creating a RAID 0 Array<\/h2>\n<p>The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.<\/p>\n<ul>\n<li>Requirements: minimum of\u00a0<strong>2 storage devices<\/strong><\/li>\n<li>Primary benefit: Performance<\/li>\n<li>Things to keep in mind: Make sure that you have functional backups. 
A single device failure will destroy all data in the array.<\/li>\n<\/ul>\n<h3 id=\"identify-the-component-devices\">Identify the Component Devices<\/h3>\n<p>To get started, find the identifiers for the raw disks that you will be using:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE TYPE MOUNTPOINT\r\n<span class=\"highlight\">sda      100G        disk<\/span>\r\n<span class=\"highlight\">sdb      100G        disk<\/span>\r\nvda       20G        disk \r\n\u251c\u2500vda1    20G ext4   part \/\r\n\u2514\u2500vda15    1M        part\r\n<\/code><\/pre>\n<p>As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the\u00a0<code>\/dev\/sda<\/code>\u00a0and\u00a0<code>\/dev\/sdb<\/code>\u00a0identifiers for this session. These will be the raw components we will use to build the array.<\/p>\n<h3 id=\"create-the-array\">Create the Array<\/h3>\n<p>To create a RAID 0 array with these components, pass them in to the\u00a0<code>mdadm --create<\/code>\u00a0command. 
You will have to specify the device name you wish to create (<code>\/dev\/md0<\/code>\u00a0in our case), the RAID level, and the number of devices:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=0 --raid-devices=2 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can ensure that the RAID was successfully created by checking the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] \r\n<span class=\"highlight\">md0 : active raid0 sdb[1] sda[0]<\/span>\r\n      209584128 blocks super 1.2 512k chunks\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>As you can see in the highlighted line, the\u00a0<code>\/dev\/md0<\/code>\u00a0device has been created in the RAID 0 configuration using the\u00a0<code>\/dev\/sda<\/code>\u00a0and\u00a0<code>\/dev\/sdb<\/code>\u00a0devices.<\/p>\n<h3 id=\"create-and-mount-the-filesystem\">Create and Mount the Filesystem<\/h3>\n<p>Next, create a filesystem on the array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkfs.ext4 -F \/dev\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Create a mount point to attach the new filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkdir -p \/mnt\/md0<\/li>\n<\/ul>\n<pre 
class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can mount the filesystem by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mount \/dev\/md0 \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Check whether the new space is available by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">df -h -x devtmpfs -x tmpfs<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Filesystem      Size  Used Avail Use% Mounted on\r\n\/dev\/vda1        20G  1.1G   18G   6% \/\r\n<span class=\"highlight\">\/dev\/md0        197G   60M  187G   1% \/mnt\/md0<\/span>\r\n<\/code><\/pre>\n<p>The new filesystem is mounted and accessible.<\/p>\n<h3 id=\"save-the-array-layout\">Save the Array Layout<\/h3>\n<p>To make sure that the array is reassembled automatically at boot, we will have to adjust the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file. 
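<\/p>\n<p>If you want a more detailed view of the array than\u00a0<code>\/proc\/mdstat<\/code>\u00a0provides before saving its layout, you can query it with the standard\u00a0<code>mdadm --detail<\/code>\u00a0mode (the exact output will vary with your devices):<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail \/dev\/<span class=\"highlight\">md0<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>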
You can automatically scan the active array and append the file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail --scan | sudo tee -a \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Add the new filesystem mount options to the\u00a0<code>\/etc\/fstab<\/code>\u00a0file for automatic mounting at boot:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">echo '\/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Your RAID 0 array should now automatically be assembled and mounted each boot.<\/p>\n<div data-unique=\"creating-a-raid-1-array\"><\/div>\n<h2 id=\"creating-a-raid-1-array\">Creating a RAID 1 Array<\/h2>\n<p>The RAID 1 array type is implemented by mirroring data across all available disks. 
Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.<\/p>\n<ul>\n<li>Requirements: minimum of\u00a0<strong>2 storage devices<\/strong><\/li>\n<li>Primary benefit: Redundancy<\/li>\n<li>Things to keep in mind: Since two copies of the data are maintained, only half of the disk space will be usable<\/li>\n<\/ul>\n<h3 id=\"identify-the-component-devices\">Identify the Component Devices<\/h3>\n<p>To get started, find the identifiers for the raw disks that you will be using:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE TYPE MOUNTPOINT\r\n<span class=\"highlight\">sda      100G        disk<\/span>\r\n<span class=\"highlight\">sdb      100G        disk<\/span>\r\nvda       20G        disk \r\n\u251c\u2500vda1    20G ext4   part \/\r\n\u2514\u2500vda15    1M        part\r\n<\/code><\/pre>\n<p>As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the\u00a0<code>\/dev\/sda<\/code>\u00a0and\u00a0<code>\/dev\/sdb<\/code>\u00a0identifiers for this session. These will be the raw components we will use to build the array.<\/p>\n<h3 id=\"create-the-array\">Create the Array<\/h3>\n<p>To create a RAID 1 array with these components, pass them in to the\u00a0<code>mdadm --create<\/code>\u00a0command. 
You will have to specify the device name you wish to create (<code>\/dev\/md0<\/code>\u00a0in our case), the RAID level, and the number of devices:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=1 --raid-devices=2 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>If the component devices you are using are not partitions with the\u00a0<code>boot<\/code>\u00a0flag enabled, you will likely be given the following warning. It is safe to type\u00a0<strong>y<\/strong>\u00a0to continue:<\/p>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>mdadm: Note: this array has metadata at the start and\r\n    may not be suitable as a boot device.  If you plan to\r\n    store '\/boot' on this device please ensure that\r\n    your boot-loader understands md\/v1.x metadata, or use\r\n    --metadata=0.90\r\nmdadm: size set to 104792064K\r\nContinue creating array? <span class=\"highlight\">y<\/span>\r\n<\/code><\/pre>\n<p>The\u00a0<code>mdadm<\/code>\u00a0tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. 
You can monitor the progress of the mirroring by checking the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] \r\n<span class=\"highlight\">md0 : active raid1 sdb[1] sda[0]<\/span>\r\n      104792064 blocks super 1.2 [2\/2] [UU]\r\n      <span class=\"highlight\">[====&gt;................]  resync = 20.2% (21233216\/104792064) finish=6.9min speed=199507K\/sec<\/span>\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>As you can see in the first highlighted line, the\u00a0<code>\/dev\/md0<\/code>\u00a0device has been created in the RAID 1 configuration using the\u00a0<code>\/dev\/sda<\/code>\u00a0and\u00a0<code>\/dev\/sdb<\/code>\u00a0devices. The second highlighted line shows the progress on the mirroring. 
You can continue the guide while this process completes.<\/p>\n<h3 id=\"create-and-mount-the-filesystem\">Create and Mount the Filesystem<\/h3>\n<p>Next, create a filesystem on the array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkfs.ext4 -F \/dev\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Create a mount point to attach the new filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkdir -p \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can mount the filesystem by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mount \/dev\/md0 \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Check whether the new space is available by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">df -h -x devtmpfs -x tmpfs<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Filesystem      Size  Used Avail Use% Mounted on\r\n\/dev\/vda1        20G  1.1G   18G   6% \/\r\n<span class=\"highlight\">\/dev\/md0         99G   60M   94G   1% \/mnt\/md0<\/span>\r\n<\/code><\/pre>\n<p>The new filesystem is mounted and accessible.<\/p>\n<h3 id=\"save-the-array-layout\">Save the Array Layout<\/h3>\n<p>To make sure that the array is reassembled automatically at boot, we will have to adjust the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file. 
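<\/p>\n<p>If you would rather wait for the initial mirroring to finish before continuing, you can watch the progress update in place (<code>watch<\/code>\u00a0is part of the standard\u00a0<code>procps<\/code>\u00a0package on Ubuntu; press\u00a0<code>CTRL-C<\/code>\u00a0to exit):<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">watch -n1 cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>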
You can automatically scan the active array and append the file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail --scan | sudo tee -a \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Add the new filesystem mount options to the\u00a0<code>\/etc\/fstab<\/code>\u00a0file for automatic mounting at boot:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">echo '\/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Your RAID 1 array should now automatically be assembled and mounted each boot.<\/p>\n<div data-unique=\"creating-a-raid-5-array\"><\/div>\n<h2 id=\"creating-a-raid-5-array\">Creating a RAID 5 Array<\/h2>\n<p>The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.<\/p>\n<ul>\n<li>Requirements: minimum of\u00a0<strong>3 storage devices<\/strong><\/li>\n<li>Primary benefit: Redundancy with more usable capacity.<\/li>\n<li>Things to keep in mind: While the parity information is distributed, one disk&#8217;s worth of capacity will be used for parity. 
RAID 5 can suffer from very poor performance when in a degraded state.<\/li>\n<\/ul>\n<h3 id=\"identify-the-component-devices\">Identify the Component Devices<\/h3>\n<p>To get started, find the identifiers for the raw disks that you will be using:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE TYPE MOUNTPOINT\r\n<span class=\"highlight\">sda      100G        disk<\/span>\r\n<span class=\"highlight\">sdb      100G        disk<\/span>\r\n<span class=\"highlight\">sdc      100G        disk<\/span>\r\nvda       20G        disk \r\n\u251c\u2500vda1    20G ext4   part \/\r\n\u2514\u2500vda15    1M        part\r\n<\/code><\/pre>\n<p>As you can see above, we have three disks without a filesystem, each 100G in size. In this example, these devices have been given the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>, and\u00a0<code>\/dev\/sdc<\/code>\u00a0identifiers for this session. These will be the raw components we will use to build the array.<\/p>\n<h3 id=\"create-the-array\">Create the Array<\/h3>\n<p>To create a RAID 5 array with these components, pass them in to the\u00a0<code>mdadm --create<\/code>\u00a0command. 
You will have to specify the device name you wish to create (<code>\/dev\/md0<\/code>\u00a0in our case), the RAID level, and the number of devices:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=5 --raid-devices=3 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span> \/dev\/<span class=\"highlight\">sdc<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>The\u00a0<code>mdadm<\/code>\u00a0tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] \r\n<span class=\"highlight\">md0 : active raid5 sdc[3] sdb[1] sda[0]<\/span>\r\n      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3\/2] [UU_]\r\n      <span class=\"highlight\">[===&gt;.................]  recovery = 15.6% (16362536\/104792064) finish=7.3min speed=200808K\/sec<\/span>\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>As you can see in the first highlighted line, the\u00a0<code>\/dev\/md0<\/code>\u00a0device has been created in the RAID 5 configuration using the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>\u00a0and\u00a0<code>\/dev\/sdc<\/code>\u00a0devices. 
The second highlighted line shows the progress on the build. You can continue the guide while this process completes.<\/p>\n<h3 id=\"create-and-mount-the-filesystem\">Create and Mount the Filesystem<\/h3>\n<p>Next, create a filesystem on the array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkfs.ext4 -F \/dev\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Create a mount point to attach the new filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkdir -p \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can mount the filesystem by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mount \/dev\/md0 \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Check whether the new space is available by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">df -h -x devtmpfs -x tmpfs<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Filesystem      Size  Used Avail Use% Mounted on\r\n\/dev\/vda1        20G  1.1G   18G   6% \/\r\n<span class=\"highlight\">\/dev\/md0        197G   60M  187G   1% \/mnt\/md0<\/span>\r\n<\/code><\/pre>\n<p>The new filesystem is mounted and accessible.<\/p>\n<h3 id=\"save-the-array-layout\">Save the Array Layout<\/h3>\n<p>To make sure that the array is reassembled automatically at boot, we will have to adjust the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file.<\/p>\n<p>Before you adjust the configuration, check again to make sure the array has finished assembling. 
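<\/p>\n<p>One option is to have\u00a0<code>mdadm<\/code>\u00a0block until any resync or recovery activity completes (the\u00a0<code>--wait<\/code>\u00a0flag is one of\u00a0<code>mdadm<\/code>&#8217;s standard misc-mode operations and returns once the array is idle):<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --wait \/dev\/<span class=\"highlight\">md0<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>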
Because of the way that\u00a0<code>mdadm<\/code>\u00a0builds RAID 5 arrays, if the array is still building, the number of spares in the array will be inaccurately reported:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] \r\nmd0 : active raid5 sdc[3] sdb[1] sda[0]\r\n      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3\/3] [UUU]\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>The output above shows that the rebuild is complete. Now, we can automatically scan the active array and append the file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail --scan | sudo tee -a \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Add the new filesystem mount options to the\u00a0<code>\/etc\/fstab<\/code>\u00a0file for automatic mounting at boot:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">echo '\/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Your RAID 5 array should now automatically be assembled and mounted each boot.<\/p>\n<div 
data-unique=\"creating-a-raid-6-array\"><\/div>\n<h2 id=\"creating-a-raid-6-array\">Creating a RAID 6 Array<\/h2>\n<p>The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data. The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.<\/p>\n<ul>\n<li>Requirements: minimum of\u00a0<strong>4 storage devices<\/strong><\/li>\n<li>Primary benefit: Double redundancy with more usable capacity.<\/li>\n<li>Things to keep in mind: While the parity information is distributed, two disks&#8217; worth of capacity will be used for parity. RAID 6 can suffer from very poor performance when in a degraded state.<\/li>\n<\/ul>\n<h3 id=\"identify-the-component-devices\">Identify the Component Devices<\/h3>\n<p>To get started, find the identifiers for the raw disks that you will be using:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE TYPE MOUNTPOINT\r\n<span class=\"highlight\">sda      100G        disk<\/span>\r\n<span class=\"highlight\">sdb      100G        disk<\/span>\r\n<span class=\"highlight\">sdc      100G        disk<\/span>\r\n<span class=\"highlight\">sdd      100G        disk<\/span>\r\nvda       20G        disk \r\n\u251c\u2500vda1    20G ext4   part \/\r\n\u2514\u2500vda15    1M        part\r\n<\/code><\/pre>\n<p>As you can see above, we have four disks without a filesystem, each 100G in size. 
In this example, these devices have been given the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>,\u00a0<code>\/dev\/sdc<\/code>, and\u00a0<code>\/dev\/sdd<\/code>\u00a0identifiers for this session. These will be the raw components we will use to build the array.<\/p>\n<h3 id=\"create-the-array\">Create the Array<\/h3>\n<p>To create a RAID 6 array with these components, pass them in to the\u00a0<code>mdadm --create<\/code>\u00a0command. You will have to specify the device name you wish to create (<code>\/dev\/md0<\/code>\u00a0in our case), the RAID level, and the number of devices:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=6 --raid-devices=4 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span> \/dev\/<span class=\"highlight\">sdc<\/span> \/dev\/<span class=\"highlight\">sdd<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>The\u00a0<code>mdadm<\/code>\u00a0tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. 
You can monitor the progress of the resync by checking the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] \r\n<span class=\"highlight\">md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]<\/span>\r\n      209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4\/4] [UUUU]\r\n      <span class=\"highlight\">[&gt;....................]  resync =  0.6% (668572\/104792064) finish=10.3min speed=167143K\/sec<\/span>\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>As you can see in the first highlighted line, the\u00a0<code>\/dev\/md0<\/code>\u00a0device has been created in the RAID 6 configuration using the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>,\u00a0<code>\/dev\/sdc<\/code>\u00a0and\u00a0<code>\/dev\/sdd<\/code>\u00a0devices. The second highlighted line shows the progress on the build. 
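<\/p>\n<p>If you want to script around the rebuild, the completion percentage can be pulled out of\u00a0<code>\/proc\/mdstat<\/code>\u00a0with standard text tools. The sketch below runs against a hard-coded sample of the output above, so it works without a live array; replacing the variable with\u00a0<code>$(cat \/proc\/mdstat)<\/code>\u00a0would read the real file:</p>

```shell
# Extract the resync percentage from mdstat-style output. The sample is
# hard-coded here so the snippet runs without a live array.
mdstat_sample='md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
      209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.6% (668572/104792064) finish=10.3min speed=167143K/sec'
# Keep only the resync line, then grab the token ending in a percent sign.
progress=$(printf '%s\n' "$mdstat_sample" | grep 'resync' | grep -o '[0-9.]*%')
echo "Resync progress: $progress"   # prints "Resync progress: 0.6%"
```

<p>For interactive monitoring, <code>watch -d cat \/proc\/mdstat<\/code> gives a self-updating view instead.</p>\n<p>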
You can continue the guide while this process completes.<\/p>\n<h3 id=\"create-and-mount-the-filesystem\">Create and Mount the Filesystem<\/h3>\n<p>Next, create a filesystem on the array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkfs.ext4 -F \/dev\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Create a mount point to attach the new filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkdir -p \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can mount the filesystem by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mount \/dev\/md0 \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Check whether the new space is available by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">df -h -x devtmpfs -x tmpfs<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Filesystem      Size  Used Avail Use% Mounted on\r\n\/dev\/vda1        20G  1.1G   18G   6% \/\r\n<span class=\"highlight\">\/dev\/md0        197G   60M  187G   1% \/mnt\/md0<\/span>\r\n<\/code><\/pre>\n<p>The new filesystem is mounted and accessible.<\/p>\n<h3 id=\"save-the-array-layout\">Save the Array Layout<\/h3>\n<p>To make sure that the array is reassembled automatically at boot, we will have to adjust the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file. 
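<\/p>\n<p>The entry appended to this file is a single\u00a0<code>ARRAY<\/code>\u00a0line describing the device. It looks roughly like the following (the\u00a0<code>name<\/code>\u00a0and\u00a0<code>UUID<\/code>\u00a0values here are placeholders; yours will differ):</p>

```
ARRAY /dev/md0 metadata=1.2 name=yourhostname:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

<p>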
We can automatically scan the active array and append the result to the file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail --scan | sudo tee -a \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Add the new filesystem mount options to the\u00a0<code>\/etc\/fstab<\/code>\u00a0file for automatic mounting at boot:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">echo '\/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Your RAID 6 array should now automatically be assembled and mounted each boot.<\/p>\n<div data-unique=\"creating-a-complex-raid-10-array\"><\/div>\n<h2 id=\"creating-a-complex-raid-10-array\">Creating a Complex RAID 10 Array<\/h2>\n<p>The RAID 10 array type is traditionally implemented by creating a striped RAID 0 array composed of sets of RAID 1 arrays. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The\u00a0<code>mdadm<\/code>\u00a0utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. 
We will be using the\u00a0<code>mdadm<\/code>\u00a0RAID 10 here.<\/p>\n<ul>\n<li>Requirements: minimum of\u00a0<strong>3 storage devices<\/strong><\/li>\n<li>Primary benefit: Performance and redundancy<\/li>\n<li>Things to keep in mind: The amount of capacity reduction for the array is defined by the number of data copies you choose to keep. The number of copies that are stored with\u00a0<code>mdadm<\/code>-style RAID 10 is configurable.<\/li>\n<\/ul>\n<p>By default, two copies of each data block will be stored in what is called the &#8220;near&#8221; layout. The possible layouts that dictate how each data block is stored are:<\/p>\n<ul>\n<li><strong>near<\/strong>: The default arrangement. Copies of each chunk are written consecutively when striping, meaning that the copies of the data blocks will be written around the same part of multiple disks.<\/li>\n<li><strong>far<\/strong>: The first and subsequent copies are written to different parts of the storage devices in the array. For instance, the first chunk might be written near the beginning of a disk, while the second chunk would be written halfway down on a different disk. This can give some read performance gains for traditional spinning disks at the expense of write performance.<\/li>\n<li><strong>offset<\/strong>: Each stripe is copied, offset by one drive. This means that the copies are offset from one another, but still close together on the disk. 
This helps minimize excessive seeking during some workloads.<\/li>\n<\/ul>\n<p>You can find out more about these layouts by checking out the &#8220;RAID10&#8221; section of this\u00a0<code>man<\/code>\u00a0page:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">man 4 md<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can also find this\u00a0<code>man<\/code>\u00a0page online\u00a0<a href=\"http:\/\/manpages.ubuntu.com\/manpages\/xenial\/man4\/md.4.html\">here<\/a>.<\/p>\n<h3 id=\"identify-the-component-devices\">Identify the Component Devices<\/h3>\n<p>To get started, find the identifiers for the raw disks that you will be using:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>NAME     SIZE FSTYPE TYPE MOUNTPOINT\r\n<span class=\"highlight\">sda      100G        disk<\/span>\r\n<span class=\"highlight\">sdb      100G        disk<\/span>\r\n<span class=\"highlight\">sdc      100G        disk<\/span>\r\n<span class=\"highlight\">sdd      100G        disk<\/span>\r\nvda       20G        disk \r\n\u251c\u2500vda1    20G ext4   part \/\r\n\u2514\u2500vda15    1M        part\r\n<\/code><\/pre>\n<p>As you can see above, we have four disks without a filesystem, each 100G in size. In this example, these devices have been given the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>,\u00a0<code>\/dev\/sdc<\/code>, and\u00a0<code>\/dev\/sdd<\/code>\u00a0identifiers for this session. 
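<\/p>\n<p>Capacity works differently here than with parity RAID: usable space is the total raw space divided by the number of copies kept of each data block (2 in the default near layout). A quick sketch with this example&#8217;s sizes hard-coded:</p>

```shell
# RAID 10 capacity sketch: usable space is total raw space divided by
# the number of copies kept of each data block (2 by default).
n_devices=4
device_gb=100
copies=2
usable_gb=$(( n_devices * device_gb / copies ))
echo "Expected usable capacity: ${usable_gb}G"   # prints "Expected usable capacity: 200G"
```

<p>With three copies, the same four disks would yield roughly 133G instead.</p>\n<p>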
These will be the raw components we will use to build the array.<\/p>\n<h3 id=\"create-the-array\">Create the Array<\/h3>\n<p>To create a RAID 10 array with these components, pass them to the\u00a0<code>mdadm --create<\/code>\u00a0command. You will have to specify the device name you wish to create (<code>\/dev\/md0<\/code>\u00a0in our case), the RAID level, and the number of devices.<\/p>\n<p>You can set up two copies using the near layout by not specifying a layout and copy number:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=10 --raid-devices=4 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span> \/dev\/<span class=\"highlight\">sdc<\/span> \/dev\/<span class=\"highlight\">sdd<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>If you want to use a different layout, or change the number of copies, you will have to use the\u00a0<code>--layout=<\/code>\u00a0option, which takes a layout and copy identifier. The layouts are\u00a0<strong>n<\/strong>\u00a0for near,\u00a0<strong>f<\/strong>\u00a0for far, and\u00a0<strong>o<\/strong>\u00a0for offset. 
The number of copies to store is appended afterwards.<\/p>\n<p>For instance, to create an array that has 3 copies in the offset layout, the command would look like this:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --create --verbose \/dev\/md0 --level=10 --layout=o3 --raid-devices=4 \/dev\/<span class=\"highlight\">sda<\/span> \/dev\/<span class=\"highlight\">sdb<\/span> \/dev\/<span class=\"highlight\">sdc<\/span> \/dev\/<span class=\"highlight\">sdd<\/span><\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>The\u00a0<code>mdadm<\/code>\u00a0tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the resync by checking the\u00a0<code>\/proc\/mdstat<\/code>\u00a0file:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">cat \/proc\/mdstat<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] \r\n<span class=\"highlight\">md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]<\/span>\r\n      209584128 blocks super 1.2 512K chunks <span class=\"highlight\">2 near-copies<\/span> [4\/4] [UUUU]\r\n      <span class=\"highlight\">[===&gt;.................]  
resync = 18.1% (37959424\/209584128) finish=13.8min speed=206120K\/sec<\/span>\r\n\r\nunused devices: &lt;none&gt;\r\n<\/code><\/pre>\n<p>As you can see in the first highlighted line, the\u00a0<code>\/dev\/md0<\/code>\u00a0device has been created in the RAID 10 configuration using the\u00a0<code>\/dev\/sda<\/code>,\u00a0<code>\/dev\/sdb<\/code>,\u00a0<code>\/dev\/sdc<\/code>\u00a0and\u00a0<code>\/dev\/sdd<\/code>\u00a0devices. The second highlighted area shows the layout that was used for this example (2 copies in the near configuration). The third highlighted area shows the progress on the build. You can continue the guide while this process completes.<\/p>\n<h3 id=\"create-and-mount-the-filesystem\">Create and Mount the Filesystem<\/h3>\n<p>Next, create a filesystem on the array:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkfs.ext4 -F \/dev\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Create a mount point to attach the new filesystem:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mkdir -p \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>You can mount the filesystem by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mount \/dev\/md0 \/mnt\/md0<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Check whether the new space is available by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">df -h -x devtmpfs -x tmpfs<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<pre class=\"code-pre \"><code><\/code><\/pre>\n<div class=\"secondary-code-label \" title=\"Output\">Output<\/div>\n<pre class=\"code-pre \"><code>Filesystem      Size  Used Avail Use% Mounted on\r\n\/dev\/vda1        20G  1.1G   18G   6% 
\/\r\n<span class=\"highlight\">\/dev\/md0        197G   60M  187G   1% \/mnt\/md0<\/span>\r\n<\/code><\/pre>\n<p>The new filesystem is mounted and accessible.<\/p>\n<h3 id=\"save-the-array-layout\">Save the Array Layout<\/h3>\n<p>To make sure that the array is reassembled automatically at boot, we will have to adjust the\u00a0<code>\/etc\/mdadm\/mdadm.conf<\/code>\u00a0file. We can automatically scan the active array and append the result to the file by typing:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo mdadm --detail --scan | sudo tee -a \/etc\/mdadm\/mdadm.conf<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">sudo update-initramfs -u<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Add the new filesystem mount options to the\u00a0<code>\/etc\/fstab<\/code>\u00a0file for automatic mounting at boot:<\/p>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<ul class=\"prefixed\">\n<li class=\"line\">echo '\/dev\/md0 \/mnt\/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a \/etc\/fstab<\/li>\n<\/ul>\n<pre class=\"code-pre command\"><code><\/code><\/pre>\n<p>Your RAID 10 array should now automatically be assembled and mounted each boot.<\/p>\n<div data-unique=\"conclusion\"><\/div>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>In this guide, we demonstrated how to create various types of arrays using Linux&#8217;s\u00a0<code>mdadm<\/code>\u00a0software RAID utility. 
RAID arrays offer some compelling redundancy and performance enhancements over using multiple disks individually.<\/p>\n<p>Once you have settled on the type of array needed for your environment and created the device, you will need to learn how to perform day-to-day management with\u00a0<code>mdadm<\/code>. Our guide on\u00a0<a href=\"https:\/\/www.digitalocean.com\/community\/tutorials\/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-16-04\">how to manage RAID arrays with\u00a0<code>mdadm<\/code>\u00a0on Ubuntu 16.04<\/a>\u00a0can help get you started.<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction The\u00a0mdadm\u00a0utility can be used to create and manage storage arrays using Linux&#8217;s software RAID capabilities. Administrators have great flexibility in coordinating their individual storage devices and creating logical storage devices that have greater performance or redundancy characteristics. In this guide, we will go over a number of different RAID configurations that can be set &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/www.appservgrid.com\/paw92\/index.php\/2019\/02\/10\/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;How To Create RAID Arrays with mdadm on Ubuntu 
16.04&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-9413","post","type-post","status-publish","format-standard","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/9413","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/comments?post=9413"}],"version-history":[{"count":1,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/9413\/revisions"}],"predecessor-version":[{"id":9414,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/posts\/9413\/revisions\/9414"}],"wp:attachment":[{"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/media?parent=9413"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/categories?post=9413"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.appservgrid.com\/paw92\/index.php\/wp-json\/wp\/v2\/tags?post=9413"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}