This document is a WORK IN PROGRESS.
This is just a quick personal cheat sheet: treat its contents with caution!
Mdadm¶
Mdadm is a tool for managing/monitoring Linux Software RAID.
Install¶
If you are using Gentoo (or a Gentoo-based distro), make sure your kernel is configured appropriately:
```
# cd /usr/src/linux
# make nconfig # or `# make menuconfig`
# Raid support
# Double check here: <https://wiki.gentoo.org/wiki/Complete_Handbook/Software_RAID#Installing_the_Tools>
#
> Device Drivers --->
>     <*> Multiple devices driver support (RAID and LVM)  # Symbol: MD [=y]
>     <*>   RAID support                                  # Symbol: BLK_DEV_MD [=y]
>     [*]     Autodetect RAID arrays during kernel boot   # Symbol: MD_AUTODETECT [=y]
>
>     # If you want to combine multiple disks or partitions to
>     # one (bigger) device:
>     <*>     Linear (append) mode                        # Symbol: MD_LINEAR [=y]
>
>     # If you want to increase I/O performance by striping data
>     # across multiple disks (at the expense of reliability):
>     <*>     RAID-0 (striping) mode                      # Symbol: MD_RAID0 [=y]
>
>     # If you want to increase reliability by mirroring data
>     # across multiple disks (at the expense of storage
>     # capacity):
>     <*>     RAID-1 (mirroring) mode                     # Symbol: MD_RAID1 [=y]
>
>     # If you want to combine the previous two options (for
>     # whatever reason):
>     <*>     RAID-10 (mirrored striping) mode            # Symbol: MD_RAID10 [=n]
>
>     # If you want to combine 3 or more disks for reliability
>     # and performance:
>     <*>     RAID-4/RAID-5/RAID-6 mode                   # Symbol: MD_RAID456 [=n]
```
Warning

After configuring the kernel, don't forget to rebuild and reinstall it!
Install Mdadm:
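On Gentoo the package lives in the `sys-fs` category; on most other distros the package is also simply called `mdadm`:

```shell
# emerge --ask sys-fs/mdadm   # Gentoo
# apt install mdadm           # Debian/Ubuntu
```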
Config¶
Software RAID for Root File System¶
WIP
When the root file system is located on a software RAID, an initramfs is necessary for automatic assembly...
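As a sketch, assuming a genkernel-based Gentoo setup: genkernel's `--mdadm` flag includes mdadm support in the generated initramfs (dracut users would instead enable its `mdraid` module; adjust to whatever initramfs generator you use):

```shell
# genkernel --mdadm initramfs
```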
Use¶
- Check status:
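The kernel exposes the state of all md arrays in `/proc/mdstat`:

```shell
$ cat /proc/mdstat
```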
- More detailed check (e.g. on the `/dev/md1` RAID volume):
- Create a simple RAID 1 mirrored array (make sure you have two partitions of the same size):
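For example (the device names `/dev/md0`, `/dev/sda1`, and `/dev/sdb1` are placeholders; adjust them to your setup):

```shell
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```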
- Create a RAID 5 array with 4 active devices and 1 spare device:
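For example, with five placeholder partitions (the last one becomes the spare):

```shell
# mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```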
- Add a disk, e.g. `/dev/sdd1` (if the RAID has healthy members, the new disk will be added as a spare disk):
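```shell
# mdadm --add /dev/md0 /dev/sdd1
```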
- Remove a failed disk, e.g. `/dev/sdd1`:
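```shell
# mdadm --remove /dev/md0 /dev/sdd1
```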
- Remove a healthy disk, e.g. `/dev/sdd1` (by marking it as failed, then removing it):
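```shell
# mdadm --fail /dev/md0 /dev/sdd1
# mdadm --remove /dev/md0 /dev/sdd1
```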
- Remove a device permanently (for example, to use it individually from now on):
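This erases the md superblock from the device, so mdadm will no longer recognize it as an array member:

```shell
# mdadm --zero-superblock /dev/sdd1
```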
- Stop using an array:
    - Unmount the target array.
    - Stop the array with: `$ mdadm --stop /dev/md0`
    - Repeat the "Remove device permanently" commands on each device.
    - Remove the corresponding line from `/etc/mdadm.conf`.
Scrubbing¶
It is good practice to regularly run data scrubbing to check for and fix errors. Depending on the size/configuration of the array, a scrub may take multiple hours to complete.
- To initiate a data scrub:
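Scrubbing is triggered through the array's `sync_action` file in sysfs (`md0` is a placeholder for your array):

```shell
# echo check > /sys/block/md0/md/sync_action
```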
- To stop a currently running data scrub safely:
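```shell
# echo idle > /sys/block/md0/md/sync_action
```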
Note: If the system is rebooted after a partial scrub has been suspended, the scrub will start over.
The check operation scans the drives for bad sectors and automatically repairs them. If it finds good sectors that contain bad data, then no action is taken, but the event is logged. This "no action taken" allows admins to inspect the data in the sector and the data that would be produced by rebuilding the sectors from redundant information and pick the correct data to keep.
As with many tasks/items relating to Mdadm, the status of the scrub can be queried by reading `/proc/mdstat`. Example:
```shell
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sdb1[0] sdc1[1]
      3906778112 blocks super 1.2 [2/2] [UU]
      [>....................]  check =  4.0% (158288320/3906778112) finish=386.5min speed=161604K/sec
      bitmap: 0/30 pages [0KB], 65536KB chunk
```
When the scrub is complete, admins may check how many blocks (if any) have been flagged as bad:
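The count of mismatched blocks found during the last `check` is exposed in sysfs:

```shell
$ cat /sys/block/md0/md/mismatch_cnt
```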
If this cheat sheet has been useful to you, then please consider leaving a star here.