RAID cheat sheet

Posted: May 29, 2010 in FOSS, Systems/Network Administration

RAID stands for Redundant Array of Independent Disks. The basic idea behind RAID is to combine multiple small, independent disk drives into an array whose performance exceeds that of one large drive, while providing increased storage reliability through redundancy. Multiple disk drive components are combined into a single logical unit in which all the drives in the array are interdependent.

There are various levels of RAID:

a) Level 0: RAID level 0, often called “striping,” is a performance-oriented striped data mapping technique. That means the data being written to the array is broken down into strips and written across the member disks of the array. This allows high I/O performance at low inherent cost but provides no redundancy. Storage capacity of the array is equal to the total capacity of the member disks.

To create a RAID 0 array of 700 MB the following command can be used:

mdadm -C /dev/md0 --level=0 --raid-devices=2 /dev/sda8 /dev/sda9
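A freshly created array is just a block device; before it can hold files it needs a filesystem and a mount point. A minimal sketch (the ext3 filesystem and the /mnt/raid0 mount point are arbitrary choices, not part of the original recipe):

```shell
mkfs.ext3 /dev/md0            # put a filesystem on the new array
mkdir -p /mnt/raid0           # arbitrary mount point
mount /dev/md0 /mnt/raid0
df -h /mnt/raid0              # confirm the usable capacity
```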

b) Level 1 — RAID level 1, or “mirroring,” has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a “mirrored” copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading, but more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost[2]. Array capacity is equal to the capacity of one member disk.

The command to create a RAID 1 is similar to the one for RAID 0 except for the
change in level from 0 to 1.

mdadm -C /dev/md0 --level=1 --raid-devices=2 /dev/sda8 /dev/sda9

c) Level 4 — Level 4 uses parity concentrated on a single disk drive to protect data. It’s better suited to transaction I/O rather than large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not allowed on some GNU/Linux distros such as RHEL. Array capacity is equal to the capacity of member disks, minus capacity of one member disk.
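Where the md driver does support it, a level 4 array is created with the same -C syntax as the others; a sketch, reusing the example partitions from above (three devices: two for data plus one dedicated parity disk):

```shell
mdadm -C /dev/md0 --level=4 --raid-devices=3 /dev/sda8 /dev/sda9 /dev/sda10
```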

d) Level 5 — The most common type of RAID. By distributing parity across some or all of an array’s member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only bottleneck is the parity calculation process. With modern CPUs and software RAID, that isn’t a very big bottleneck. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. Array capacity is equal to the capacity of member disks, minus capacity of one member disk.

The command to create a Level 5 RAID array is as follows.

mdadm -C /dev/md0 --level=5 --raid-devices=3 /dev/sda8 /dev/sda9 /dev/sda10

Note that the --level is 5 and the number of RAID devices is 3.

e) Linear RAID — Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy, and in fact decreases reliability — if any one member drive fails, the entire array cannot be used. The capacity is total of all member disks.
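Linear mode is created with the same -C syntax, passing linear as the level; a sketch reusing the example partitions:

```shell
mdadm -C /dev/md0 --level=linear --raid-devices=2 /dev/sda8 /dev/sda9
```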

Now let’s learn the quick-and-dirty way to remove a RAID array.

Removing a RAID array

a) Check the status of the RAID array
mdadm --detail /dev/md0

[root@zion /]# mdadm --detail /dev/md0
Version : 0.90
Creation Time : Sat May 29 22:41:38 2010
Raid Level : raid0
Array Size : 706560 (690.12 MiB 723.52 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sat May 29 22:41:38 2010
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

UUID : 6ed60341:3b22dacb:2dc30c58:84109ac1
Events : 0.1

Number Major Minor RaidDevice State
0 8 12 0 active sync /dev/sda12
1 8 13 1 active sync /dev/sda13
[root@zion /]#
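A quicker, less verbose status check is to read /proc/mdstat. The exact output depends on the kernel and the array, but for the RAID 0 array above it looks roughly like this (illustrative, not captured from the same session):

```shell
cat /proc/mdstat
# Personalities : [raid0]
# md0 : active raid0 sda13[1] sda12[0]
#       706560 blocks 64k chunks
```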

b) Unmount the RAID array

umount /dev/md0

[root@zion /]# umount /dev/md0

c) Stop the RAID Array

mdadm --stop /dev/md0


[root@zion /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@zion /]#

[root@zion /]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[root@zion /]#

d) Remove the array
mdadm --remove /dev/md0

[root@zion /]# mdadm --remove /dev/md0
[root@zion /]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[root@zion /]#

e) Remove (zero) the superblock

[root@zion /]# mdadm --zero-superblock /dev/sda12
[root@zion /]# mdadm --zero-superblock /dev/sda13
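To confirm the superblocks are really gone, running --examine against each former member should now report that no md superblock was detected:

```shell
mdadm --examine /dev/sda12   # should report no md superblock detected
mdadm --examine /dev/sda13
```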

Now let’s learn how to remove a disk from an existing RAID array.

1) First fail the disk

mdadm --fail /dev/md0 /dev/sda1

2) Remove the disk from the RAID Array.

mdadm --remove /dev/md0 /dev/sda1


This can be done in a single step using:
mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

Now, to add a disk to an existing RAID array (for example, to replace a failed one):

mdadm --add /dev/md0 /dev/sdb1
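While the newly added disk resyncs, the rebuild progress can be watched with either of the following (the exact wording of the output varies between kernel and mdadm versions):

```shell
watch cat /proc/mdstat                     # recovery progress shown as a percentage
mdadm --detail /dev/md0 | grep -i rebuild  # e.g. a "Rebuild Status" line
```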
