LVMRAID(7)
NAME
lvmraid — LVM RAID

DESCRIPTION
lvm(8) RAID is a way to create a Logical Volume (LV) that uses multiple physical devices to improve performance or tolerate device failures. In LVM, the physical devices are Physical Volumes (PVs) in a single Volume Group (VG).

Create a RAID LV
To create a RAID LV, use lvcreate and specify an LV type. The LV type corresponds to a RAID level. The basic RAID levels that can be used are: raid0, raid1, raid4, raid5, raid6, raid10.

raid0
--stripes
specifies the number of devices to spread the LV across.
--stripesize
specifies the size of each stripe in kilobytes. This is the amount of data
that is written to one device before moving to the next.
PVs specifies the devices to use. If not specified, lvm will choose
Number devices, one for each stripe based on the number of PVs
available or supplied.
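The rotation described above can be illustrated with a small sketch. This is not an lvm command; stripe_of is a hypothetical helper showing which device a given offset lands on for a raid0 LV with N stripes and a stripe size of S KiB.

```shell
# Hypothetical illustration only: map a KiB offset to a raid0 device index.
# Data fills one stripe (S KiB) on a device, then moves to the next device.
stripe_of() {  # usage: stripe_of OFFSET_KIB STRIPES STRIPESIZE_KIB
  echo $(( ($1 / $3) % $2 ))
}
stripe_of 0 2 64     # first 64 KiB land on device 0
stripe_of 64 2 64    # next 64 KiB land on device 1
stripe_of 128 2 64   # then back to device 0
```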
raid1
--mirrors
specifies the number of mirror images in addition to the original LV image,
e.g. --mirrors 1 means there are two images of the data, the original and one
mirror image.
PVs specifies the devices to use. If not specified, lvm will choose
Number devices, one for each image.
raid4
--stripes
specifies the number of devices to use for LV data. This does not include the
extra device lvm adds for storing parity blocks. A raid4 LV with Number
stripes requires Number+1 devices. Number must be 2 or more.
--stripesize
specifies the size of each stripe in kilobytes. This is the amount of data
that is written to one device before moving to the next.
PVs specifies the devices to use. If not specified, lvm will choose
Number+1 separate devices.
raid5
--stripes
specifies the number of devices to use for LV data. This does not include the
extra device lvm adds for storing parity blocks. A raid5 LV with Number
stripes requires Number+1 devices. Number must be 2 or more.
--stripesize
specifies the size of each stripe in kilobytes. This is the amount of data
that is written to one device before moving to the next.
PVs specifies the devices to use. If not specified, lvm will choose
Number+1 separate devices.
raid6
--stripes
specifies the number of devices to use for LV data. This does not include the
extra two devices lvm adds for storing parity blocks. A raid6 LV with
Number stripes requires Number+2 devices. Number must be
3 or more.
--stripesize
specifies the size of each stripe in kilobytes. This is the amount of data
that is written to one device before moving to the next.
PVs specifies the devices to use. If not specified, lvm will choose
Number+2 separate devices.
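The device-count arithmetic for the basic levels above can be summarized in a short sketch. raid_devices is a hypothetical helper for illustration, not an lvm command.

```shell
# Hypothetical illustration only: total devices needed for --stripes N.
raid_devices() {  # usage: raid_devices LEVEL STRIPES
  case "$1" in
    raid0)       echo "$2" ;;           # no parity devices
    raid4|raid5) echo $(( $2 + 1 )) ;;  # lvm adds one parity device
    raid6)       echo $(( $2 + 2 )) ;;  # lvm adds two parity devices
  esac
}
raid_devices raid5 2   # → 3
raid_devices raid6 3   # → 5
```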
raid10
lvcreate --type raid10 [--mirrors NumberMirrors] [--stripes NumberStripes --stripesize Size] VG [PVs]
--mirrors
specifies the number of mirror images within each stripe, e.g. --mirrors 1
means there are two images of the data, the original and one mirror image.
--stripes
specifies the total number of devices to use in all raid1 images (not the
number of raid1 devices to spread the LV across, even though that is the
effective result). The number of devices in each raid1 mirror will be
NumberStripes/(NumberMirrors+1), e.g. mirrors 1 and stripes 4 will stripe data
across two raid1 mirrors, where each mirror is using 2 devices.
--stripesize
specifies the size of each stripe in kilobytes. This is the amount of data
that is written to one device before moving to the next.
PVs specifies the devices to use. If not specified, lvm will choose the
necessary devices. Devices are used to create mirrors in the order listed,
e.g. for mirrors 1, stripes 2, listing PV1 PV2 PV3 PV4 results in mirrors
PV1/PV2 and PV3/PV4.
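The PV ordering above can be sketched as follows: consecutive PVs are grouped into raid1 mirrors of (mirrors+1) devices each. mirror_groups is a hypothetical helper for illustration, not an lvm command.

```shell
# Hypothetical illustration only: group a PV list into raid1 mirrors,
# printing one group per line, in listed order.
mirror_groups() {  # usage: mirror_groups MIRRORS PV...
  local per=$(( $1 + 1 )); shift
  local out="" i=0
  for pv in "$@"; do
    out="$out$pv"
    i=$(( (i + 1) % per ))
    if [ "$i" -eq 0 ]; then echo "$out"; out=""; else out="$out/"; fi
  done
}
mirror_groups 1 PV1 PV2 PV3 PV4
# → PV1/PV2
# → PV3/PV4
```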
Synchronization
Synchronization is the process that makes all the devices in a RAID LV consistent with each other.

Scrubbing
Scrubbing is a full scan of the RAID LV requested by a user. Scrubbing can find problems that are missed by partial synchronization. Scrubbing is started with lvchange --syncaction, in one of two modes:

check
Check mode is read-only and only detects inconsistent areas in the RAID LV, it
does not correct them.
repair
Repair mode checks and writes corrected blocks to synchronize any inconsistent
areas.
Scrubbing can consume a lot of bandwidth and slow down application I/O on the
RAID LV. To control the I/O rate used for scrubbing, use:
--maxrecoveryrate
Size[k|UNIT]
Sets the maximum recovery rate for a RAID LV. Size is specified as an
amount per second for each device in the array. If no suffix is given, then
KiB/sec/device is used. Setting the recovery rate to 0 means it will be
unbounded.
--minrecoveryrate
Size[k|UNIT]
Sets the minimum recovery rate for a RAID LV. Size is specified as an
amount per second for each device in the array. If no suffix is given, then
KiB/sec/device is used. Setting the recovery rate to 0 means it will be
unbounded.
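The Size argument's suffix handling described above can be sketched with a small helper. to_kib is a hypothetical illustration (lowercase suffixes only), not part of lvm; it reflects the rule that a bare number means KiB/sec per device.

```shell
# Hypothetical illustration only: normalize a rate argument to KiB.
to_kib() {  # usage: to_kib SIZE[k|m|g]
  case "$1" in
    *k) echo "${1%k}" ;;
    *m) echo $(( ${1%m} * 1024 )) ;;
    *g) echo $(( ${1%g} * 1024 * 1024 )) ;;
    *)  echo "$1" ;;   # no suffix: already KiB/sec/device
  esac
}
to_kib 128m   # → 131072
```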
To display the current scrubbing in progress on an LV, including the syncaction
mode and percent complete, run:

# lvs -a -o name,raid_sync_action,sync_percent vg/lv

If mismatches were found, the lvs attr field shows the flag (m):

# lvs -o name,vgname,segtype,attr vg/lv
  LV VG Type  Attr
  lv vg raid1 Rwi-a-r-m-
Scrubbing Limitations
The check mode can only report the number of inconsistent blocks, it cannot report which blocks are inconsistent. This makes it impossible to know which device has errors, or if the errors affect file system data, metadata or nothing at all.

SubLVs
An LV is often a combination of other hidden LVs called SubLVs. The SubLVs either use physical devices, or are built from other SubLVs themselves. SubLVs hold LV data blocks, RAID parity blocks, and RAID metadata. SubLVs are generally hidden, so the lvs -a option is required to display them:

• SubLVs holding LV data or parity blocks have the suffix _rimage_#.
  These SubLVs are sometimes referred to as DataLVs.

• SubLVs holding RAID metadata have the suffix _rmeta_#. RAID
  metadata includes superblock information, RAID type, bitmap, and device
  health information. These SubLVs are sometimes referred to as MetaLVs.
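The naming convention above can be sketched with a classifier. sublv_kind is a hypothetical helper for illustration, not an lvm command; it only inspects the suffixes described in this section.

```shell
# Hypothetical illustration only: classify a SubLV name by its suffix.
sublv_kind() {  # usage: sublv_kind LV_NAME
  case "$1" in
    *_rimage_[0-9]*) echo "DataLV" ;;   # LV data or parity blocks
    *_rmeta_[0-9]*)  echo "MetaLV" ;;   # RAID metadata
    *)               echo "not a RAID SubLV" ;;
  esac
}
sublv_kind lvr1_rimage_0   # → DataLV
sublv_kind lvr1_rmeta_1    # → MetaLV
```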
Examples
raid0
# lvcreate --type raid0 --stripes 2 --name lvr0 ...
# lvs -a -o name,segtype,devices
  lvr0            raid0  lvr0_rimage_0(0),lvr0_rimage_1(0)
  [lvr0_rimage_0] linear /dev/sda(...)
  [lvr0_rimage_1] linear /dev/sdb(...)
raid1
# lvcreate --type raid1 --mirrors 1 --name lvr1 ...
# lvs -a -o name,segtype,devices
  lvr1            raid1  lvr1_rimage_0(0),lvr1_rimage_1(0)
  [lvr1_rimage_0] linear /dev/sda(...)
  [lvr1_rimage_1] linear /dev/sdb(...)
  [lvr1_rmeta_0]  linear /dev/sda(...)
  [lvr1_rmeta_1]  linear /dev/sdb(...)
raid4
# lvcreate --type raid4 --stripes 2 --name lvr4 ...
# lvs -a -o name,segtype,devices
  lvr4            raid4  lvr4_rimage_0(0),lvr4_rimage_1(0),lvr4_rimage_2(0)
  [lvr4_rimage_0] linear /dev/sda(...)
  [lvr4_rimage_1] linear /dev/sdb(...)
  [lvr4_rimage_2] linear /dev/sdc(...)
  [lvr4_rmeta_0]  linear /dev/sda(...)
  [lvr4_rmeta_1]  linear /dev/sdb(...)
  [lvr4_rmeta_2]  linear /dev/sdc(...)
raid5
# lvcreate --type raid5 --stripes 2 --name lvr5 ...
# lvs -a -o name,segtype,devices
  lvr5            raid5  lvr5_rimage_0(0),lvr5_rimage_1(0),lvr5_rimage_2(0)
  [lvr5_rimage_0] linear /dev/sda(...)
  [lvr5_rimage_1] linear /dev/sdb(...)
  [lvr5_rimage_2] linear /dev/sdc(...)
  [lvr5_rmeta_0]  linear /dev/sda(...)
  [lvr5_rmeta_1]  linear /dev/sdb(...)
  [lvr5_rmeta_2]  linear /dev/sdc(...)
raid6
# lvcreate --type raid6 --stripes 3 --name lvr6
# lvs -a -o name,segtype,devices
  lvr6            raid6  lvr6_rimage_0(0),lvr6_rimage_1(0),lvr6_rimage_2(0),\
                         lvr6_rimage_3(0),lvr6_rimage_4(0),lvr6_rimage_5(0)
  [lvr6_rimage_0] linear /dev/sda(...)
  [lvr6_rimage_1] linear /dev/sdb(...)
  [lvr6_rimage_2] linear /dev/sdc(...)
  [lvr6_rimage_3] linear /dev/sdd(...)
  [lvr6_rimage_4] linear /dev/sde(...)
  [lvr6_rimage_5] linear /dev/sdf(...)
  [lvr6_rmeta_0]  linear /dev/sda(...)
  [lvr6_rmeta_1]  linear /dev/sdb(...)
  [lvr6_rmeta_2]  linear /dev/sdc(...)
  [lvr6_rmeta_3]  linear /dev/sdd(...)
  [lvr6_rmeta_4]  linear /dev/sde(...)
  [lvr6_rmeta_5]  linear /dev/sdf(...)
raid10
# lvcreate --type raid10 --stripes 2 --mirrors 1 --name lvr10
# lvs -a -o name,segtype,devices
  lvr10            raid10 lvr10_rimage_0(0),lvr10_rimage_1(0),\
                          lvr10_rimage_2(0),lvr10_rimage_3(0)
  [lvr10_rimage_0] linear /dev/sda(...)
  [lvr10_rimage_1] linear /dev/sdb(...)
  [lvr10_rimage_2] linear /dev/sdc(...)
  [lvr10_rimage_3] linear /dev/sdd(...)
  [lvr10_rmeta_0]  linear /dev/sda(...)
  [lvr10_rmeta_1]  linear /dev/sdb(...)
  [lvr10_rmeta_2]  linear /dev/sdc(...)
  [lvr10_rmeta_3]  linear /dev/sdd(...)
Device Failure
Physical devices in a RAID LV can fail or be lost for multiple reasons. A device could be disconnected, permanently failed, or temporarily disconnected. The purpose of RAID LVs (levels 1 and higher) is to continue operating in a degraded mode, without losing LV data, even after a device fails. The number of devices that can fail without the loss of LV data depends on the RAID level:

• RAID0 (striped) LVs cannot tolerate losing any devices. LV
  data will be lost if any devices fail.

• RAID1 LVs can tolerate losing all but one device without LV
  data loss.

• RAID4 and RAID5 LVs can tolerate losing one device without
  LV data loss.

• RAID6 LVs can tolerate losing two devices without LV data
  loss.

• RAID10 is variable, and depends on which devices are lost.
  It stripes across multiple mirror groups with raid1 layout, thus it can
  tolerate losing all but one device in each of these groups without LV data
  loss.
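The failure-tolerance rules above can be summarized in a short sketch. max_failures is a hypothetical helper for illustration, not an lvm command; raid10 is omitted because its tolerance depends on which devices in which mirror groups are lost.

```shell
# Hypothetical illustration only: maximum device losses without data loss.
max_failures() {  # usage: max_failures LEVEL TOTAL_DEVICES
  case "$1" in
    raid0)       echo 0 ;;              # any loss destroys the LV
    raid1)       echo $(( $2 - 1 )) ;;  # all but one image
    raid4|raid5) echo 1 ;;
    raid6)       echo 2 ;;
  esac
}
max_failures raid1 3   # → 2
max_failures raid6 5   # → 2
```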
If a device is lost, commands that access the VG report a warning such as:

WARNING: Device for PV uItL3Z-wBME-DQy0-... not found or rejected ...
Activating an LV with missing devices
A RAID LV that is missing devices may be activated or not, depending on the "activation mode" used in lvchange: complete, degraded, or partial. The default activation mode can be displayed with:

# lvmconfig --type default activation/activation_mode
Replacing Devices
Devices in a RAID LV can be replaced by other devices in the VG. When replacing devices that are no longer visible on the system, use lvconvert --repair. When replacing devices that are still visible, use lvconvert --replace. The repair command will attempt to restore the same number of data LVs that were previously in the LV. The replace option can be repeated to replace multiple PVs. Replacement devices can be optionally listed with either option.

Refreshing an LV
Refreshing a RAID LV clears any transient device failures (device was temporarily disconnected) and returns the LV to its fully redundant mode. Restoring a device will usually require at least partial synchronization (see Synchronization). Failure to clear a transient failure results in the RAID LV operating in degraded mode until it is reactivated. Use the lvchange command to refresh an LV:

# lvs -o name,vgname,segtype,attr,size vg
  LV VG Type  Attr       LSize
  lv vg raid1 Rwi-a-r-r- 100.00g
# lvchange --refresh vg/lv
# lvs -o name,vgname,segtype,attr,size vg
  LV VG Type  Attr       LSize
  lv vg raid1 Rwi-a-r--- 100.00g
Automatic repair
If a device in a RAID LV fails, device-mapper in the kernel notifies the dmeventd(8) monitoring process (see Monitoring). dmeventd can be configured to automatically respond using the lvm.conf(5) activation/raid_fault_policy setting (see Configuration Options).

Corrupted Data
Data on a device can be corrupted due to hardware errors without the device ever being disconnected or there being any fault in the software. This should be rare, and can be detected (see Scrubbing).

Rebuild specific PVs
If specific PVs in a RAID LV are known to have corrupt data, the data on those PVs can be reconstructed with:

# lvchange --rebuild PhysicalVolume VG/LV

Monitoring
When a RAID LV is activated the dmeventd(8) process is started to monitor the health of the LV. Various events detected in the kernel can cause a notification to be sent from device-mapper to the monitoring process, including device failures and synchronization completion (e.g. for initialization or scrubbing).

Configuration Options
There are a number of options in the LVM configuration file that affect the behavior of RAID LVs. The tunable options are listed below. A detailed description of each can be found in the LVM configuration file itself.

mirror_segtype_default
raid10_segtype_default
raid_region_size
raid_fault_policy
activation_mode
RAID1 Tuning
A RAID1 LV can be tuned so that certain devices are avoided for reading while all devices are still written to.

RAID Takeover
RAID takeover is converting a RAID LV from one RAID level to another, e.g. raid5 to raid6. Changing the RAID level is usually done to increase or decrease resilience to device failures or to restripe LVs. This is done using lvconvert and specifying the new RAID level as the LV type:

linear to raid1
Linear is a single image of LV data, and converting it to raid1 adds a mirror
image which is a direct copy of the original linear image.

striped/raid0 to raid4/5/6
Adding parity devices to a striped volume results in raid4/5/6.
Unnatural conversions that are not recommended include converting between
striped and non-striped types. This is because file systems often optimize I/O
patterns based on device striping values. If those values change, it can
decrease performance.

The following conversions are possible:

• between striped and raid0.
• between linear and raid1.
• between mirror and raid1.
• between raid1 with two images and raid4/5.
• between striped/raid0 and raid4.
• between striped/raid0 and raid5.
• between striped/raid0 and raid6.
• between raid4 and raid5.
• between raid4/raid5 and raid6.
• between striped/raid0 and raid10.
• between striped and raid4.
Indirect conversions
Converting from one raid level to another may require multiple steps, converting first to intermediate raid levels.

Examples
Converting an LV from linear to raid1.

# lvs -a -o name,segtype,size vg
  LV Type   LSize
  lv linear 300.00g
# lvconvert --type raid1 --mirrors 1 vg/lv
# lvs -a -o name,segtype,size vg
  LV            Type   LSize
  lv            raid1  300.00g
  [lv_rimage_0] linear 300.00g
  [lv_rimage_1] linear 300.00g
  [lv_rmeta_0]  linear 3.00m
  [lv_rmeta_1]  linear 3.00m
Converting an LV from mirror to raid1.

# lvs -a -o name,segtype,size vg
  LV            Type   LSize
  lv            mirror 100.00g
  [lv_mimage_0] linear 100.00g
  [lv_mimage_1] linear 100.00g
  [lv_mlog]     linear 3.00m
# lvconvert --type raid1 vg/lv
# lvs -a -o name,segtype,size vg
  LV            Type   LSize
  lv            raid1  100.00g
  [lv_rimage_0] linear 100.00g
  [lv_rimage_1] linear 100.00g
  [lv_rmeta_0]  linear 3.00m
  [lv_rmeta_1]  linear 3.00m
Converting an LV to raid1 with three images (the original plus two mirror images):

# lvconvert --type raid1 --mirrors 2 vg/lv
# lvcreate --stripes 4 -L64M -n lv vg
# lvconvert --type raid6 vg/lv
# lvs -a -o lv_name,segtype,sync_percent,data_copies
  LV            Type      Cpy%Sync #Cpy
  lv            raid6_n_6 100.00   3
  [lv_rimage_0] linear
  [lv_rimage_1] linear
  [lv_rimage_2] linear
  [lv_rimage_3] linear
  [lv_rimage_4] linear
  [lv_rimage_5] linear
  [lv_rmeta_0]  linear
  [lv_rmeta_1]  linear
  [lv_rmeta_2]  linear
  [lv_rmeta_3]  linear
  [lv_rmeta_4]  linear
  [lv_rmeta_5]  linear
RAID Reshaping
RAID reshaping is changing attributes of a RAID LV while keeping the same RAID level. This includes changing RAID layout, stripe size, or number of stripes.

Examples
(Command output shown in examples may change.)

# lvs -o lv_name,segtype,sync_percent,data_copies
  LV            Type      Cpy%Sync #Cpy
  lv            raid6_n_6 100.00   3
  [lv_rimage_0] linear
  [lv_rimage_1] linear
  [lv_rimage_2] linear
  [lv_rimage_3] linear
  [lv_rimage_4] linear
  [lv_rimage_5] linear
  [lv_rmeta_0]  linear
  [lv_rmeta_1]  linear
  [lv_rmeta_2]  linear
  [lv_rmeta_3]  linear
  [lv_rmeta_4]  linear
  [lv_rmeta_5]  linear

# lvconvert --type raid6_nr vg/lv

# lvs -a -o lv_name,segtype,sync_percent,data_copies
  LV            Type     Cpy%Sync #Cpy
  lv            raid6_nr 100.00   3
  [lv_rimage_0] linear
  [lv_rimage_0] linear
  [lv_rimage_1] linear
  [lv_rimage_1] linear
  [lv_rimage_2] linear
  [lv_rimage_2] linear
  [lv_rimage_3] linear
  [lv_rimage_3] linear
  [lv_rimage_4] linear
  [lv_rimage_5] linear
  [lv_rmeta_0]  linear
  [lv_rmeta_1]  linear
  [lv_rmeta_2]  linear
  [lv_rmeta_3]  linear
  [lv_rmeta_4]  linear
  [lv_rmeta_5]  linear
# lvs -a -o lv_name,segtype,seg_pe_ranges,dataoffset
  LV            Type     PE Ranges          DOff
  lv            raid6_nr lv_rimage_0:0-32 \
                         lv_rimage_1:0-32 \
                         lv_rimage_2:0-32 \
                         lv_rimage_3:0-32
  [lv_rimage_0] linear   /dev/sda:0-31      2048
  [lv_rimage_0] linear   /dev/sda:33-33
  [lv_rimage_1] linear   /dev/sdaa:0-31     2048
  [lv_rimage_1] linear   /dev/sdaa:33-33
  [lv_rimage_2] linear   /dev/sdab:1-33     2048
  [lv_rimage_3] linear   /dev/sdac:1-33     2048
  [lv_rmeta_0]  linear   /dev/sda:32-32
  [lv_rmeta_1]  linear   /dev/sdaa:32-32
  [lv_rmeta_2]  linear   /dev/sdab:0-0
  [lv_rmeta_3]  linear   /dev/sdac:0-0
# lvconvert --stripes 5 vg/lv
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume vg/lv will \
  grow it from 99 to 165 extents!
  Run "lvresize -l99 vg/lv" to shrink it or use the additional \
  capacity.
  Logical volume vg/lv successfully converted.

# lvs vg/lv
  LV VG Attr       LSize   Cpy%Sync
  lv vg rwi-a-r-s- 652.00m 52.94

# lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV            Attr       Type     PE Ranges          DOff
  lv            rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
                                    lv_rimage_1:0-33 \
                                    lv_rimage_2:0-33 ... \
                                    lv_rimage_5:0-33 \
                                    lv_rimage_6:0-33   0
  [lv_rimage_0] iwi-aor--- linear   /dev/sda:0-32      0
  [lv_rimage_0] iwi-aor--- linear   /dev/sda:34-34
  [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:0-32     0
  [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:34-34
  [lv_rimage_2] iwi-aor--- linear   /dev/sdab:0-32     0
  [lv_rimage_2] iwi-aor--- linear   /dev/sdab:34-34
  [lv_rimage_3] iwi-aor--- linear   /dev/sdac:1-34     0
  [lv_rimage_4] iwi-aor--- linear   /dev/sdad:1-34     0
  [lv_rimage_5] iwi-aor--- linear   /dev/sdae:1-34     0
  [lv_rimage_6] iwi-aor--- linear   /dev/sdaf:1-34     0
  [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
  [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
  [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
  [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
  [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
  [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0
  [lv_rmeta_6]  ewi-aor--- linear   /dev/sdaf:0-0
# lvconvert --stripes 4 vg/lv
  Using default stripesize 64.00 KiB.
  WARNING: Removing stripes from active logical volume vg/lv will \
  shrink it from 660.00 MiB to 528.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 206 extents due \
  to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished,\
  you have to run "lvconvert --stripes 4 vg/lv"
  Logical volume vg/lv successfully converted.

# lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV            Attr       Type     PE Ranges          DOff
  lv            rwi-a-r-s- raid6_nr lv_rimage_0:0-33 \
                                    lv_rimage_1:0-33 \
                                    lv_rimage_2:0-33 ... \
                                    lv_rimage_5:0-33 \
                                    lv_rimage_6:0-33   0
  [lv_rimage_0] Iwi-aor--- linear   /dev/sda:0-32      0
  [lv_rimage_0] Iwi-aor--- linear   /dev/sda:34-34
  [lv_rimage_1] Iwi-aor--- linear   /dev/sdaa:0-32     0
  [lv_rimage_1] Iwi-aor--- linear   /dev/sdaa:34-34
  [lv_rimage_2] Iwi-aor--- linear   /dev/sdab:0-32     0
  [lv_rimage_2] Iwi-aor--- linear   /dev/sdab:34-34
  [lv_rimage_3] Iwi-aor--- linear   /dev/sdac:1-34     0
  [lv_rimage_4] Iwi-aor--- linear   /dev/sdad:1-34     0
  [lv_rimage_5] Iwi-aor--- linear   /dev/sdae:1-34     0
  [lv_rimage_6] Iwi-aor-R- linear   /dev/sdaf:1-34     0
  [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
  [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
  [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
  [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
  [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
  [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0
  [lv_rmeta_6]  ewi-aor-R- linear   /dev/sdaf:0-0
# lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV Attr       Type     PE Ranges          DOff
  lv rwi-a-r-R- raid6_nr lv_rimage_0:0-33 \
                         lv_rimage_1:0-33 \
                         lv_rimage_2:0-33 ... \
                         lv_rimage_5:0-33 \
                         lv_rimage_6:0-33   8192
# lvconvert --stripes 4 vg/lv
  Using default stripesize 64.00 KiB.
  Logical volume vg/lv successfully converted.

# lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV            Attr       Type     PE Ranges          DOff
  lv            rwi-a-r--- raid6_nr lv_rimage_0:0-33 \
                                    lv_rimage_1:0-33 \
                                    lv_rimage_2:0-33 ... \
                                    lv_rimage_5:0-33   8192
  [lv_rimage_0] iwi-aor--- linear   /dev/sda:0-32      8192
  [lv_rimage_0] iwi-aor--- linear   /dev/sda:34-34
  [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:0-32     8192
  [lv_rimage_1] iwi-aor--- linear   /dev/sdaa:34-34
  [lv_rimage_2] iwi-aor--- linear   /dev/sdab:0-32     8192
  [lv_rimage_2] iwi-aor--- linear   /dev/sdab:34-34
  [lv_rimage_3] iwi-aor--- linear   /dev/sdac:1-34     8192
  [lv_rimage_4] iwi-aor--- linear   /dev/sdad:1-34     8192
  [lv_rimage_5] iwi-aor--- linear   /dev/sdae:1-34     8192
  [lv_rmeta_0]  ewi-aor--- linear   /dev/sda:33-33
  [lv_rmeta_1]  ewi-aor--- linear   /dev/sdaa:33-33
  [lv_rmeta_2]  ewi-aor--- linear   /dev/sdab:33-33
  [lv_rmeta_3]  ewi-aor--- linear   /dev/sdac:0-0
  [lv_rmeta_4]  ewi-aor--- linear   /dev/sdad:0-0
  [lv_rmeta_5]  ewi-aor--- linear   /dev/sdae:0-0

# lvs -a -o lv_name,attr,segtype,reshapelen vg
  LV            Attr       Type     RSize
  lv            rwi-a-r--- raid6_nr 24.00m
  [lv_rimage_0] iwi-aor--- linear   4.00m
  [lv_rimage_0] iwi-aor--- linear
  [lv_rimage_1] iwi-aor--- linear   4.00m
  [lv_rimage_1] iwi-aor--- linear
  [lv_rimage_2] iwi-aor--- linear   4.00m
  [lv_rimage_2] iwi-aor--- linear
  [lv_rimage_3] iwi-aor--- linear   4.00m
  [lv_rimage_4] iwi-aor--- linear   4.00m
  [lv_rimage_5] iwi-aor--- linear   4.00m
  [lv_rmeta_0]  ewi-aor--- linear
  [lv_rmeta_1]  ewi-aor--- linear
  [lv_rmeta_2]  ewi-aor--- linear
  [lv_rmeta_3]  ewi-aor--- linear
  [lv_rmeta_4]  ewi-aor--- linear
  [lv_rmeta_5]  ewi-aor--- linear
# lvconvert --stripes 4 vg/lv
  Using default stripesize 64.00 KiB.
  No change in RAID LV vg/lv layout, freeing reshape space.
  Logical volume vg/lv successfully converted.

# lvs -a -o lv_name,attr,segtype,reshapelen vg
  LV            Attr       Type     RSize
  lv            rwi-a-r--- raid6_nr 0
  [lv_rimage_0] iwi-aor--- linear   0
  [lv_rimage_0] iwi-aor--- linear
  [lv_rimage_1] iwi-aor--- linear   0
  [lv_rimage_1] iwi-aor--- linear
  [lv_rimage_2] iwi-aor--- linear   0
  [lv_rimage_2] iwi-aor--- linear
  [lv_rimage_3] iwi-aor--- linear   0
  [lv_rimage_4] iwi-aor--- linear   0
  [lv_rimage_5] iwi-aor--- linear   0
  [lv_rmeta_0]  ewi-aor--- linear
  [lv_rmeta_1]  ewi-aor--- linear
  [lv_rmeta_2]  ewi-aor--- linear
  [lv_rmeta_3]  ewi-aor--- linear
  [lv_rmeta_4]  ewi-aor--- linear
  [lv_rmeta_5]  ewi-aor--- linear
# lvconvert --type striped vg/lv
  Unable to convert LV vg/lv from raid6_nr to striped.
  Converting vg/lv from raid6_nr is directly possible to the \
  following layouts:
    raid6_nc
    raid6_zr
    raid6_la_6
    raid6_ls_6
    raid6_ra_6
    raid6_rs_6
    raid6_n_6
# lvconvert --type raid6_n_6 vg/lv
  Using default stripesize 64.00 KiB.
  Converting raid6_nr LV vg/lv to raid6_n_6.
  Are you sure you want to convert raid6_nr LV vg/lv? [y/n]: y
  Logical volume vg/lv successfully converted.
# lvconvert --type striped vg/lv
  Logical volume vg/lv successfully converted.

# lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV Attr       Type    PE Ranges        DOff
  lv -wi-a----- striped /dev/sda:2-32 \
                        /dev/sdaa:2-32 \
                        /dev/sdab:2-32 \
                        /dev/sdac:3-33
  lv -wi-a----- striped /dev/sda:34-35 \
                        /dev/sdaa:34-35 \
                        /dev/sdab:34-35 \
                        /dev/sdac:34-35
# lvconvert --type raid10 vg/lv
  Using default stripesize 64.00 KiB.
  Logical volume vg/lv successfully converted.

# lvs -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  LV Attr       Type   PE Ranges          DOff
  lv rwi-a-r--- raid10 lv_rimage_0:0-32 \
                       lv_rimage_4:0-32 \
                       lv_rimage_1:0-32 ... \
                       lv_rimage_3:0-32 \
                       lv_rimage_7:0-32   0

# lvs -a -o lv_name,attr,segtype,seg_pe_ranges,dataoffset vg
  WARNING: Cannot find matching striped segment for vg/lv_rimage_3.
  LV            Attr       Type   PE Ranges          DOff
  lv            rwi-a-r--- raid10 lv_rimage_0:0-32 \
                                  lv_rimage_4:0-32 \
                                  lv_rimage_1:0-32 ... \
                                  lv_rimage_3:0-32 \
                                  lv_rimage_7:0-32   0
  [lv_rimage_0] iwi-aor--- linear /dev/sda:2-32      0
  [lv_rimage_0] iwi-aor--- linear /dev/sda:34-35
  [lv_rimage_1] iwi-aor--- linear /dev/sdaa:2-32     0
  [lv_rimage_1] iwi-aor--- linear /dev/sdaa:34-35
  [lv_rimage_2] iwi-aor--- linear /dev/sdab:2-32     0
  [lv_rimage_2] iwi-aor--- linear /dev/sdab:34-35
  [lv_rimage_3] iwi-XXr--- linear /dev/sdac:3-35     0
  [lv_rimage_4] iwi-aor--- linear /dev/sdad:1-33     0
  [lv_rimage_5] iwi-aor--- linear /dev/sdae:1-33     0
  [lv_rimage_6] iwi-aor--- linear /dev/sdaf:1-33     0
  [lv_rimage_7] iwi-aor--- linear /dev/sdag:1-33     0
  [lv_rmeta_0]  ewi-aor--- linear /dev/sda:0-0
  [lv_rmeta_1]  ewi-aor--- linear /dev/sdaa:0-0
  [lv_rmeta_2]  ewi-aor--- linear /dev/sdab:0-0
  [lv_rmeta_3]  ewi-aor--- linear /dev/sdac:0-0
  [lv_rmeta_4]  ewi-aor--- linear /dev/sdad:0-0
  [lv_rmeta_5]  ewi-aor--- linear /dev/sdae:0-0
  [lv_rmeta_6]  ewi-aor--- linear /dev/sdaf:0-0
  [lv_rmeta_7]  ewi-aor--- linear /dev/sdag:0-0
# lvs -a -o name,size,segtype,syncpercent,datastripes,\
  stripesize,reshapelenle,devices vg
  LV LSize   Type   Cpy%Sync #DStr Stripe RSize Devices
  lv 128.00m linear          1     0            /dev/sda(0)
# lvconvert --mirrors 1 vg/lv
  Logical volume vg/lv successfully converted.

# lvs -a -o name,size,segtype,datastripes,\
  stripesize,reshapelenle,devices vg
  LV            LSize   Type   #DStr Stripe RSize Devices
  lv            128.00m raid1  2     0            lv_rimage_0(0),\
                                                  lv_rimage_1(0)
  [lv_rimage_0] 128.00m linear 1     0            /dev/sda(0)
  [lv_rimage_1] 128.00m linear 1     0            /dev/sdhx(1)
  [lv_rmeta_0]  4.00m   linear 1     0            /dev/sda(32)
  [lv_rmeta_1]  4.00m   linear 1     0            /dev/sdhx(0)
# lvconvert --type raid5_n vg/lv
  Using default stripesize 64.00 KiB.
  Logical volume vg/lv successfully converted.

# lvs -a -o name,size,segtype,syncpercent,datastripes,\
  stripesize,reshapelenle,devices vg
  LV            LSize   Type    #DStr Stripe RSize Devices
  lv            128.00m raid5_n 1     64.00k 0     lv_rimage_0(0),\
                                                   lv_rimage_1(0)
  [lv_rimage_0] 128.00m linear  1     0      0     /dev/sda(0)
  [lv_rimage_1] 128.00m linear  1     0      0     /dev/sdhx(1)
  [lv_rmeta_0]  4.00m   linear  1     0            /dev/sda(32)
  [lv_rmeta_1]  4.00m   linear  1     0            /dev/sdhx(0)
# lvconvert --stripesize 128k --stripes 5 vg/lv
  Converting stripesize 64.00 KiB of raid5_n LV vg/lv to 128.00 KiB.
  WARNING: Adding stripes to active logical volume vg/lv will grow \
  it from 32 to 160 extents!
  Run "lvresize -l32 vg/lv" to shrink it or use the additional capacity.
  Logical volume vg/lv successfully converted.

# lvs -a -o name,size,segtype,datastripes,\
  stripesize,reshapelenle,devices
  LV            LSize   Type    #DStr Stripe  RSize Devices
  lv            640.00m raid5_n 5     128.00k 6     lv_rimage_0(0),\
                                                    lv_rimage_1(0),\
                                                    lv_rimage_2(0),\
                                                    lv_rimage_3(0),\
                                                    lv_rimage_4(0),\
                                                    lv_rimage_5(0)
  [lv_rimage_0] 132.00m linear  1     0       1     /dev/sda(33)
  [lv_rimage_0] 132.00m linear  1     0             /dev/sda(0)
  [lv_rimage_1] 132.00m linear  1     0       1     /dev/sdhx(33)
  [lv_rimage_1] 132.00m linear  1     0             /dev/sdhx(1)
  [lv_rimage_2] 132.00m linear  1     0       1     /dev/sdhw(33)
  [lv_rimage_2] 132.00m linear  1     0             /dev/sdhw(1)
  [lv_rimage_3] 132.00m linear  1     0       1     /dev/sdhv(33)
  [lv_rimage_3] 132.00m linear  1     0             /dev/sdhv(1)
  [lv_rimage_4] 132.00m linear  1     0       1     /dev/sdhu(33)
  [lv_rimage_4] 132.00m linear  1     0             /dev/sdhu(1)
  [lv_rimage_5] 132.00m linear  1     0       1     /dev/sdht(33)
  [lv_rimage_5] 132.00m linear  1     0             /dev/sdht(1)
  [lv_rmeta_0]  4.00m   linear  1     0             /dev/sda(32)
  [lv_rmeta_1]  4.00m   linear  1     0             /dev/sdhx(0)
  [lv_rmeta_2]  4.00m   linear  1     0             /dev/sdhw(0)
  [lv_rmeta_3]  4.00m   linear  1     0             /dev/sdhv(0)
  [lv_rmeta_4]  4.00m   linear  1     0             /dev/sdhu(0)
  [lv_rmeta_5]  4.00m   linear  1     0             /dev/sdht(0)
# lvconvert --type striped vg/lv
  Logical volume vg/lv successfully converted.

# lvs -a -o name,size,segtype,datastripes,\
  stripesize,reshapelenle,devices vg
  LV LSize   Type    #DStr Stripe  RSize Devices
  lv 640.00m striped 5     128.00k       /dev/sda(33),\
                                         /dev/sdhx(33),\
                                         /dev/sdhw(33),\
                                         /dev/sdhv(33),\
                                         /dev/sdhu(33)
  lv 640.00m striped 5     128.00k       /dev/sda(0),\
                                         /dev/sdhx(1),\
                                         /dev/sdhw(1),\
                                         /dev/sdhv(1),\
                                         /dev/sdhu(1)
RAID5 Variants

raid5_ls (same as raid5)

RAID6 Variants

raid6 (same as raid6_zr)

raid6_n_6
with striped data, used for striped/raid0 conversions

raid6_{ls,rs,la,ra}_6
used for conversions from/to raid5_{ls,rs,la,ra}
History
The 2.6.38-rc1 version of the Linux kernel introduced a device-mapper target to interface with the software RAID (MD) personalities. This provided device-mapper with RAID 4/5/6 capabilities and a larger development community. Later, support for RAID1, RAID10, and RAID1E (RAID 10 variants) was added. Support for these new kernel RAID targets was added to LVM version 2.02.87. The capabilities of the LVM raid1 type have surpassed the old mirror type, so raid1 is now recommended instead of mirror. raid1 became the default for mirroring in LVM version 2.02.100.

LVM TOOLS 2.02.177(2) (2017-12-18)                              Red Hat, Inc