Using LVM to migrate filesystems to a RAID device

Introduction

I administer a CentOS 4 server for a small company. The system was originally set up using multiple filesystems on LVM on a single SATA hard drive. This disk ran out of space, and I wanted more reliability, so I decided to install two additional large drives in the system and mirror them using RAID 1. Since I had originally configured the machine with LVM, I was able to do most of this while the system was in production.

This procedure relies on several LVM features which are not terribly well documented, so I decided it would be a good idea to write down my procedure here to help others who are contemplating a similar upgrade.

Note that after all of this I left the original drive in the machine as the boot drive, since it wasn't worth the extra effort to make the new RAID array the boot drive as well. That could be done as a further optimization to increase reliability if desired.

Procedure

  • Shut down the system and install two new SATA drives (this would work with PATA drives, SCSI drives, etc., although the drive names might be different).
  • The new drives will appear as sdb and sdc when the system comes back up (the original drive is sda).
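If you want to double-check that the kernel sees the new disks before partitioning (a quick sanity check; device names may differ on your hardware):
cat /proc/partitions
/sbin/fdisk -l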
  • The system can be in production use from this point forward, although it will be sluggish due to all the extra disk activity.
  • Use fdisk to create one large partition on each drive (sdb1, sdc1) and set the partition type to "Linux raid autodetect" (type fd).

    • You may want to create sdb1 and sdc1 as 100MB normal Linux partitions (type 83) if you plan to boot off the raid drives in the future.
    • In that case, use the rest of each disk for sdb2 and sdc2 and adjust the instructions below accordingly.
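For reference, here's a rough sketch of the partitioning step (fdisk is interactive, so the letters in the comments are keystrokes, not arguments):
/sbin/fdisk /dev/sdb   # n, p, 1, accept the default start/end, then t, fd, then w to write
/sbin/fdisk /dev/sdc   # repeat the same sequence on the second drive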
  • Create a new mirrored raid on the two drives with /sbin/mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  • Check the raid array status with /sbin/mdadm --detail /dev/md0
  • Note that if the system was never previously configured with raid, the initrd won't include the raid drivers and the system will fail to boot. Thus you need to build a new initrd.
  • To build a new initrd:
mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)
  • Verify from the output of mkinitrd that the md-mirror driver was included.
  • Set up /etc/mdadm.conf (not required, but a good idea for future reference):
echo "DEVICE /dev/sdb1 /dev/sdc1" >/etc/mdadm.conf
mdadm --detail --scan >>/etc/mdadm.conf
echo "MAILADDR [email protected]" >>/etc/mdadm.conf
  • Initialize a new lvm physical volume on the new raid device with pvcreate -v /dev/md0
  • You can continue working at this point, but if you want to be cautious you should wait until the raid rebuild finishes mirroring the drives.

    • Watch /proc/mdstat for the status of this operation.
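For example:
watch -n 5 cat /proc/mdstat   # updates the resync progress every 5 seconds; Ctrl-C to quit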
  • Extend the existing LVM volume group from the original drive's /dev/sda2 partition onto /dev/md0 with vgextend vg /dev/md0
  • Use pvdisplay to verify that both /dev/sda2 and /dev/md0 are part of the newly expanded volume group.
  • Use vgdisplay to verify the volume group (vg) is now the size of /dev/sda2 and /dev/md0 combined.
  • Move the /home logical volume from the old physical device to the new raid device with pvmove -v -n /dev/vg/home /dev/sda2 /dev/md0

    • This can be repeated for any or all logical volumes on old drive.
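For example, if there were also a /var logical volume in the same volume group (a hypothetical name), it could be moved the same way:
pvmove -v -n /dev/vg/var /dev/sda2 /dev/md0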
  • Wait for pvmove to finish (this may take a few hours).

    • The -v output from pvmove will update the percentage done every 15 seconds.
  • Verify the logical volume has been moved to the new physical device with /usr/sbin/lvdisplay -m /dev/vg/home
  • Resize the logical volume holding /home (/dev/vg/home), while forcing it to remain on raid device /dev/md0 with lvresize -L+425G /dev/vg/home /dev/md0
  • Use lvdisplay after this completes to verify that /home is now the new larger size.
  • At this point we have to increase the size of the ext3 filesystem on /home, but this requires unmounting /home.

    • This can be done on a running system if you can kick everyone off long enough to unmount /home.
    • Otherwise, go to single user mode with /sbin/init 1.
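A rough sketch of getting /home unmounted on a live system (note that fuser -k kills anything still using the filesystem, so use it with care):
/sbin/fuser -vm /home   # list processes with files open on /home
/sbin/fuser -km /home   # kill those processes
umount /home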
  • Check the integrity of /home with e2fsck -f /dev/vg/home.

    • This will take a while, maybe up to an hour...
  • Resize /home with /sbin/resize2fs /dev/vg/home.

    • This will resize the filesystem to fill the newly expanded logical volume /dev/vg/home.
  • Verify new /home works by mounting it, checking df output, and reading a few files.
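For example:
mount /home    # remount using the existing /etc/fstab entry
df -h /home    # confirm the new, larger size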
  • Reboot the machine to verify that everything comes up properly after reboot.



