!! Using LVM to migrate filesystems to a RAID device

! Introduction

I administer a CentOS 4 server for a small company. The system was originally set up using multiple filesystems on LVM on a single SATA hard drive. This disk ran out of space, and I wanted more reliability, so I decided to install two additional large drives in the system and mirror them using RAID 1. Since I had originally configured the machine with LVM, I was able to do most of this while the system was in production. This procedure relies on several LVM features which are not terribly well documented, so I decided it would be a good idea to write down my procedure here to help others who are contemplating a similar upgrade.

Note that after all of this I left the original drive in the machine as the boot drive, since it wasn't worth the extra effort to make the new raid array the boot drive as well. That could be done as a further optimization to increase reliability if desired.

! Procedure

* Shut down the system and install two new SATA drives (this would work with PATA drives, SCSI drives, etc., although the drive names might be different).
* The new drives will appear as sdb and sdc when the system comes back up (the original drive is sda).
* The system can be in production use from this point forward, although it will be sluggish due to all the extra disk activity.
* Use fdisk to create one large partition on each drive (sdb1, sdc1) and set the partition type to "linux raid auto" (type fd). A non-interactive sketch of this step appears after the procedure.
* You may want to create sdb1 and sdc1 as 100MB normal linux partitions (type 83) if you plan in the future to boot off the raid drives.
* In that case, use the rest of each disk for sdb2 and sdc2 and adjust the instructions below accordingly.
* Create a new mirrored raid array on the two drives with <code>/sbin/mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1</code>
* Check the raid array status with <code>/sbin/mdadm --detail /dev/md0</code>
* Note that if the system was never previously configured with raid, the initrd won't include the raid drivers and the system will fail to boot. Thus you need to build a new initrd:
<verbatim>
mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)
</verbatim>
* Verify from the output of mkinitrd that the md-mirror driver was included.
* Set up <code>/etc/mdadm.conf</code> (not required, but a good idea for future reference):
<verbatim>
echo "DEVICE /dev/sdb1 /dev/sdc1" >/etc/mdadm.conf
mdadm --detail --scan >>/etc/mdadm.conf
echo "MAILADDR root@example.com" >>/etc/mdadm.conf
</verbatim>
* Initialize a new lvm physical volume on the new raid device with <code>pvcreate -v /dev/md0</code>
* You can continue working at this point, but if you want to be cautious you should wait until the raid rebuild finishes mirroring the drives.
* Watch <code>/proc/mdstat</code> for the status of this operation (see the monitoring sketch after the procedure).
* Extend the existing lvm volume group on the original /dev/sda2 drive onto /dev/md0 with <code>vgextend vg /dev/md0</code>
* Use pvdisplay to verify that both /dev/sda2 and /dev/md0 are part of the newly expanded volume group.
* Use vgdisplay to verify the volume group (vg) is now the size of /dev/sda2 and /dev/md0 combined.
* Move the /home logical volume from the old physical device to the new raid device with <code>pvmove -v -n /dev/vg/home /dev/sda2 /dev/md0</code>
* This can be repeated for any or all logical volumes on the old drive (a loop sketch appears after the procedure).
* Wait for pvmove to finish (this may take a few hours).
* The <code>-v</code> output from pvmove will update the percentage done every 15 seconds.
* Verify the logical volume has been moved to the new physical device with <code>/usr/sbin/lvdisplay -m /dev/vg/home</code>
* Resize the logical volume holding /home (/dev/vg/home), while forcing it to remain on the raid device /dev/md0, with <code>lvresize -L+425G /dev/vg/home /dev/md0</code>
* Use lvdisplay after this completes to verify that /home is now the new larger size.
* At this point we have to increase the size of the ext3 filesystem on /home, but this requires unmounting /home.
* This can be done on a running system if you can boot everyone off long enough to unmount /home.
* Otherwise, go to single user mode with <code>/sbin/init 1</code>.
* Check the integrity of /home with <code>e2fsck -f /dev/vg/home</code>.
* This will take a while, maybe up to an hour...
* Resize /home with <code>/sbin/resize2fs /dev/vg/home</code>.
* This will resize the filesystem to fill the newly expanded logical volume <code>/dev/vg/home</code>.
* Verify the new /home works by mounting it, checking df output, and reading a few files (see the verification sketch after the procedure).
* Reboot the machine to verify that everything comes up properly after reboot.
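! Command sketches

The fdisk step above is interactive; for reference, a non-interactive equivalent using sfdisk might look like the sketch below. Treat it as an untested illustration: it assumes the new drives are blank and really are sdb and sdc, and it will destroy whatever is on them.

<verbatim>
# One partition spanning each disk, partition type fd (linux raid autodetect).
# Destructive -- double-check the device names before running!
echo ',,fd' | sfdisk /dev/sdb
echo ',,fd' | sfdisk /dev/sdc
</verbatim>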
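To follow the rebuild status without repeatedly typing cat, something like:

<verbatim>
# Re-read the raid status every 10 seconds; ctrl-C to quit.
watch -n 10 cat /proc/mdstat
</verbatim>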
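The pvmove step can be repeated per logical volume; a loop along these lines would drain them all off the old drive. The volume names here (home, var, usr) are placeholders, so substitute the ones lvdisplay reports on your system.

<verbatim>
# Hypothetical LV names -- list the real ones with lvdisplay.
for lv in home var usr; do
    pvmove -v -n /dev/vg/$lv /dev/sda2 /dev/md0
done
</verbatim>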
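And for the final check of /home, a minimal verification pass might be:

<verbatim>
mount /dev/vg/home /home
df -h /home        # size should reflect the enlarged volume
ls -l /home        # spot-check that the contents look right and are readable
</verbatim>

CategoryGeekStuff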