My server was running 2x 1TB disks in a mirror, and I needed more space, so I purchased 2x 2TB disks.
The following steps demonstrate how I expanded the RAID volume to 2TB without data loss.
My RAID array /dev/md1 consisted of two RAID members, /dev/sdc1 and /dev/sdd1.
The first step was to remove one of the disks from the array (in this example, /dev/sdd1).
Fail the first disk you plan to remove:
mdadm --manage /dev/md1 --fail /dev/sdd1
Then remove /dev/sdd1 from the RAID array:
mdadm --manage /dev/md1 --remove /dev/sdd1
Before powering down the server, get the serial number for /dev/sdd, which can be found with:
udevadm info --query=all --name=/dev/sdd | grep ID_SERIAL
Power down the server, and remove
/dev/sdd – install the new disk and restart.
Verify that the new disk is installed correctly using
fdisk -l; it should identify as
/dev/sdd (the same device ID as the disk you removed).
Now partition the new disk to the required (in this case maximum) size:
parted /dev/sdd
(parted) mklabel gpt
(parted) unit gb
(parted) mkpart primary ext4 1049kb 2TB
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt) [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) quit
See this post for more info on parted.
Your new disk should now be partitioned to the maximum size (in this case 2TB).
Now add the (new) disk to the existing RAID array.
mdadm --manage /dev/md1 --add /dev/sdd1
mdadm: re-added /dev/sdd1
Now wait for the array to resync; progress can be checked with:
cat /proc/mdstat
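If you'd rather not keep re-running that by hand, the wait can be scripted. A minimal sketch; wait_for_resync is my own helper name, not from mdadm, and the 60-second poll interval is arbitrary:

```shell
#!/bin/sh
# Block until /proc/mdstat no longer reports a resync/recovery in progress.
# The file path is a parameter so the logic can be tested against a copy.
wait_for_resync() {
    mdstat="${1:-/proc/mdstat}"
    while grep -qE 'resync|recovery' "$mdstat"; do
        sleep 60
    done
    echo "resync complete"
}
```

Run `wait_for_resync` (or `wait_for_resync /proc/mdstat`) and it returns once the kernel has finished rebuilding the mirror.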
(*) Once the resync is complete, repeat the above steps, this time with the second disk (/dev/sdc).
Confirm that your data is still present and not corrupted.
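One hedged way to do that check: snapshot checksums of the files before failing the first disk, then verify them after each resync. The snapshot_sums/verify_sums helper names are mine, and /storage/vol-2 is the array's mount point in this setup; adjust to yours:

```shell
#!/bin/sh
# Checksum every file under a directory, and verify against that snapshot later.
snapshot_sums() {  # $1 = directory, $2 = checksum file to write
    find "$1" -xdev -type f -exec md5sum {} + > "$2"
}
verify_sums() {    # $1 = checksum file written by snapshot_sums
    md5sum -c --quiet "$1" && echo "data verified"
}
```

For example, `snapshot_sums /storage/vol-2 /root/before.md5` before the first swap, then `verify_sums /root/before.md5` afterwards. On a 1TB volume this re-reads everything, so expect it to take a while.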
Following completion of the step above (*), wait for the RAID to resync for the 2nd time.
Quick list of steps for the second disk (/dev/sdc):
1) Fail the disk.
mdadm --manage /dev/md1 --fail /dev/sdc1
2) Remove the disk.
mdadm --manage /dev/md1 --remove /dev/sdc1
3) Before powering down the server, get the serial number for /dev/sdc, which can be found with udevadm:
udevadm info --query=all --name=/dev/sdc | grep ID_SERIAL
4) Physically remove the old disk and fit the new one, restart the server.
5) Partition the disk (same steps as above, but ensure you select the *new* disk!).
parted /dev/sdc   <---- *ensure you select the new disk*
(parted) mklabel gpt
(parted) unit gb
(parted) mkpart primary ext4 1049kb 2TB
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt) [optimal]/minimal? optimal
Partition number? 1
1 aligned
(parted) quit
6) Add the disk to the array.
mdadm --manage /dev/md1 --add /dev/sdc1
At this stage the RAID should now be resyncing.
You can view the available space for expansion with
mdadm --examine /dev/sdd1
You will see something like this:
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)  <-- new size (2TB)
Array Size : 976629760 (931.39 GiB 1000.07 GB)        <-- size of original RAID (1TB)
Used Dev Size : 976629760 (931.39 GiB 1000.07 GB)     <-- used space
Super Offset : xxxxxxxx sectors
State : clean
The above shows there is 1TB available for RAID growth.
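That subtraction can be scripted if you want the headroom figure directly. A sketch assuming this metadata format reports sizes in KiB (which matches the GiB figures in parentheses above); headroom is my own helper name:

```shell
#!/bin/sh
# Read `mdadm --examine` output on stdin and print the unused capacity.
# Avail Dev Size minus Array Size, in KiB, divided by 1024^2 to get GiB.
headroom() {
    awk '/Avail Dev Size/ { avail = $5 }
         /Array Size/     { array = $4 }
         END { printf "%.1f GiB available for growth\n", (avail - array) / 1048576 }'
}
```

Usage: `mdadm --examine /dev/sdd1 | headroom`.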
Disable the write intent bitmap on /dev/md1:
mdadm --grow /dev/md1 --bitmap=none
And then expand the md1 device to the maximum size.
mdadm --grow /dev/md1 --size=max
Re-enable the write intent bitmap:
mdadm --grow /dev/md1 --bitmap=internal
Unmount the RAID device and check the filesystem.
umount /dev/md1
e2fsck -f /dev/md1
Begin the resize process.
resize2fs /dev/md1
Remount the RAID device.
df -h should show the new size.
root@backup:~# df -h /dev/md1
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1        1.8T  1.7T  179G  91% /storage/vol-2
If this is not a system volume, you can also reduce the reserved space to 0% to yield a little more space.
tune2fs -m 0 /dev/md1
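To put a rough number on what that reclaims: ext4 reserves 5% of blocks by default, and on a volume around the ~1862 GiB reported by mdadm above that is a meaningful chunk (back-of-envelope arithmetic, using my approximate figure):

```shell
# 5% of ~1862 GiB, i.e. roughly what -m 0 hands back on this array.
echo $((1862 * 5 / 100))   # prints 93 (GiB, approximately)
```

You can confirm the current setting before and after with `tune2fs -l /dev/md1 | grep -i 'reserved block count'`.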
Getting the serial number beforehand helps you identify which physical disk to pull when you have two drives of the same make and model.
I seem to recall having to format the new disks after partitioning, but I can't remember for sure (I'm writing this a few days after doing the upgrade), so if you run into issues, try formatting the new disk.
Other than that, I had no issues.