RAID 6 --> ZFS pool
Today I completed the transition of my NAS machine from RAID 6 to a ZFS pool. To document the process for my future self and any semi-interested rando who stumbles across this, here is an outline of how I did it.
Part 1 - Research
I found this 'ZFS for Dummies' post really useful for re-familiarising myself with the core concepts. Once I'd done that, the Ubuntu examples here were also helpful, as was this post, which has more detailed information than I would ever need.
My RAID was nowhere near full, which meant I could manage the process without having to restore everything from backups (although I made damn sure I had working backups before I messed with it).
Part 2 - Degrading the RAID and pooling the freed-up drives
First, I marked two drives from the RAID as failed so I could use them in the zfs pool. I changed their partition type with cfdisk, but I had trouble making a pool out of them because they kept being picked up and re-assembled as a RAID device. In the end, I stopped this happening by using dd to nuke the start of each partition:
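For each drive, the whole dance looked something like this (the device names and md number here are placeholders rather than my actual setup):

mdadm --manage /dev/md0 --fail /dev/sdb1      # mark the drive as failed
mdadm --manage /dev/md0 --remove /dev/sdb1    # pull it out of the array
dd if=/dev/zero of=/dev/sdb1 bs=1M count=10   # overwrite the first 10MB of the partition with zeros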
Then zfs would finally shut up and make a mirror out of those two partitions. I chose to use partitions instead of whole disks because I want more flexibility with the size/brand of the actual disks I use. I also used partition IDs rather than device names because names like /dev/sdb can get swapped around if they end up cabled into different SATA ports, but partition IDs should remain constant.
Here's an example of the command I used:
zpool create -f sharepool mirror /dev/disk/by-partuuid/0008fc4d-01 /dev/disk/by-partuuid/000b5420-01
Part 3 - Copying the RAID data across
As I've already mentioned, this was only possible because I was using a tiny fraction of the RAID. However, it made this step as simple as copying from the degraded RAID to the new zfs pool:
cp -rp /raid /sharepool
Part 4 - Euthanise the RAID
With the data safely copied across, I didn't need to mark drives as failed any more. I just stopped the RAID, changed the partition types, and nuked the start of each partition as in Part 2.
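Roughly, that looked like this (device names are placeholders again, and I'm assuming the array was /dev/md0):

mdadm --stop /dev/md0                         # stop the degraded array
cfdisk /dev/sdc                               # change the partition type
dd if=/dev/zero of=/dev/sdc1 bs=1M count=10   # zero the first 10MB of the partition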
Part 5 - Add further drives to the zfs pool
Again, I used partition IDs rather than device names, but this was as simple as using 'zpool add' instead of 'zpool create'. Now the pool is made up of two vdevs, each a mirror of two drives, so I have redundancy and still more than enough space.
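The command is the same shape as the one in Part 2, just with 'add' instead of 'create' (the partition UUIDs below are made-up placeholders, not my real ones):

zpool add sharepool mirror /dev/disk/by-partuuid/1111aaaa-01 /dev/disk/by-partuuid/2222bbbb-01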
Part 6 - Sorting the backup drive
My RAID setup was really quite weird: 4 x 1.5TB drives and a 2TB drive (with a 1.5TB partition on it). The remaining space on the 2TB drive was previously used as a temporary backup whilst monkeying around with the other drives.
I wanted to set up the 2TB drive to hold a rolling backup of some of the stuff on the zfs pool, but I didn't want to lose the stuff in the last 0.5TB of it.
Doing this was not strictly necessary because I already had all the RAID data safely on the new zfs pool, but I am paranoid.
So I made a new filesystem on the 1.5TB partition of the 2TB drive, then copied the stuff backed up in the last 0.5TB down to the new filesystem. I could then delete the old 0.5TB partition, expand the 1.5TB partition to the full 2TB, and run resize2fs to expand the filesystem to fill the partition.
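Roughly, that went something like this; the drive and mount point names are just for illustration (I'm assuming the 2TB drive is /dev/sde, with the 1.5TB partition as sde1 and the 0.5TB one as sde2, and an ext4 filesystem, since resize2fs only works on ext2/3/4):

mkfs.ext4 /dev/sde1                 # new filesystem on the 1.5TB partition
mount /dev/sde1 /mnt/new
mount /dev/sde2 /mnt/old            # the old 0.5TB temporary backup
cp -rp /mnt/old/. /mnt/new/         # copy the old backup down to the new filesystem
umount /mnt/old /mnt/new
parted /dev/sde rm 2                # delete the old 0.5TB partition
parted /dev/sde resizepart 1 100%   # grow the first partition to fill the disk
e2fsck -f /dev/sde1                 # resize2fs wants a clean check first
resize2fs /dev/sde1                 # expand the filesystem to fill the partition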
Part 7 - Setting up regular backups
Most of the data on my NAS is music ripped from CDs that I don't want to have to rip again (gods, that would take ages :S). Another whole bunch of files is years and years of irreplaceable photos. I don't need rsync for these because the only way their contents will change is if the entire file has changed, so rsync's trick of copying only the updated parts of each file doesn't buy me anything.
Instead, I'm using cp -R -u -p. The -u bit is the important one: it is the 'update' flag which makes it only copy files that either don't exist in the target directory, or are newer than the target directory's version.
So I tested that the command works, set it up via crontab -e, and added a test file to the pool so I can check tomorrow that it got copied across!
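For the record, the crontab line looks roughly like this (the schedule and paths are made up for illustration rather than being my actual ones):

# run every night at 3:30am
30 3 * * * cp -R -u -p /sharepool/music /sharepool/photos /mnt/backup/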