Running Proxmox VE 3.3 on a software RAID

During installation of Proxmox VE 3.3 (available here), you are only asked which disk to install to, not which disk layout to use. There are several ways of getting Proxmox onto a software RAID. One of them is to run through the standard installation process and add a software RAID afterwards, using mdadm.
This blog post simply shows how setting up a software RAID on a fresh Proxmox installation worked for me! Feel free to try the following steps on a clean installation – however, I recommend not performing these actions on a live production environment! I do not take responsibility for any data loss that might occur! Also, keep in mind that running Proxmox VE on a software RAID is neither supported nor recommended by Proxmox!
Now that we've got the disclaimer out of the way, let's get started:
Installing Proxmox
Work through the installation using the provided ISO installer. Keep track of which disk you use as the installation disk – in my case, this is /dev/sda. Once you complete the installation, run the following steps to get an up-to-date Proxmox VE version to work with.

root@[server]:/# apt-get update
root@[server]:/# apt-get dist-upgrade
root@[server]:/# apt-get upgrade

The next step, if you haven't done so already, is to install mdadm:

root@[server]:/# apt-get install mdadm

Setting up the software RAID mirror
After finishing the installation, this is the partitioning scheme I had to work with:

root@[server]:/# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA HGST HUS724020AL (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB   ext3         primary  boot
 3      537MB   2000GB  2000GB               primary  lvm

Partitions 2 and 3 are the ones that need to be mirrored. Let’s prepare the second disk by copying the partition table from sda to sdb and changing the partition types to RAID.

root@[server]:/# sgdisk -R=/dev/sdb /dev/sda
root@[server]:/# sgdisk -t 2:fd00 /dev/sdb
root@[server]:/# sgdisk -t 3:fd00 /dev/sdb

Checking the partition table on sdb, it should now look like this:

root@[server]:/# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb
Using /dev/sdb
(parted) print
Model: ATA HGST HUS724020AL (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB                primary  raid
 3      537MB   2000GB  2000GB               primary  raid

We now need to create the RAID arrays. Because we are still booted from the non-RAID partitions on sda, each array is created in degraded mode with a "missing" member for now; the missing slot will later be filled by the corresponding partition on sda.

root@[server]:/# mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb2
root@[server]:/# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb3

You might run into an error saying "mdadm: cannot open /dev/sdb2: Device or resource busy". In this case, make sure you don't already have running RAID sets, which can happen if the disks in this machine were used in a RAID array before. If so, stop the old arrays and wipe out the first 512 bytes of each disk to clear any existing RAID metadata – this was a big headache for me, and it took me a while to figure out what the problem was.

root@[server]:/# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1]
523968 blocks super 1.2 [2/1] [_U]
[…]
unused devices: <none>
root@[server]:/# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@[server]:/# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@[server]:/# dd if=/dev/zero of=/dev/sda bs=512 count=1
root@[server]:/# dd if=/dev/zero of=/dev/sdb bs=512 count=1
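
If the stale metadata keeps coming back, another option is to clear the md superblocks on the old member partitions directly – a minimal sketch, assuming the leftovers live on /dev/sdb2 and /dev/sdb3 (adjust to whatever /proc/mdstat showed on your machine):

root@[server]:/# mdadm --zero-superblock /dev/sdb2
root@[server]:/# mdadm --zero-superblock /dev/sdb3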

We need to format md0, copy the contents of /boot onto it, and set it up as our /boot mountpoint.

root@[server]:/# mkfs.ext3 /dev/md0
root@[server]:/# mkdir /mnt/tmp
root@[server]:/# mount /dev/md0 /mnt/tmp
root@[server]:/# cp -ax /boot/* /mnt/tmp
root@[server]:/# umount /mnt/tmp
root@[server]:/# rmdir /mnt/tmp

Let’s not forget to edit the fstab, so it will mount /boot from md0 from now on. I commented out the previous /boot entry, just in case…

/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
#UUID=1e2982a5-75b8-4e62-a5dc-7a49226d842f /boot ext3 defaults 0 1
/dev/md0 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
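
If you would rather keep a UUID-based entry, as the installer originally wrote it, blkid will print the UUID of the new array so you can reference that instead of /dev/md0:

root@[server]:/# blkid /dev/md0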

Also, you need to edit /etc/default/mdadm so that the arrays are started at boot and from the initramfs:

-AUTOSTART=false
+AUTOSTART=true
-INITRDSTART='none'
+INITRDSTART='all'
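
As a reader points out in the comments below, it is also worth making sure the arrays are listed in /etc/mdadm/mdadm.conf, so mdadm can auto-assemble them at boot – one way to append them:

root@[server]:/# mdadm --detail --scan >> /etc/mdadm/mdadm.conf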

Now we need to change a few GRUB settings and finally reinstall GRUB on both disks.

root@[server]:/# echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
root@[server]:/# echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
root@[server]:/# echo raid1 >> /etc/modules
root@[server]:/# echo raid1 >> /etc/initramfs-tools/modules
root@[server]:/# grub-install /dev/sda
root@[server]:/# grub-install /dev/sdb
root@[server]:/# update-grub
root@[server]:/# update-initramfs -u
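
If you want to double-check that the raid1 module actually made it into the new initramfs before rebooting, lsinitramfs (part of initramfs-tools on Debian) can list its contents:

root@[server]:/# lsinitramfs /boot/initrd.img-$(uname -r) | grep raid1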

Once you have rebooted the server, you can check whether everything went as expected:

root@[server]:/# mount | grep boot
/dev/md0 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
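
It also doesn't hurt to confirm that both arrays came up after the reboot (they are still degraded at this point, since only the sdb halves are in them):

root@[server]:/# cat /proc/mdstat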

Next, let’s change /dev/sda2 to a RAID partition and add it into the array.

root@[server]:/# sgdisk -t 2:fd00 /dev/sda
root@[server]:/# mdadm --add /dev/md0 /dev/sda2
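
The small /boot mirror should resync within moments; if you want to watch it happen, something like this works:

root@[server]:/# watch cat /proc/mdstat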

Proxmox keeps its root, data, and swap volumes on LVM, so we need to turn /dev/md1 into a physical volume, add it to the pve volume group, and move the data off /dev/sda3.

root@[server]:/# pvcreate /dev/md1
root@[server]:/# vgextend pve /dev/md1
root@[server]:/# pvmove /dev/sda3 /dev/md1
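
pvmove reports its progress at regular intervals; from a second shell you can also keep an eye on how much space is still allocated on the old physical volume (the field selection below is just one possibility):

root@[server]:/# pvs -o pv_name,pv_size,pv_used,pv_free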

Note that this will take a looooooooong time – it took several hours for me! After you have successfully moved the data, you can remove /dev/sda3 from the volume group and add it to the RAID array.

root@[server]:/# vgreduce pve /dev/sda3
root@[server]:/# pvremove /dev/sda3
root@[server]:/# sgdisk -t 3:fd00 /dev/sda
root@[server]:/# mdadm --add /dev/md1 /dev/sda3

Finally, you can check your RAID and should see the two arrays resyncing. That's it – you are now running Proxmox VE on a software RAID!
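
A quick way to verify: /proc/mdstat shows the rebuild progress, and mdadm --detail gives a per-array summary.

root@[server]:/# cat /proc/mdstat
root@[server]:/# mdadm --detail /dev/md0
root@[server]:/# mdadm --detail /dev/md1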

Comments

  1. Thank you very much for writing the article. It really saved me hours of figuring out how to do this in Proxmox VE 3.4.
    However, you forgot to mention that we also need mdadm to auto-assemble RAID after each boot with:
    mdadm -Es >>/etc/mdadm/mdadm.conf
    That's all, the rest is perfect.
    ;)