Installing CentOS on a ZFS root filesystem on a virtual server with one disk


When installing or updating a CentOS system that uses kmod-zfs, also update your zfs.repo and make sure you are using the correct version. Using zfs-dkms instead of kmod-zfs is more failure-proof.

Do NOT run "zpool upgrade rpool", as it can leave the system unbootable.

Using the installer we will make two partitions.

The first partition will be reserved for the ZFS pool (mounted on /mnt/for-zfs and formatted as XFS, because the installer does not support ZFS).

The second partition will be the root partition, formatted with the ext4 filesystem.

After installation we will create a ZFS pool on the first partition, install the system on it, reboot, delete both partitions, and create a new partition of maximal size (this is equivalent to removing the second partition and growing the first one).

This method of installation is useful when installing on a VPS.

Most providers sell VPS services with the system installed on one partition.

Some of them allow users to use VNC in order to make a custom install, but still provide only one virtual disk per VPS.

Please note that VNC is not a secure protocol. If you want protection against traffic sniffing, you can buy another VPS in the same datacenter and use it as a secure proxy server (you may implement this with ssh).
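One way to implement such an ssh-based proxy is a SOCKS tunnel configured in ~/.ssh/config on your workstation (a sketch; the host name, user and port below are placeholders):

```
# ~/.ssh/config on your workstation
Host proxyvps
    HostName proxy.example.com     # the second VPS, in the same datacenter
    User root
    DynamicForward 1080            # local SOCKS5 proxy on localhost:1080

# Then run "ssh proxyvps" and point the VNC client
# at the SOCKS proxy localhost:1080.
```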

If your virtual server is exposed to the Internet, you should set a strong root password during installation, because sshd is enabled by default.

It is good practice to disable password login and enable key-based login instead.

If you don't want to read about unsuccessful login attempts in /var/log/secure, change the default SSH port (and don't forget to change your SELinux and firewall settings - you don't want to be cut off from your own VPS).
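One way to script these sshd changes (a sketch only: port 2222 is an arbitrary example, and the edit is demonstrated on a generated sample file - on a real system you would edit /etc/ssh/sshd_config and review it before restarting sshd):

```shell
# Demonstrated on a sample file; on a real system edit /etc/ssh/sshd_config.
printf '%s\n' '#Port 22' 'PasswordAuthentication yes' \
              '#PubkeyAuthentication yes' > /tmp/sshd_config.sample

# Change the port (2222 is an arbitrary example) and switch to key-only login:
sed -i -e 's/^#\?Port .*/Port 2222/' \
       -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' \
       -e 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication yes/' \
       /tmp/sshd_config.sample
cat /tmp/sshd_config.sample

# On the real VPS, also tell SELinux and the firewall about the new port:
#   semanage port -a -t ssh_port_t -p tcp 2222
#   firewall-cmd --permanent --add-port=2222/tcp && firewall-cmd --reload
#   systemctl restart sshd
```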

Here is the transcript of the commands used in the video:

Assuming we have the system installed on the second partition and the first partition mounted on /mnt/for-zfs/.

$ ssh root@

Updating the system and installing mc and rsync:

# yum update -y ; yum install mc rsync -y

Checking partitions:

# ls /dev/disk/by-id/* -la

# fdisk -l

# mount | grep zfs

Unmounting the partition we will use for ZFS:

# umount /mnt/for-zfs/

Removing the above mentioned partition from fstab:

# mcedit /etc/fstab

Installing ZFS:

# yum install -y

# gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

Disabling the zfs repo, enabling the zfs-kmod repo:

# mcedit /etc/yum.repos.d/zfs.repo
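After the edit, the relevant lines of /etc/yum.repos.d/zfs.repo should look roughly like this (a sketch - the file shipped by zfs-release differs between releases, only the enabled= lines are changed, and "..." marks omitted lines):

```
[zfs]
name=ZFS on Linux for EL7
...
enabled=0

[zfs-kmod]
name=ZFS on Linux for EL7 (kmod)
...
enabled=1
```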



Installing the packages:

# yum install zfs zfs-dracut -y

Loading the zfs kernel module:

# modprobe zfs

Creating the pool on the first partition:

# zpool create -d -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled -o ashift=12 -O compression=lz4 -O copies=2 -O acltype=posixacl -O xattr=sa -O utf8only=on -O atime=off -O relatime=on rpool /dev/disk/by-id/ata-VBOX_HARDDISK_VBf518d145-451b386d-part1

It is important to set "xattr=sa" when using "acltype=posixacl", because the ACLs are then stored as system attributes and work faster.

Because we use only one partition, we set "copies=2" (each block will be stored twice). "relatime=on" takes effect only when "atime=on"; we set it just in case atime is re-enabled later.

Checking the status of pools:

# zpool status

Creating the root filesystem:

# zfs create rpool/ROOT

Creating a tmp directory and mounting / to it:

# mkdir /mnt/tmp
# mount --bind / /mnt/tmp

Copying the content of / to the new root filesystem on the ZFS pool:

# rsync -avPX /mnt/tmp/. /rpool/ROOT/.

Unmounting / from the tmp directory:

# umount /mnt/tmp

Removing / from the fstab on the new root filesystem:

# mcedit /rpool/ROOT/etc/fstab 
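If you prefer a non-interactive edit, something like this works (demonstrated on a generated sample file; the UUID below is made up, and on the real system the target file is /rpool/ROOT/etc/fstab):

```shell
# Build a sample fstab with an old ext4 root entry plus an unrelated line:
printf '%s\n' 'UUID=0000-fake /    ext4  defaults 0 0' \
              'tmpfs          /tmp tmpfs defaults 0 0' > /tmp/fstab.sample

# Comment out the ext4 entry mounted on / so it is ignored on the ZFS root:
sed -i 's|^\([^#].*[[:space:]]/[[:space:]].*ext4.*\)|#\1|' /tmp/fstab.sample
cat /tmp/fstab.sample
```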

Creating symlinks in /dev to the partitions (needed by grub):

# cd /dev/
# ln -s /dev/disk/by-id/* . -i

Bind-mounting proc, sys and dev into the new root filesystem:

# for dir in proc sys dev;do mount --bind /$dir /rpool/ROOT/$dir;done

Chroot-ing to the new root filesystem:

# chroot /rpool/ROOT/

Creating Grub's config:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Removing zpool.cache from the new root filesystem:

# rm /etc/zfs/zpool.cache

Creating /boot/initramfs-3.10.0-514.el7.x86_64.img (the second element from the Grub menu):

# dracut -f -v /boot/initramfs-$(uname -r).img $(uname -r)

Creating /boot/initramfs-3.10.0-514.10.2.el7.x86_64.img (the first element from the Grub menu):

# dracut -f -v /boot/initramfs-3.10.0-514.10.2.el7.x86_64.img 3.10.0-514.10.2.el7.x86_64

The above is needed so that the system is bootable when the first (default) menu entry is selected.

The system will not boot if only "/boot/initramfs-3.10.0-514.el7.x86_64.img" is created and the first menu entry is selected.

Installing the Grub on /dev/sda:

# grub2-install --boot-directory=/boot /dev/sda

Exiting from the new root:

# exit

Unmounting proc, sys and dev from the new root filesystem:

# for dir in proc sys dev;do umount /rpool/ROOT/$dir;done

Rebooting the system:

# reboot

Login again:

$ ssh root@

Deleting the second partition (the old root), deleting the first partition, and creating one big partition. The new partition must start where the previous first partition started - at sector 2048:

# fdisk /dev/sda
   d          (delete a partition)
   2          (the second partition - the old root)
   d          (deletes the remaining first partition)
   n          (new partition)
   p          (primary - the default)
   1          (partition number 1 - the default)
   2048       (first sector - the default)
   <Enter>    (last sector - the default, the end of the disk)
   w          (write the partition table and exit)


# reboot

Login again:

$ ssh root@

Checking that we are inside the ZFS root filesystem:

# df -h

Checking the status of the ZFS pool and ZFS filesystems:

# zpool status
# zpool list
# zfs list

Expanding the pool:

# zpool online -e rpool ata-VBOX_HARDDISK_VBf518d145-451b386d-part1

Checking that the filesystem is expanded:

# df -h
# zpool list
# zfs list

Rebooting to test that everything still works:

# reboot

Login again:

$ ssh root@


  1. After following these instructions on CentOS 7.4 (updated to use the 7.4 package from, and with the appropriate kernel version for the second dracut command), booting fails. I'm dropped to a dracut:/# prompt with a message saying "cannot import 'rpool': pool was previously in use from another system". If I do "zpool import -f rpool" and reboot, the system boots fine, but the next time I boot I need to do it again. Any thoughts on what I should check?

  2. With reference to my previous comment, I was able to avoid it by creating /etc/hostid (dd if=/dev/urandom of=/etc/hostid bs=4 count=1). Now, though, after the fdisk step, the pool won't import at all--status is FAULTED with "corrupted data". It had worked before I created the hostid file. Strange.

  3. I think there is a better guide on github; this looks like a simple copy/paste with missing parts, quite a lot actually.

    1. This article has "missing parts" intentionally, because the goal was to not include any non-relevant information - just simple instructions for one particular use case.

      I also made some corrections to the guide on github.

      The "missing parts" are not needed for this particular use case, as you see on the video.

  4. Hi everyone, maybe this will be useful to someone. If you lost your pool after `zpool export rpool`, try importing it with `zpool import -d /dev/disk/by-id -R /mnt/zfs rpool`. Set the right mountpoint right after you create the zpool, e.g. `zfs create -o mountpoint=legacy zroot && zfs create -o mountpoint=/ zroot/default`, and make the root dataset bootable with `zpool set bootfs=zroot/default zroot`; after a reboot, zroot/default is automatically mounted properly on the root directory. I ran into a lot of problems in UEFI mode, and the tips above helped me solve them. This guide is very cool, thank you author for this article.

