Ubuntu 16.04 (Xenial) ZFS native root install

With the recent announcement that Ubuntu 16.04 (Xenial) natively supports ZFS, I set about playing with the (pre)Beta-1 to find out how to install to a native ZFS root file system. Below are the notes that I created for myself and have used on about a dozen VMs and half a dozen physical boxes.

Word to the wise: when using VMs, make sure to use a disk type for which you can specify a SCSI ID, i.e. a SCSI disk in KVM rather than VirtIO, which doesn't support IDs.

I did all of my installs by booting into the Xubuntu (my Ubuntu variant of choice) live / demo mode and opened a terminal to do the install.

These directions are literally an adaptation of notes to myself. I've put some additional formatting and descriptions on this page to make it easier for others to follow. If you find any problems with this page, please click the eMail Grant link in the footer and tell me about it.

I like to have some clear visual indication that I'm working in the live boot environment.

export PS1="(LIVE) $PS1"

Install the minimum number of utilities needed to bootstrap the install. I use Sendmail as my MTA of choice, adjust to your liking.

apt-get update -y && apt-get install -y zfsutils-linux debootstrap vim sendmail

Load the ZFS kernel module.

modprobe zfs

Create the /etc/udev/rules.d/90-zfs.rules file to cause udev to create the sym-links needed by GRUB.

KERNEL=="sd*[!0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
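If you'd rather create the file in one step from the terminal, a heredoc works; note the quoted EOF marker, which keeps udev's $env{...} expressions from being expanded by the shell. This is just shorthand for typing the two rules above into an editor:

```shell
# Write the udev rules in one shot. The quoted 'EOF' prevents the
# shell from expanding $env{...}; udev must see it literally.
cat > /etc/udev/rules.d/90-zfs.rules <<'EOF'
KERNEL=="sd*[!0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*[0-9]", IMPORT{parent}=="ID_*", SYMLINK+="$env{ID_BUS}-$env{ID_SERIAL}-part%n"
EOF
```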

Kick udev in the pants to create the new links based on the new 90-zfs.rules file.

udevadm trigger

Make sure that the new sym-links appear. Note: You should have similar links in /dev to those found in /dev/disk/by-id.

ls -l /dev/disk/by-id /dev

Use sgdisk to create a new GPT partition table and three new partitions.

sgdisk -Z -n9:-8M:0 -t9:bf07 -c9:Reserved -n2:-8M:0 -t2:ef02 -c2:GRUB -n1:0:0 -t1:bf01 -c1:ZFS /dev/sda
sgdisk -Z -n9:-8M:0 -t9:bf07 -c9:Reserved -n2:-8M:0 -t2:ef02 -c2:GRUB -n1:0:0 -t1:bf01 -c1:ZFS /dev/sdb

The above commands create the following partitions, in the following order:

  1. Partition #9 is created at the end of the disk and is 8 MB * in size. The partition uses the special "bf07" hex type code to indicate that it's "Reserved". (This was copied from what zpool creates when using whole disks.) Similarly, the partition name is set to "Reserved".
  2. Partition #2 is created in the last available 8 MB * of the disk, just before partition #9. Partition #2 uses the special "ef02" hex type code to indicate that it's a GRUB boot partition. Similarly partition #2's name is set to "GRUB".
  3. Finally, the rest of the available space on the disk is assigned to partition #1, which is used by ZFS. Partition #1 uses the "bf01" hex type code (again copied from ZFS whole disk). As you would expect, partition #1's name is set to "ZFS".

* sgdisk will expand the partition enough to be properly aligned.

Note: GRUB only needs enough space to hold the core.img file, which is currently < 1 MB. However drive space is plentiful, so I'm allocating 8 MB.
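Before creating the pool, it doesn't hurt to print the partition tables back out and check the result. This verification step is my addition to the original notes:

```shell
# Print the GPT layout: expect partition 1 (ZFS, bf01) covering most
# of the disk, with partition 2 (GRUB, ef02) and partition 9
# (Reserved, bf07) tucked in at the end.
sgdisk -p /dev/sda
sgdisk -p /dev/sdb
```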

Now it's time to create the actual zpool.

zpool create -m none -o ashift=12 -O compression=lz4 rpool mirror /dev/sda1 /dev/sdb1

Here's what the pertinent parts of the zpool command are doing:

rpool
I'm naming my pool "rpool" for the root pool.
Note: You can name your pool anything you want.
-m none
Set the pool mount point to none. - I'll set mount points for individual file systems later.
-o ashift=12
Set the alignment shift.
-O compression=lz4
Enable LZ4 compression.
Other documentation says that GRUB doesn't support compression, but my tests show that it does.
mirror /dev/sda1 /dev/sdb1
Create a mirrored VDEV made up of /dev/sda1 and /dev/sdb1
Note: I used the standard /dev/sd* names for ease of typing. We will export / import the pool later to switch to the preferred longer names.
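Before moving on, you can confirm the pool came up with the intended settings. This verification step is my addition and uses standard zpool / zfs query commands:

```shell
# Confirm both mirror members are ONLINE and the pool is healthy.
zpool status rpool

# Confirm the alignment shift set at creation time.
zpool get ashift rpool

# Confirm compression is set and no mountpoint is assigned yet.
zfs get compression,mountpoint rpool
```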

Create the root file system.

zfs create -o mountpoint=/ rpool/root
-o mountpoint=/
Set the mountpoint to / (root) for the new file system.
rpool/root
Create the "root" file system in the "rpool" pool.
Note: You can name your file system anything you want. I chose to simply use "root" and not bother with the structure that Solaris uses.

Now export the pool so that we can re-import it on the proper device names.

zpool export rpool

Now re-import the pool using the longer device names.

zpool import -d /dev/disk/by-id -R /mnt rpool
-d /dev/disk/by-id
ZFS prefers to use the longer device names found in /dev/disk/by-id as they are more stable and much less likely to change than /dev/sd* names.
-R /mnt
Set a temporary root for the pool.
Note: This does not change what we set the mountpoint to at pool / file system creation time.
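A quick way to see the altroot in action (my addition): the dataset still reports mountpoint=/, but the pool's altroot prefixes it, so everything actually lands under /mnt.

```shell
# The altroot property reflects the -R /mnt we imported with.
zpool get altroot rpool

# rpool/root still shows mountpoint=/; combined with the altroot,
# it is actually mounted at /mnt.
zfs get mountpoint,mounted rpool/root
```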

Now it's time to debootstrap the target. Note: This will take a good while, so it's a good thing to start before your coffee break.

debootstrap xenial /mnt

Now copy the apt sources list from the live environment into the new environment.

cp /etc/apt/sources.list /mnt/etc/apt/sources.list

Note: I actually cheat and leverage some Bash expansions to make some of these steps easier.

# cp {,/mnt}/etc/apt/sources.list

This makes a number of the commands that work with the live environment and chroot environment more friendly to type.
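If the brace-expansion shorthand is new to you, here is what the shell actually expands it to:

```shell
# Brace expansion duplicates the path once per alternative: the empty
# first alternative yields the original path, the second prefixes /mnt.
echo cp {,/mnt}/etc/apt/sources.list
# → cp /etc/apt/sources.list /mnt/etc/apt/sources.list
```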

Update the /mnt/etc/apt/sources.list file and remark / remove the CD-ROM listing.

vim /mnt/etc/apt/sources.list

Copy the custom 90-zfs.rules file from the live environment to the chroot environment.

cp /etc/udev/rules.d/90-zfs.rules /mnt/etc/udev/rules.d/90-zfs.rules

Prepare and enter the chroot environment.

mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash --login

Note: I think these commands really benefit from the Bash expansions "mount --rbind {,/mnt}/dev". You can simply hit the up arrow and change the last directory rather than having to make multiple edits.
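The same three mounts can also be written as a loop, if you prefer that to up-arrow editing. This is equivalent to the three mount commands above:

```shell
# Bind-mount the kernel virtual filesystems into the chroot target.
for fs in dev proc sys; do
    mount --rbind "/$fs" "/mnt/$fs"
done
```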

Set up some clear visual indication that I'm working in the chroot environment.

export PS1="(CHROOT) $PS1"

Set up some locale files to keep things happy.

locale-gen en_US.UTF-8

Note: You may want to install additional locales to suit your preferences, but make sure you have at least en_US.UTF-8 to keep things happy.

Now it's time to install the longer list of packages to make the chroot environment able to support ZFS itself.

apt-get update -y && apt-get install -y zfsutils-linux zfs-initramfs zfs-dkms zfs-zed linux-image-$(uname -r) linux-image-extra-$(uname -r) linux-headers-$(uname -r) grub2-common grub-pc acpi-support vim sendmail

Go CICO while things install...

apt-get update -y
Update the chroot environment's package lists without prompting.
apt-get install -y
Install the packages without prompting.
zfsutils-linux zfs-initramfs zfs-dkms zfs-zed
These packages are required for ZFS support.
linux-image-$(uname -r) linux-image-extra-$(uname -r) linux-headers-$(uname -r)
Typical Linux packages. linux-image-extra is required for USB keyboard support.
Note: I had an issue where the versions for my running kernel $(uname -r) couldn't be found. If this happens to you, point a web browser to your favorite local package repo and find the current version and substitute that in place of $(uname -r).
grub2-common grub-pc
GRUB will likely prompt you for the device(s) to install onto. Please select your two boot devices.
Note: GRUB will likely fail to install the boot sector. (This is expected and fixed later.)
acpi-support
I like for my machines to respond and shut down when I press the power button.
vim sendmail
My editor and MTA of choice; adjust to your liking.
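If the packages matching $(uname -r) can't be found, one command-line alternative to the web-browser approach in the note above (this is my addition) is to ask apt which versioned kernel packages the generic meta-package currently pulls in, and install those instead:

```shell
# Ask apt which versioned kernel image the meta-package currently
# depends on, then substitute that version for $(uname -r).
apt-cache depends linux-image-generic | awk '/Depends: linux-image-/ {print $2}'
```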

If all went well (including the expected GRUB error), you should have most of a working install.

Check to make sure the GRUB modules for ZFS got installed.

find /boot -iname '*zfs*'

If the ZFS modules are missing (i.e. if you forgot to select devices), do a grub-install.
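In that case, a manual grub-install from inside the chroot looks like this (a sketch; the boot-sector write may still error, which we fix from outside the chroot later):

```shell
# Inside the chroot: install GRUB to both disks so the ZFS modules
# get copied into /boot/grub. Boot-sector errors here are still
# expected at this stage.
grub-install /dev/sda
grub-install /dev/sdb
```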

Set root's password

passwd root

Exit the chroot environment.

exit

Check to make sure that GRUB can identify that the root file system type is ZFS.

grub-probe /mnt

Run grub-install to actually install the boot sectors.

grub-install --root-directory=/mnt /dev/sda
grub-install --root-directory=/mnt /dev/sdb

Note: Both grub-install commands should report the following:

Installing for $ARCH platform.
Installation finished. No error reported.

Now it's time to reboot the machine. Note: You may need to force (-f) to deal with /sys mount issues.

shutdown -rf now

GRUB will likely fail to find its config files and fall back to a "grub rescue >" prompt. Here's what you need to do to boot the system.

linux (hd0,gpt1)/root/@/boot/vmlinuz-4.4.0-9-generic root=ZFS=rpool/root ro boot=zfs
initrd (hd0,gpt1)/root/@/boot/initrd.img-4.4.0-9-generic
boot

Update the device, partitions, kernel version, pool name, and file system if necessary.
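If you don't remember the exact partition or kernel file names, GRUB's own ls command will show them. These are GRUB console commands, not shell; the device name is from my system and will vary:

```
ls
ls (hd0,gpt1)/root/@/boot
```

The first lists the disks and partitions GRUB can see; the second lists the kernels and initrds in the ZFS dataset.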

If all goes well, you will boot further. However, you may get dropped to an initramfs prompt, because we didn't cleanly unmount & export the pool before rebooting. (This is expected.) Force-import the pool without mounting, then exit the initramfs shell to continue booting.

zpool import -f -N rpool
exit

Again, update the pool name if necessary.

You should now be booted into your new Xenial install on a native ZFS root.

Log in and edit the /etc/default/grub file

  1. Add "boot=zfs" to GRUB_CMDLINE_LINUX.
  2. Optionally remark GRUB_CMDLINE_LINUX_DEFAULT.
  3. Optionally add GRUB_DISABLE_OS_PROBER="true" to silence spurious grub-probe errors.
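For reference, after those edits the pertinent lines of my /etc/default/grub look like this (your default kernel options may differ):

```
GRUB_CMDLINE_LINUX="boot=zfs"
#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_DISABLE_OS_PROBER="true"
```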

Run grub-probe and make sure that it returns ZFS.

grub-probe /

Update grub to associate everything properly.

update-grub
If you didn't add GRUB_DISABLE_OS_PROBER above, ignore the spurious output about UFS.

Reboot again to make sure everything works properly.

shutdown -r now

The system should have rebooted and come up cleanly. Go ahead and log in and continue configuring it like you normally would a debootstrap install.