== ''Howto: Turn your Funtoo machine into a Network Boot Server'' ==
This guide helps explain how to set up a PXE server using in.tftpd and dnsmasq.

This may be useful for installing an operating system on a machine that has no optical drive, or one whose older BIOS doesn't support booting from USB.

This guide will cover the basics of getting your server set up so that clients can boot from the network into a pxelinux/syslinux menu and choose to install or run your preferred distribution, or even install a MS Windows operating system - the possibilities are endless and you are free to use it as you wish! The Funtoo way!
== Dependencies ==
The following packages are required:

* {{Package|net-dns/dnsmasq}}
* {{Package|net-ftp/tftp-hpa}}
* {{Package|sys-boot/syslinux}}

For Windows PXE booting:

* {{Package|net-fs/cifs-utils}}
* {{Package|net-fs/samba}}
* A simple file-serving protocol configured and working properly. Both FTP and HTTP work fine; you can use either one.

{{fancynote| This guide will use System Rescue CD as an example of the PXE boot process. }}

* Download System Rescue CD [http://www.sysresccd.org]

{{fancynote| The following packages are only required if you intend to install Microsoft Windows via network boot (NOT REQUIRED IN THIS HOWTO):}}

* NFS support - kernel configuration: CONFIG_NFS_FS=y or m
== Understanding the PXE/Network Boot process ==
A PXE network boot isn't much different from a traditional boot from your hard drive; in fact, you will probably find the boot loader very similar to what you are already familiar with. In a nutshell, here is what happens: you set your BIOS / boot options to boot from the network. The client obtains an IP address from the PXE server (via DNSMasq), and once the IP address is obtained it looks for the tftp daemon running on the server (tftp-hpa). The DHCP server sends the PXE information to the NIC, which loads a menu that you define in your pxelinux configuration (syslinux), or, depending on your configuration, it may go straight into the OS / installation that you configure.

Sounds easy, huh? For the most part it is very simple to set up. However, if you plan to set up an MS Windows install via the network, it gets a bit more tricky, mainly because MS does not use a case-sensitive file system and requires files to be located using drive letters and backslashes "\" instead of slashes "/". What this requires is a remapping file: with a remapping file, the tftp daemon will remap characters, symbols, and/or drive letters as needed. This is why I recommend tftp-hpa in this guide.

DNSMasq actually provides a tftp server if you want to use it; however, I recommend tftp-hpa, as it allows remapping in the event you ever intend to boot a Windows environment over your network.

This guide will cover the basics of getting your PXE server up and running for a Linux-based client; in this guide we will be using System Rescue CD (which is a Gentoo-based live CD image).

In the event that you want to install Microsoft Windows over the network, you will already have your server configured, and you will only need to make some minor config changes, host your Windows installation files / Preinstallation Environment, and set up your tftp remapping configuration. (This can become a headache if you plan to host several releases of MS Windows over the network, due to conflicting remappings.)

It is also very important to understand that PXE is only responsible for giving you network access up until the operating system is loaded. What this means is that the kernel you are loading will need to have support for the network card in your client(s). You may want to consider a generic kernel that supports several different NICs. For Windows, this means you will probably want to include all NIC drivers in your installation files and ensure that they are loaded during installation.
== Installing and Configuring tftp-hpa (in.tftpd) for serving your network boot files ==
 
Install {{Package|net-ftp/tftp-hpa}} using portage:
 
 
<console>
# ##i##emerge net-ftp/tftp-hpa
</console>
  
Create a directory for your tftp server - this is where your PXE configuration files, and any files that will be accessed directly during the PXE boot process, will be located. You can put it anywhere you have access to; I will be using <code>/tftproot</code>:

<console>
# ##i##mkdir /tftproot
</console>
  
Next, edit your tftp server configuration. Set the path to <tt>/tftproot</tt> or your preferred directory created above. We will also add a remapping file in case you intend to use it later; it will be <code>${INTFTPD_PATH}tftpd.remap</code>. Edit <code>/etc/conf.d/in.tftpd</code>:
<pre>
# Path to server files from
# Depending on your application you may have to change this.
# This is commented out to force you to look at the file!
#INTFTPD_PATH="/var/tftp/"
#INTFTPD_PATH="/tftpboot/"
INTFTPD_PATH="/tftproot/"

# For more options, see in.tftpd(8)
# -R 4096:32767 solves problems with ARC firmware, and obsoletes
# the /proc/sys/net/ipv4/ip_local_port_range hack.
# -s causes $INTFTPD_PATH to be the root of the TFTP tree.
# -l is passed by the init script in addition to these options.
INTFTPD_OPTS="-m ${INTFTPD_PATH}tftpd.remap -R 4096:32767 -s ${INTFTPD_PATH}"
</pre>
  
No need to worry about the contents of the tftpd.remap file for now, but to prevent the daemon from panicking on a missing file, just create an empty one like so:

<console>
# ##i##touch /tftproot/tftpd.remap
</console>
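
If you do later host Windows boot files, the remap file is where you teach in.tftpd to translate Windows-style paths into ones it can find on disk. As a minimal, hedged sketch (see in.tftpd(8) for the rule syntax; adjust to your own layout):

<pre>
# Translate Windows-style backslashes into forward slashes
rg \\ /
</pre>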
  
== Installing and Configuring DNSMasq for DHCP / PXE Booting ==

Install {{Package|net-dns/dnsmasq}} if you don't already have it installed (use the tftp USE flag). Even though we won't be using dnsmasq's built-in tftp server, we still need it to be tftp-aware:

<console>
# ##i##echo "net-dns/dnsmasq tftp" >> /etc/portage/package.use/dnsmasq
# ##i##emerge net-dns/dnsmasq
</console>
DNSMasq is a powerful daemon that can function as a DNS caching server, a DHCP server, a TFTP server, and more. For now we will be focusing on one thing in its configuration: the DHCP server.

The DNSMasq configuration file is located at <code>/etc/dnsmasq.conf</code>. It is a very large file, but there are only 3 options we need for this to work; you can later enable DNS and custom DHCP mappings if needed. Those 3 configuration options are:

<pre>
# The boot filename to hand out to clients; pxelinux.0 is provided by the syslinux package we will configure in the next step
dhcp-boot=pxelinux.0

# Customize this range to suit your network needs
dhcp-range=192.168.0.100,192.168.0.250,72h

# The interface that will be acting as a DHCP server; if you want the DHCP server to run on a different interface, be sure to change this option
interface=eth0
</pre>
  
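Before starting the service, you can optionally have dnsmasq check the syntax of your configuration (the exact wording of the output may vary between versions):

<console>
# ##i##dnsmasq --test
dnsmasq: syntax check OK.
</console>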
== Configuring PXELinux (based on syslinux) ==

Install {{Package|sys-boot/syslinux}}:

<console>
# ##i##emerge sys-boot/syslinux
</console>
  
PXE booting only requires one file that is installed by syslinux; however, you will probably want to use more later on. For now we will use the <code>pxelinux.0</code> file we mentioned earlier while setting up DNSMasq, as well as a basic menu using <code>menu.c32</code> and a graphical menu using <code>vesamenu.c32</code>.

<console>
# ##i##cd /usr/share/syslinux
# ##i##cp menu.c32 vesamenu.c32 pxelinux.0 /tftproot
# ##i##cd /tftproot
</console>
  
PXELinux can boot a different option for each device's MAC address on your network, or it can boot a default for all NICs on the network if a MAC-specific config isn't found. I will be covering the default method, as it works for most simple setups. If you prefer a different boot configuration for each MAC address on your NICs, you can search for "pxelinux.cfg MAC config" and find plenty of documentation for doing so; a rough sketch of the per-client lookup is shown below.
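
For reference, pxelinux searches <code>pxelinux.cfg/</code> for a file named after the client and falls back step by step to <code>default</code>. A rough, illustrative sketch of the names it tries (MAC address lower-case, dash-separated, prefixed with the ARP type <code>01</code>; then the client IP in hex, progressively shortened):

<pre>
pxelinux.cfg/01-88-99-aa-bb-cc-dd    # per-MAC configuration (example MAC)
pxelinux.cfg/C0A80064                # per-IP configuration (192.168.0.100 in hex)
pxelinux.cfg/default                 # fallback used in this guide
</pre>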

To set up the default config, first create the following directory:

<console>
# ##i##mkdir /tftproot/pxelinux.cfg
</console>
  
Inside this directory is where the "default" config, as well as any custom per-MAC configurations, will reside. Here is an example of a graphical menu used to boot System Rescue CD; the file should be located at <code>/tftproot/pxelinux.cfg/default</code>:
  
<pre>
# The default menu style - using the vesa menu in this example
DEFAULT vesamenu.c32
# If you have a png image in the /tftproot directory you can specify it here like so:
MENU BACKGROUND netboot-1.png
# Prompt user for selection
PROMPT 0

# Global label identifier
LABEL sysresccd
        # Set this entry as the default selection
        MENU DEFAULT
        # Actual viewable label text
        MENU LABEL System Rescue CD
        # TIMEOUT is measured in tenths of a second, so 100 would be 10 seconds; 10000 gives the user plenty of time before the default entry boots
        TIMEOUT 10000
        TOTALTIMEOUT 10000
        # The kernel image to load. This entry would actually reside at /tftproot/srcd/isolinux/rescue64; the path is relative to /tftproot (your tftp directory)
        KERNEL srcd/isolinux/rescue64
        # The initrd, relative to the tftproot directory, plus the netboot server, protocol and file.
        # In this example the http protocol is used on server 192.168.0.1 and the file is sysrcd.dat.
        # If your http server hosts files from /var/www/localhost/htdocs, this file would be located in that directory.
        APPEND initrd=srcd/isolinux/initram.igz netboot=http://192.168.0.1/sysrcd.dat
</pre>
  
== Mounting the ISO Image and Hosting the Compressed File System ==

In the above configuration example I was using a System Rescue CD image mounted at <code>/tftproot/srcd</code>. The kernel and initrd are located inside the isolinux directory of the ISO, and the compressed filesystem is located at the top level of the ISO (i.e. /tftproot/srcd/sysrcd.dat).

In order to replicate the exact settings used in this config, you may do the following:

<console>
# ##i##cd /tftproot
# ##i##mkdir srcd
# ##i##mount -o loop /path/to/systemrescuecd.iso srcd/
</console>
Be sure to replace the "/path/to/systemrescuecd.iso" with the actual path you downloaded the System Rescue CD to and the actual filename.
 
  
Now you need to be sure that two files reside on your HTTP or FTP server - whichever you prefer to use for the netboot process is fine. The System Rescue CD netboot process will do 3 things:

# Load the kernel
# Load the initrd
# Request the compressed filesystem from the network

The files needed for the 3rd step are located in the srcd/ directory if you mounted it with the above command. System Rescue CD uses a .dat file for the compressed filesystem, which is verified during boot against an md5sum in the .md5 file in the srcd/ directory. The filenames are sysrcd.dat and sysrcd.md5. They need to be hosted on the fileserver/http server that you specify for the netboot argument in the pxelinux.cfg/default file. If you have a basic Apache/Lighttpd server set up, you can do the following:

<console>
# ##i##ln -s /tftproot/srcd/sysrcd.dat /var/www/localhost/htdocs/
# ##i##ln -s /tftproot/srcd/sysrcd.md5 /var/www/localhost/htdocs/
</console>
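
Before attempting a network boot, you can quickly confirm that the compressed filesystem is reachable over HTTP; for example, using the 192.168.0.1 address from the menu entry above:

<console>
# ##i##wget --spider http://192.168.0.1/sysrcd.dat
</console>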
  
== Starting the services and preparing for use ==  

First we want to start the PXE (tftp) server:

<console>
# ##i##/etc/init.d/in.tftpd start
</console>
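
If you want to verify that the tftp daemon is actually serving files, you can try fetching pxelinux.0 with a tftp client (from the server itself or another machine); for example, using the interactive tftp-hpa client against the server address used earlier:

<console>
# ##i##tftp 192.168.0.1
tftp> ##i##get pxelinux.0
tftp> ##i##quit
</console>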
And now DNSMasq:

<console>
# ##i##/etc/init.d/dnsmasq start
</console>
If you are using Apache, ensure it is running (if you use Lighttpd or Nginx, replace this step with the appropriate service):

<console>
# ##i##/etc/init.d/apache2 status
</console>
If the service is not running, you should start it:

<console>
# ##i##/etc/init.d/apache2 start
</console>
If all your configuration options are correct and you have your HTTP/FTP server running and hosting the files properly, your server-side configuration for hosting ''System Rescue CD'' should be done! Don't get carried away just yet - we still have to test that things are working.
 
  
== Testing your first network boot ==  

The first thing you want to do now is set up your client to boot from the network. This may vary between machines / BIOSes; common methods are:

* Pressing F12 at boot to select the boot method
* Pressing F1, F10, or DEL at boot to enter BIOS setup
* Consulting your motherboard documentation for the appropriate method of selecting the boot device, if the above don't work
You will want to choose the method to boot from the network as the first boot device; it may also be called "Boot From LAN", "Network Boot", or "PXE Boot". Once you have selected the appropriate method you may need to save the settings, then proceed to booting. If you chose the right method you should see some text on your screen, such as: PXE Boot.. Obtaining DHCP.... If all is well you will be presented with your PXELinux boot menu. If your client system is still booting from the hard drive, or you see a failure related to obtaining a DHCP IP address, please verify your settings in the section "Installing and Configuring DNSMasq for DHCP / PXE Booting" above [http://www.funtoo.org/index.php?title=Installing_and_Configuring_DNSMasq_for_DHCP_/_PXE_Booting&action=submit#Installing_and_Configuring_DNSMasq_for_DHCP_.2F_PXE_Booting] - make sure that your interface is set correctly, and that you are offering a DHCP range on the same internal network range as your server's IP address. If you get an error about being unable to find the PXE boot file, please verify that the pxelinux.0 file is present in /tftproot and that your /etc/dnsmasq.conf has the "dhcp-boot=pxelinux.0" configuration option. '''Note that the 0 is a zero, not an o.'''
  
Upon a successful PXE configuration you will be presented with the network boot menu, with the option to boot System Rescue CD. If you have the appropriate files in the correct locations and your http/ftp server is working properly, you should be able to select the System Rescue CD menu entry and successfully boot over the network. Congratulations!
  
== Adding more operating systems / installations to your working PXE setup ==
A lot of people will probably want to use something other than System Rescue CD. The main things have been outlined above and apply to most Linux distributions. MS Windows (see [http://www.funtoo.org/wiki/PXE_Network_Windows_Installation PXE Network Windows Installation]) is quite a bit more difficult than any Linux install; I will try to cover the most important steps to serving a Windows installation from the network soon.
  
If you are wondering how to host a different Linux install other than System Rescue CD, the main things to look at are the kernel and initrd lines in the pxelinux.cfg/default file. You also need to be sure that those files are accessible to the PXE loader, and if your initrd requires a compressed filesystem, be sure that you have a working ftp/http server hosting it. (Remember that once the boot process has been handed over to your kernel, you are no longer accessing the network via tftp, but via the services provided by the initrd plus the drivers in your kernel.) You may also use the fetch=tftp:// protocol on the kernel command line, however it doesn't seem to be as stable as the http/ftp method. Each distro is different, and you may need to consult the documentation for that distro's required boot command line; for the most part you will find it to be very similar (i.e. kernel + initrd + compressed filesystem). Ubuntu doesn't even use a compressed filesystem on their ISOs - it basically just uses a kernel and an initrd. I am currently working on getting a Funtoo netboot image developed and tested, and on providing information on how to host a Funtoo base system over the network via your Funtoo PXE netboot server.
[[Category:HOWTO]]

ZFS Install Guide

(Revision as of 21:59, 11 January 2014)

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerate small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).
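
As a purely illustrative sketch of that zvol approach (the volume name, size, and mountpoint below are made up for the example and are not part of the installation that follows):

# zfs create -V 10G tank/smallfiles
# mkfs.reiserfs /dev/zvol/tank/smallfiles
# mkdir -p /mnt/smallfiles
# mount /dev/zvol/tank/smallfiles /mnt/smallfiles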

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Video Tutorial

As a companion to the install instructions below, a YouTube video ZFS install tutorial is available: https://www.youtube.com/watch?v=kxEdSXwU0ZI

Downloading the ISO (With ZFS)

In order to install Funtoo on ZFS, you will need an environment that provides the ZFS tools, so we will download a customized version of System Rescue CD with ZFS already included. When booting it, use the "alternate" kernel; the ZFS module won't work with the default kernel.

Name: sysresccd-3.8.1_zfs_0.6.2.iso   (510 MB)
Release Date: 2013-11-03
md5sum aa33ef61c5d85ad564372327940498c3


Download System Rescue CD with ZFS: http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/

Creating a bootable USB from ISO

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-3.8.1_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

When you are booting into system rescue cd, make sure you select the alternative 64 bit kernel. ZFS support was specifically added to the alternative 64 bit kernel rather than the standard 64 bit kernel.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will show you how to partition the disk manually, because doing so lets you create your own layout, have a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and boot from RAID10, RAID5 (RAIDZ) pools or any other layout, thanks to that separate /boot partition.

gdisk (GPT Style)

A Fresh Start:

First let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# gdisk /dev/sda

Command: x ↵
Expert command: z ↵
About to wipe out GPT on /dev/sda. Proceed?: y ↵
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Blank out MBR?: y ↵

Warning

This is a destructive operation. Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

Create Partition 1 (boot):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: +250M ↵
Hex Code: ↵

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: ↵
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵


Format your boot volume

Format your separate /boot partition:

# mkfs.ext2 /dev/sda1

Encryption (Optional)

If you want encryption, then create your encrypted vault(s) now by doing the following:

# cryptsetup luksFormat /dev/sda3
# cryptsetup luksOpen /dev/sda3 vault_1

Create the zpool

We will first create the pool. The pool will be named `tank` and the disk will be aligned to 4096 (using ashift=12)

# zpool create -f -o ashift=12 -o cachefile= -O compression=on -m none -R /mnt/funtoo tank /dev/sda3

Important

If you are using encrypted root, change /dev/sda3 to /dev/mapper/vault_1.

Note

ashift=12 should be used if you have a newer, advanced-format disk that has a sector size of 4096 bytes. If you have an older disk with 512-byte sectors, you should use ashift=9, or omit the option and let ZFS auto-detect it.
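
If you are not sure which sector size your drive reports, you can check it before choosing an ashift value; for example, an advanced-format drive will typically report 4096 here, while an older drive reports 512:

# cat /sys/block/sda/queue/physical_block_size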

Note

If you have a previous pool that you would like to import, you can do a: zpool import -f -R /mnt/funtoo <pool_name>

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for your swap. We will also show you how to create some optional datasets: /home, /usr/src, and /usr/portage.

Create some empty containers for organization purposes, and make the dataset that will hold /
# zfs create -p tank/os/funtoo
# zfs create -o mountpoint=/ tank/os/funtoo/root

Optional, but recommended datasets: /home
# zfs create -o mountpoint=/home tank/os/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# zfs create -o mountpoint=/usr/src tank/os/funtoo/src
# zfs create -o mountpoint=/usr/portage -o compression=off tank/os/funtoo/portage
# zfs create -o mountpoint=/usr/portage/distfiles tank/os/funtoo/portage/distfiles
# zfs create -o mountpoint=/usr/portage/packages tank/os/funtoo/portage/packages

Create your swap zvol

Make your swap 1G larger than your RAM, so an 8G machine would have 9G of swap. (This is rather large, though; for a machine with that much memory you could just make it 2G if you don't run into problems.)

# zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G tank/swap

Format your swap zvol

# mkswap -f /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap


Last minute checks and touches

Check to make sure everything appears fine. Your output may differ depending on the choices you made above:

# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          sda3      ONLINE       0     0     0

errors: No known data errors

# zfs list
tank                    3.10G  15.5G   136K  none
tank/os                  308K  15.5G   136K  none
tank/os/funtoo           172K  15.5G   136K  none
tank/os/funtoo/home      136K  15.5G   136K  /mnt/funtoo/home
tank/os/funtoo/root      172K  15.5G   172K  /mnt/funtoo
tank/swap               3.09G  18.6G    76K  -

Now we will continue to install funtoo.

Installing Funtoo

Download and extract the Funtoo stage3 and continue installation as normal.

Then once you've extracted the stage3, chroot into your new funtoo environment:

Go into the directory that you will chroot into
# cd /mnt/funtoo

Mount your boot drive
# mount /dev/sda1 /mnt/funtoo/boot

Bind the kernel related directories
# mount -t proc none /mnt/funtoo/proc
# mount --rbind /dev /mnt/funtoo/dev
# mount --rbind /sys /mnt/funtoo/sys

Copy network settings
# cp /etc/resolv.conf /mnt/funtoo/etc/

chroot into your new funtoo environment
# env -i HOME=/root TERM=$TERM chroot /mnt/funtoo /bin/bash --login

Place your mountpoints into your /etc/mtab file
# cat /proc/mounts > /etc/mtab

Sync your tree
# emerge --sync

Add filesystems to /etc/fstab

Before we continue to compile and/or install our kernel in the next step, we will edit the /etc/fstab file, because if we decide to install our kernel through portage, portage will need to know where your /boot is so that it can place the files there. We also need to update /etc/mtab so our system knows what is mounted.

# nano /etc/fstab

# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>
# Do not add the /boot line below if you are using whole-disk zfs
/dev/sda1               /boot           ext2            defaults        0 2
/dev/zvol/tank/swap     none            swap            sw              0 0

Kernel Configuration

To speed up this step, you can install "bliss-kernel", since it is already properly configured for ZFS and many other common setups. The kernel also comes already compiled and ready to go. To install 'bliss-kernel' type the following:

# emerge bliss-kernel

Now make sure that your /usr/src/linux symlink is pointing to this kernel by typing the following:

# eselect kernel list
Available kernel symlink targets:
[1]   linux-3.10.10-FB.01 *

You should see a star next to the bliss-kernel version you installed. In this case it was 3.10.10-FB.01. If it's not set, you can type eselect kernel set #.
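
For example, to select the first entry from the list above:

# eselect kernel set 1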

Installing the ZFS userspace tools and kernel modules

# emerge -av zfs spl zfs-kmod

(spl = Solaris Porting Layer)

Check to make sure that the zfs tools are working; your pool and datasets should be displayed:

# zpool status
# zfs list

If everything worked, continue.

Install the bootloader

GRUB 2

Before you do this, make sure this checklist is followed:

  • Installed kernel and kernel modules
  • Installed zfs package from the tree
  • /dev, /proc, /sys are mounted in the chroot environment

Once all this is checked, let's install grub2. First we need to enable the "libzfs" use flag so zfs support is compiled for grub2.

# echo "sys-boot/grub libzfs" >> /etc/portage/package.use

Then we will compile grub2:

# emerge -av grub

Once this is done, you can check that grub is version 2.00 by doing the following command:

# grub-install --version
grub-install (GRUB) 2.00

Now try to install grub2:

# grub-install --no-floppy /dev/sda

You should receive the following message

Installation finished. No error reported.

If not, then go back to the above checklist.

LILO

Before you do this, make sure the following checklist is followed:

  • /dev/, /proc and /sys are mounted.
  • Installed the sys-fs/zfs package from the tree.

Once the above requirements are met, LILO can be installed.

Now we will install LILO.

# emerge -av sys-boot/lilo

Once the installation of LILO is complete we will need to edit the lilo.conf file.

# nano /etc/lilo.conf
boot=/dev/sda
prompt
timeout=4
default=Funtoo

image=/boot/bzImage
      label=Funtoo
      read-only
      append="root=tank/os/funtoo/root"
      initrd=/boot/initramfs

All that is left now is to install the bootcode to the MBR.

This can be accomplished by running:

# /sbin/lilo

If it is successful you should see:

Warning: LBA32 addressing assumed
Added Funtoo + *
One warning was issued

Create the initramfs

There are two ways to do this: you can use genkernel, or you can use my Bliss initramfs creator. I will show you both.

genkernel

# emerge -av sys-kernel/genkernel
# You only need to add --luks if you used encryption
# genkernel --zfs --luks initramfs

Bliss Initramfs Creator

If you are encrypting your drives, then add the "luks" use flag to your package.use before emerging:

# echo "sys-kernel/bliss-initramfs luks" >> /etc/portage/package.use

Now install the creator:

# emerge bliss-initramfs


Then go into the install directory, run the script as root, and place it into /boot:

# cd /opt/bliss-initramfs
# ./createInit
# mv initrd-<kernel_name> /boot

<kernel_name> is the name of what you selected in the initramfs creator, and the name of the outputted file.
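
For example, if you built against the bliss-kernel version shown earlier, the move might look like this (the exact file name depends on what the creator produced):

# mv initrd-3.10.10-FB.01 /boot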

Using boot-update

/boot on separate partition

If you created a separate non-zfs partition for boot then configuring boot-update is almost exactly the same as a normal install except that auto detection for root does not work. You must tell boot-update what your root is.

Genkernel

If you're using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Example entry for boot.conf:

"Funtoo ZFS" {
        kernel vmlinuz[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/os/funtoo/root
        params += dozfs=force
        # Also add 'params += crypt_root=/dev/sda3' if you used encryption
        # Adjust the above setting to your system if needed
}

Bliss Initramfs Creator

If you used the Bliss Initramfs Creator then all you need to do is add 'root=<root>' to your params. Example entry for boot.conf:

"Funtoo ZFS" {
        kernel vmlinuz[-v]
        initrd initrd[-v]
        params root=tank/os/funtoo/root quiet
        # If you have an encrypted device with a regular passphrase,
        # you can add the following line
        params += enc_root=/dev/sda3 enc_type=pass
}

After editing /etc/boot.conf, you just need to run boot-update to update grub.cfg

# boot-update

/boot on ZFS

TBC - pending update to boot-update to support this

Final configuration

Add the zfs tools to openrc

# rc-update add zfs boot

Clean up and reboot

We are almost done, we are just going to clean up, set our root password, and unmount whatever we mounted and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot

Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

That should be enough to get your system booting on ZFS.

After reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type (This will only revert your / dataset while keeping the rest of your data intact):

# zfs rollback tank/os/funtoo/root@install

Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.