= ZFS as Root Filesystem =
''Revision as of 01:51, September 1, 2015''

== Introduction ==

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the [[Funtoo_Linux_Installation|Regular Funtoo Installation]]. Follow the normal installation and only use this guide for steps 2, 3, and 8.

== Introduction to ZFS ==

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

* On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
* ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
* ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
* ZFS has the ZFS Intent Log and SLOG devices, which accelerate small synchronous write performance.
* ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
* ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
* ZFS send/receive supports incremental updates when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
* ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
* ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
* ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).
For a quick tour of ZFS and a big picture of its common operations, you can consult the [[ZFS Fun]] page.


=== Disclaimers ===


{{fancywarning|This guide is a work in progress. Expect some quirks.

As of 2015-05-12, ZFS has been upgraded from 0.6.3 to 0.6.4. Please ensure that you use a Rescue CD with ZFS 0.6.3: at present, GRUB 2.02 is not able to deal with the new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only once GRUB is able to deal with the new parameters, in a future version. Otherwise you will not be able to boot into your system, and no rollback will help!

Please inform yourself!}}

{{fancyimportant|'''Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms'''!}}


== Downloading the ISO (With ZFS) ==
In order to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools, so we will download a customized version of System Rescue CD with ZFS included.

<pre>
Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85
</pre>
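Before writing the image to a USB stick, it is worth verifying the download against the md5sum published above. A minimal sketch follows; the helper name <code>verify_iso</code> is ours, and the throwaway demo file stands in for the real ISO:

```shell
# Compare a file's md5sum against an expected hash; succeeds on a match.
# Usage: verify_iso <path> <expected_md5>   (helper name is illustrative)
verify_iso() {
    actual=$(md5sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

# Demo on a throwaway file. For the real check, use the ISO path and the
# md5sum listed above, e.g.:
#   verify_iso /root/sysresccd-4.2.0_zfs_0.6.2.iso 01f4e6929247d54db77ab7be4d156d85
printf 'hello' > /tmp/demo.bin
verify_iso /tmp/demo.bin 5d41402abc4b2a76b9719d911017c592 && echo "checksum OK"
```

If the hashes differ, re-download the image rather than writing it to USB.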


'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br />


== Creating a bootable USB from ISO (From a Linux Environment) ==
After you download the ISO, you can do the following steps to create a bootable USB:

<console>
Make a temporary directory
# ##i##mkdir /tmp/loop

Mount the ISO
# ##i##mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# ##i##/tmp/loop/usb_inst.sh
</console>

That should be all you need to do to get your flash drive working.
== Booting the ISO ==
{{fancywarning|'''When booting into the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "failed to load module stack" error message.'''}}


== Creating partitions ==

There are two ways to partition your disk: you can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will be showing you how to partition the disk manually, because a manual layout lets you have your own separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and lets you boot RAID10, RAID5 (RAIDZ) and other pool layouts thanks to that separate /boot partition.

=== gdisk (GPT Style) ===

'''A Fresh Start''':

First let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that <code>/dev/sda</code> is the target drive.
<console>
# ##i##sgdisk -Z /dev/sda
</console>


{{fancywarning|This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.}}


Now that we have a clean drive, we will create the new layout.
First open up the application:
<console>
# ##i##gdisk /dev/sda
</console>


'''Create Partition 1''' (boot):
<console>
Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##+250M ↵
Hex Code: ##i##↵
</console>

'''Create Partition 2''' (BIOS Boot Partition):
<console>
Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##+32M ↵
Hex Code: ##i##EF02 ↵
</console>

'''Create Partition 3''' (ZFS):
<console>
Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##↵
Hex Code: ##i##bf00 ↵
</console>

Print the partition table and write it to disk:
<console>
Command: ##i##p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: ##i##w ↵
</console>
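The sector numbers gdisk prints can be sanity-checked by hand: with 512-byte sectors, a partition's size in MiB is (last − first + 1) × 512 / 1048576. A quick shell check (the helper name is ours):

```shell
# Size of a partition in MiB from its first and last sector, assuming
# 512-byte sectors: (last - first + 1) * 512 / 1048576.
size_mib() {
    echo $(( ($2 - $1 + 1) * 512 / 1048576 ))
}

size_mib 2048   514047    # /boot partition  -> 250
size_mib 514048 579583    # BIOS boot        -> 32
```

The results match the 250 MiB and 32 MiB sizes requested above.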




=== Format your /boot partition ===

<console>
# ##i##mkfs.ext2 -m 1 /dev/sda1
</console>


=== Create the zpool ===
We will first create the pool. The pool will be named <code>tank</code>; feel free to name your pool as you like. We will use the <code>ashift=12</code> option, which is appropriate for hard drives with a 4096-byte sector size.
<console># ##i##zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3</console>
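<code>ashift</code> is the base-2 logarithm of the sector size, so <code>ashift=12</code> targets 4096-byte sectors while <code>ashift=9</code> would match the 512-byte sectors of older disks. A one-liner confirms the arithmetic:

```shell
# ashift -> sector size: sector_size = 2^ashift
echo $(( 1 << 12 ))    # advanced-format disks (ashift=12)
echo $(( 1 << 9 ))     # older 512-byte-sector disks (ashift=9)
```

Picking an ashift smaller than the drive's physical sector size forces read-modify-write cycles and hurts performance, which is why 12 is the safe choice for modern drives.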


=== Create the zfs datasets ===
We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for swap. We will also show you how to create some optional datasets: <code>/home</code>, <code>/usr/src</code>, and <code>/usr/portage</code>. Note that these datasets are examples only and not strictly required.


<console>
Create some empty containers for organization purposes, and make the dataset that will hold /
# ##i##zfs create -p tank/funtoo
# ##i##zfs create -o mountpoint=/ tank/funtoo/root

Optional, but recommended datasets: /home
# ##i##zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# ##i##zfs create -o mountpoint=/usr/src tank/funtoo/src
# ##i##zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
# ##i##zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
# ##i##zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
</console>
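When you create several similar datasets, a small loop can cut the typing. This sketch only echoes the commands so it is safe to try anywhere; drop the <code>echo</code> to run them for real (it assumes the pool and parent dataset names used above):

```shell
# Preview the zfs create commands for the optional portage sub-datasets.
# Assumes the pool "tank" and the parent dataset tank/funtoo/portage exist.
for sub in distfiles packages; do
    echo zfs create -o mountpoint=/usr/portage/${sub} tank/funtoo/portage/${sub}
done
```

Because dataset properties such as mountpoints are inherited, loops like this keep large layouts consistent.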


=== Create your swap zvol ===
A common rule of thumb is to make your swap 1G larger than your RAM, so an 8G machine would get 9G of swap. That is rather large; on machines with that much memory, 2G of swap is usually enough if you don't run into problems.
<console>
# ##i##zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G tank/swap
</console>

=== Format your swap zvol ===
<console>
# ##i##mkswap -f /dev/zvol/tank/swap
# ##i##swapon /dev/zvol/tank/swap
</console>

== Installing Funtoo ==

=== Pre-Chroot ===

<console>
Go into the directory that you will chroot into
# ##i##cd /mnt/funtoo

Make a boot folder and mount your boot drive
# ##i##mkdir boot
# ##i##mount /dev/sda1 boot
</console>

[[Funtoo_Linux_Installation|Now download and extract the Funtoo stage3 ...]]

{{fancynote|It is strongly recommended to use the current version and the generic64 stage. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the <code>eselect profile set ...</code> command. If you create a snapshot beforehand, you can always come back to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).}}

Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:
<console>
Bind the kernel related directories
# ##i##mount -t proc none proc
# ##i##mount --rbind /dev dev
# ##i##mount --rbind /sys sys

Copy network settings
# ##i##cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# ##i##mkdir etc/zfs
# ##i##cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# ##i##env -i HOME=/root TERM=$TERM chroot . bash -l
</console>

{{fancynote|If no <code>zpool.cache</code> file is available, the following command will create one:
<console>
# ##i##zpool set cachefile=/etc/zfs/zpool.cache tank
</console>}}

{{:Install/PortageTree}}


=== Add filesystems to /etc/fstab ===

Before we continue to compile and/or install our kernel in the next step, we will edit the <code>/etc/fstab</code> file, because if we decide to install our kernel through portage, portage will need to know where our <code>/boot</code> is so that it can place the kernel files there.


Edit <code>/etc/fstab</code>:

{{file|name=/etc/fstab|desc= |body=
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>
# Do not add the /boot line below if you are using whole-disk zfs

/dev/sda1               /boot           ext2            defaults        0 2
/dev/zvol/tank/swap     none            swap            sw              0 0
}}
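A malformed fstab line can drop the boot into a rescue shell, so a quick field count is cheap insurance. This ad-hoc check is our own suggestion, not part of the install; it runs against a copy of the entries above, and you can point the same awk at <code>/etc/fstab</code> inside the chroot:

```shell
# Each non-comment, non-blank fstab line must have exactly six fields.
cat > /tmp/fstab.check <<'EOF'
/dev/sda1               /boot           ext2            defaults        0 2
/dev/zvol/tank/swap     none            swap            sw              0 0
EOF
awk '!/^#/ && NF { if (NF != 6) { print "bad line: " $0; bad=1 } }
     END { if (bad) exit 1; print "fstab fields OK" }' /tmp/fstab.check
```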


== Building kernel, initramfs and grub to work with ZFS ==

=== Install genkernel and initial kernel build ===

We first need an initial kernel built with genkernel:
<console>
# ##i##emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# ##i##genkernel kernel --no-clean --no-mountboot
</console>


=== Installing the ZFS userspace tools and kernel modules ===
Emerge {{Package|sys-fs/zfs}}. This package will bring in {{Package|sys-kernel/spl}} and {{Package|sys-fs/zfs-kmod}} as its dependencies:

<console>
# ##i##emerge zfs
</console>

Check to make sure that the zfs tools are working. The <code>zpool.cache</code> file that you copied before should be displayed:


<console>
# ##i##zpool status
# ##i##zfs list
</console>


{{fancynote|If <code>/etc/mtab</code> is missing, these two commands will complain. In that case, fix it with:
<console>
# ##i##grep -v rootfs /proc/mounts > /etc/mtab
</console>}}

Add the zfs tools to openrc:
<console># ##i##rc-update add zfs boot</console>

If everything worked, continue.

=== Install GRUB 2 ===


Install grub2:

<console>
# ##i##echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# ##i##emerge grub
</console>


Now install grub to the drive itself (not a partition):

<console>
# ##i##grub-install /dev/sda
</console>


=== Initial kernel build ===
Now build the kernel and initramfs with ZFS support:
<console>
# ##i##genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"
</console>


=== Configuring the Bootloader ===

When using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Edit <code>/etc/boot.conf</code>:

{{file|name=/etc/boot.conf|desc= |body=
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}
}}

The command <code>boot-update</code> should take care of the grub configuration:


<console>
Install boot-update (if it is missing):
# ##i##emerge boot-update

Run boot-update to update grub.cfg
# ##i##boot-update
</console>


{{fancynote|If <code>boot-update</code> fails, try this:
<console>
# ##i##grub-mkconfig -o /boot/grub/grub.cfg
</console>}}
Now you should have a new installation of the kernel, initramfs and grub, all ZFS-capable. The configuration files should be updated, and the system should come up during the next reboot.


{{fancynote|The <code>luks</code> integration works basically the same way.}}


== Final configuration ==

=== Clean up and reboot ===
We are almost done. We are just going to clean up, '''set our root password''', unmount whatever we mounted, and get out.
<console>
Turn off the swap
# ##i##swapoff /dev/zvol/tank/swap

Export the zpool
# ##i##cd /
# ##i##zpool export tank

Reboot
# ##i##reboot
</console>


== After reboot ==
=== Forgot to reset password? ===
==== System Rescue CD ====
If you aren't using bliss-initramfs, you can boot back into your sysresccd and reset the password from there by importing your pool, chrooting, and then typing passwd.
Example:
<console>
# ##i##zpool import -f -R /mnt/funtoo tank
# ##i##chroot /mnt/funtoo bash -l
# ##i##passwd
# ##i##exit
# ##i##zpool export -f tank
# ##i##reboot
</console>
=== Create initial ZFS Snapshot ===
Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.


To take the snapshot of your system, type the following:
<console># ##i##zfs snapshot -r tank@install</console>


To see if your snapshot was taken, type:
<console># ##i##zfs list -t snapshot</console>


If your machine ever fails and you need to get back to this state, just type (this will only revert your / dataset while keeping the rest of your data intact):
<console># ##i##zfs rollback tank/funtoo/root@install</console>
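Since snapshots are cheap, you may also want periodic, date-stamped snapshots alongside the one-off <code>@install</code> tag. A naming sketch follows; the <code>backup-</code> prefix scheme is just a suggestion, and the command is only echoed here (drop the <code>echo</code> on a real system):

```shell
# Compose a recursive snapshot name stamped with today's date.
SNAP="tank@backup-$(date +%Y%m%d)"
echo zfs snapshot -r "$SNAP"
```

Date-stamped names sort chronologically in <code>zfs list -t snapshot</code>, which makes pruning old snapshots straightforward.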


{{fancyimportant|'''For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the [[ZFS_Fun|ZFS Fun]] page.'''}}
== Troubleshooting ==
=== Starting from scratch ===
If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:
<console>
Destroy the pool and any snapshots and datasets it has
# ##i##zpool destroy -f tank

This deletes the files from /dev/sda1 so that even after we zap, recreating the drive in the exact sector
position and size will not give us access to the old files in this partition.
# ##i##mkfs.ext2 /dev/sda1
# ##i##sgdisk -Z /dev/sda
</console>
Now start the guide again :).
=== Starting again reusing the same disk partitions and the same pool ===
If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as done before.
<console>
Import the pool, reusing all existing datasets:
# ##i##zpool import -f -R /mnt/funtoo tank
</console>
Now you should wipe the previous installation off:
<console>
Let's go to our base installation directory:
# ##i##cd /mnt/funtoo
and delete the old installation:
# ##i##rm -rf *
</console>
Now start the guide again, at "Pre-Chroot".


[[Category:HOWTO]]
[[Category:Filesystems]]
[[Category:Featured]]
[[Category:Install]]


__NOTITLE__

Revision as of 01:51, September 1, 2015

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Last Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).

For a quick tour of ZFS and have a big picture of its common operations you can consult the page ZFS Fun.

Disclaimers

   Warning

This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undertaken an upgrade - from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3. At present date grub 2.02 is not able to deal with those new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatability mode.

You should upgrade an existing pool only when grub is able to deal with - in a future version ... If not, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!

   Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Downloading the ISO (With ZFS)

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85


Download System Rescue CD with ZFS

Creating a bootable USB from ISO (From a Linux Environment)

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
root # mkdir /tmp/loop

Mount the iso
root # mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
root # /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

   Warning

When booting into the ISO, Make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a fail to load module stack error message.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will be showing you how to partition it manually because if you partition it manually you get to create your own layout, you get to have your own separate /boot partition (Which is nice since not every bootloader supports booting from ZFS pools), and you get to boot into RAID10, RAID5 (RAIDZ) pools and any other layouts due to you having a separate /boot partition.

gdisk (GPT Style)

A Fresh Start:

First lets make sure that the disk is completely wiped from any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

root # sgdisk -Z /dev/sda
   Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

First open up the application:

root # gdisk /dev/sda

Create Partition 1 (boot):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +250M ↵
Hex Code: 

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: 
First sector: 
Last sector: 
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵


Format your /boot partition

root # mkfs.ext2 -m 1 /dev/sda1

Create the zpool

We will first create the pool. The pool will be named tank. Feel free to name your pool as you want. We will use ashift=12 option which is used for a hard drives with a 4096 sector size.

root #   zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3 

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets: one for the OS (/) and one for swap. We will also show you how to create some optional datasets: /home, /usr/src, and /usr/portage. Note that these datasets are examples only and not strictly required.

Create some empty containers for organization purposes, and make the dataset that will hold /
root #  zfs create -p tank/funtoo
root #  zfs create -o mountpoint=/ tank/funtoo/root

Optional, but recommended datasets: /home
root #  zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
root #  zfs create -o mountpoint=/usr/src tank/funtoo/src
root #  zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
root #  zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
root #  zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
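
The dataset list above mentions swap, and the cleanup section later runs swapoff on /dev/zvol/tank/swap. A sketch of creating such a swap zvol (the 2G size and the property choices are illustrative, not prescribed by this guide):

```shell
# Create a 2 GiB zvol for swap; a 4K block size and metadata-only
# caching are common recommendations for swap on ZFS.
zfs create -V 2G -b 4K -o primarycache=metadata tank/swap
mkswap -f /dev/zvol/tank/swap
swapon /dev/zvol/tank/swap
```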

Installing Funtoo

Pre-Chroot

Go into the directory that you will chroot into
root # cd /mnt/funtoo

Make a boot folder and mount your boot drive
root # mkdir boot
root # mount /dev/sda1 boot

Now download and extract the Funtoo stage3 ...


   Note

It is strongly recommended to use the current version and the generic64 stage. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the eselect profile set ... command. If you create a snapshot beforehand, you can always return to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).


Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

Bind the kernel related directories
root # mount -t proc none proc
root # mount --rbind /dev dev
root # mount --rbind /sys sys

Copy network settings
root # cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
root # mkdir etc/zfs
root # cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
root # env -i HOME=/root TERM=$TERM chroot . bash -l
   Note

How to create zpool.cache file?

If no zpool.cache file is available, the following command will create one:

root # zpool set cachefile=/etc/zfs/zpool.cache tank

Install/PortageTree

Add filesystems to /etc/fstab

Before we continue to compile and/or install our kernel in the next step, we will edit the /etc/fstab file. If we decide to install our kernel through portage, portage will need to know where our /boot is, so that it can place the files there.

Edit /etc/fstab:

   /etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1               /boot           ext2            defaults        0 2
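
If you created a swap zvol earlier, a matching entry can go in the same file (a config fragment; tank/swap is the example zvol name used elsewhere in this guide):

```
/dev/zvol/tank/swap     none            swap            sw              0 0
```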

Building kernel, initramfs and grub to work with zfs

Install genkernel and initial kernel build

We need to build a genkernel initially:

root # emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
root # genkernel kernel --no-clean --no-mountboot 

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

root # emerge zfs

Check to make sure that the zfs tools are working. The zpool.cache file that you copied before should be displayed.

root # zpool status
root # zfs list
   Note

If /etc/mtab is missing, these two commands will complain. In that case, fix it with:

root # grep -v rootfs /proc/mounts > /etc/mtab

Add the zfs tools to openrc.

root # rc-update add zfs boot

If everything worked, continue.

Install GRUB 2

Install grub2:

root # echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
root # emerge grub

Now install grub to the drive itself (not a partition):

root # grub-install /dev/sda

Initial kernel build

Now build the kernel and initramfs with --zfs:

root # genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"

Configuring the Bootloader

When using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your kernel parameters. Edit /etc/boot.conf:

   /etc/boot.conf
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}

The command boot-update should take care of grub configuration:

Install boot-update (if it is missing):
root # emerge boot-update

Run boot-update to update grub.cfg
root # boot-update
   Note

If boot-update fails, try this:

root # grub-mkconfig -o /boot/grub/grub.cfg

Now you should have a new installation of the kernel, initramfs and grub which are zfs capable. The configuration files should be updated, and the system should come up during the next reboot.
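
As a quick sanity check before rebooting, you can confirm that the generated grub.cfg actually passes the ZFS root dataset (assuming the pool and dataset names used in this guide):

```shell
# The generated GRUB config should reference the ZFS root dataset.
grep 'real_root=ZFS=tank/funtoo/root' /boot/grub/grub.cfg
```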

   Note

The LUKS integration works basically the same way.

Final configuration

Clean up and reboot

We are almost done. We will clean up, set our root password, unmount whatever we mounted, and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
root # cd /
root # rm stage3-latest.tar.xz

Set your root password
root # passwd
>> Enter your password; you won't see what you are typing (for security reasons), but it is there!

Get out of the chroot environment
root # exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
root # umount -l proc dev sys boot

Turn off the swap
root # swapoff /dev/zvol/tank/swap

Export the zpool
root # cd /
root # zpool export tank

Reboot
root # reboot
   Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

That should be enough to get your system to boot on ZFS.

After reboot

Forgot to reset password?

System Rescue CD

If you aren't using bliss-initramfs, you can reboot back into your sysresccd and reset it there by mounting your drive, chrooting, and then typing passwd.

Example:

root # zpool import -f -R /mnt/funtoo tank
root # chroot /mnt/funtoo bash -l
root # passwd
root # exit
root # zpool export -f tank
root # reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will use this snapshot to revert back to this state if anything ever happens to your system down the road. Snapshots are cheap and almost instant.

To take the snapshot of your system, type the following:

root # zfs snapshot -r tank@install

To see if your snapshot was taken, type:

root # zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type the following (this will only revert your / dataset, keeping the rest of your data intact):

root # zfs rollback tank/funtoo/root@install
   Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.

Troubleshooting

Starting from scratch

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

Destroy the pool and any snapshots and datasets it has
root # zpool destroy -R -f tank

This reformats /dev/sda1 so that, even after zapping the partition table, recreating a partition
at the exact same sector position and size will not give access to the old files in this partition.
root # mkfs.ext2 /dev/sda1
root # sgdisk -Z /dev/sda

Now start the guide again :).


Starting again reusing the same disk partitions and the same pool

If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as before.

Import the pool, reusing all existing datasets:
root # zpool import -f -R /mnt/funtoo tank

Now wipe the previous installation:

Go to the base installation directory:
root # cd /mnt/funtoo

Delete the old installation: 
root # rm -rf *

Now start the guide again at "Pre-Chroot".