Install ZFS root&boot File System

Revision as of 03:44, September 14, 2015

0. Introduction

ZFS seems to be one of the filesystems of the future. At the time of writing (31.08.2015), booting from ZFS is still a headache. Some Linux distributions boot from an ext2 partition and run ZFS as rootfs (as in the ZFS_Install_Guide), some are able to boot directly from ZFS, and some support ZFS only as a data filesystem.

Funtoo Linux supports ZFS as a filesystem and as rootfs; however, it does not support it as the boot/grub filesystem. For easier discovery by search engines, this setup is referred to in this document as whole disk ZFS.

Funtoo Linux uses the GRUB bootloader, so it was never really understandable why whole disk ZFS could not be supported by Funtoo, since GRUB is able to handle it. Or better: some Linux distributions using the GRUB bootloader already run "whole disk ZFS". What they can do ... yes, we can do ... as well.

This guide is based on much trial and error (caused by lack of knowledge), on the current ZFS_Install_Guide, and on a whole disk ZFS guide for Ubuntu. Ubuntu also uses the GRUB bootloader, so some ideas are adapted from there.

Disclaimers

Warning

This guide works pretty well on one computer - that is mine! If it does not run on yours, or if it breaks on yours, then you should try to sort the issue out and report it to this page.

So, you may expect that it MAY work! ... and you should be aware that it MAY break your installation. This guide is not developed enough to ENSURE a stable production environment.

... however, I use it for that! ;-) - crazy as I am.

Important

ZFS will run properly only on 64-bit machines. If you plan to run ZFS on a 32-bit machine, you may as well try Russian roulette with six bullets. The outcome is clear, and surely not what you want!

Not covered in this Guide are:

- obvious steps from the regular funtoo installation guide Install and the ZFS installation guide ZFS_Install_Guide

- other partition layouts than GPT

- kernel sources other than gentoo-sources (debian-sources are not guaranteed to work)

- USB-bootable whole disk ZFS devices (as they need a proper set of udev rules)

- and maybe many more items ... ;-)

1. Preparations

In this section, we will prepare everything to be used during the system and boot loader installation.

Create an installation environment

To be able to install Funtoo on ZFS, we need a suitable installation environment. The next steps describe the setup:

Downloading the ISO (With ZFS)

This is a copy, as of 31.08.2015, from the ZFS_Install_Guide.

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85
Download System Rescue CD with ZFS
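The md5sum listed above can be used to check the download. As a quick sketch, assuming the ISO was saved to the current directory:

Verify the downloaded ISO against the md5sum listed above
# md5sum sysresccd-4.2.0_zfs_0.6.2.iso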

Creating a bootable USB from ISO (From a Linux Environment)

This is a copy, as of 31.08.2015, from the ZFS_Install_Guide.

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

While booting the above-mentioned ISO image - you should use exactly that one, unless you are writing another guide - you should use the option E) Alternative 64bit kernel (altker64) with more choice.... 64-bit, as mentioned above.

On the next page you should select 7. SystemRescueCd with the default graphic environment. The system will come up with a small graphical environment and some tools. One of them is GParted, a graphical partition editor, which will be used in this guide.

Preparing the hard drives

The preparation of the hard drives involves the following steps:

Cleaning the disks

This guide was developed using the GPT partition layout. Other layout types are not covered here.

Everything below this note is a copy, as of 31.08.2015, from the respective section of the ZFS_Install_Guide.

First let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# sgdisk -Z /dev/sda
Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Note

Do not forget to run this command for each drive participating in the new pool. In this guide a mirror pool is created, and the devices /dev/sda and /dev/sdb are used.
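For the two-disk mirror used in this guide, that means wiping both devices, for example:

Wipe both drives of the intended mirror pool
# sgdisk -Z /dev/sda
# sgdisk -Z /dev/sdb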

Create a pool

In this section, we create a whole disk ZFS pool (zpool). For the convenience of this guide, a mirror pool with the name tank will be created and not mounted (option -m none). A single disk ZFS pool or even raidz could be used instead.

The option ashift=12 is set for drives with a 4096-byte sector size, as commonly used by SSDs.

#   zpool create -f -o ashift=12 -m none -R /mnt/funtoo tank mirror /dev/sda /dev/sdb 
Important

Here we use the devices /dev/sda and /dev/sdb, and not the partitions as described in the ZFS_Install_Guide. The devices will be partitioned with a GPT partition label, and the respective pool, including the ZFS filesystems, will be created in one step.

The cachefile is omitted, as it only speeds up booting and unfortunately causes destructive issues when manipulating the pool later on.

Note

It is preferable to use /dev/disk/by-id/ ... Here it does not really matter, and at the important points we will reference the drives by ID.

zpool status can be used to verify the pool.
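As a sketch only, the same mirror pool could also be created directly with the persistent by-id links; the ...disk1 and ...disk2 parts stand for your actual device links under /dev/disk/by-id/:

Example only - create the pool using the by-id device links instead of /dev/sda and /dev/sdb
# zpool create -f -o ashift=12 -m none -R /mnt/funtoo tank mirror /dev/disk/by-id/...disk1 /dev/disk/by-id/...disk2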

Create the zfs datasets

This is a copy, as of 31.08.2015, from the ZFS_Install_Guide, except that the root dataset has been renamed to ROOT to avoid confusion with the normal Linux root home directory /root.

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets: one for the OS (/), plus some optional ones: /home, /usr/src, and /usr/portage. Note that these datasets are examples only and not strictly required.

Create some empty containers for organization purposes, and make the dataset that will hold /
#  zfs create -p tank/funtoo
#  zfs create -o mountpoint=/ tank/funtoo/ROOT

Optional, but recommended datasets: /home
#  zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
#  zfs create -o mountpoint=/usr/src tank/funtoo/src
#  zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
#  zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
#  zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages

df -k can be used to verify the mountpoints.
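zfs list can show the same information from the ZFS side, for example:

List the datasets and their mountpoints
# zfs list -o name,mountpoint -r tank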

Create empty snapshot of the pool (purely optional)

Expecting some trial and error during the system installation, a snapshot of the empty pool makes it easy to return to this point ... of the game ...

#   zfs snapshot -r tank@000-empty_pool  

zfs list -t snapshot lists the existing snapshots. The rollback will be described further down in the troubleshooting guide...
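As a sketch only, rolling a single dataset back to this snapshot (done from the rescue environment, not from inside the chroot) could look like this:

Example only - roll the ROOT dataset back to the empty state, discarding its newer snapshots
# zfs rollback -r tank/funtoo/ROOT@000-empty_pool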

Flag the bios_grub partition

The freshly created GPT partition table of each device used in the pool contains two partitions. The first and larger one contains the ZFS filesystem; the second (number 9), 8.0MiB in size, is reserved by Solaris (BF07).

We now use the graphical tool GParted: select partition 9 of each device and set the bios_grub flag. This action adds the flag and changes the partition type to BIOS boot partition (EF02).

This can be done from the command line as well.
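For example, assuming partition 9 is the small reserved partition on each pool device, sgdisk's typecode option can set the same partition type (a sketch only):

Example only - mark partition 9 of each pool device as BIOS boot partition (EF02)
# sgdisk -t 9:EF02 /dev/sda
# sgdisk -t 9:EF02 /dev/sdb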

2. Basic Funtoo installation

Now the installation environment has been set up and the hard drives have been laid out. The next step is the basic Funtoo installation, according to the following steps:

Import the pool

For installation purposes, the pool created above will be imported under the mountpoint /mnt/funtoo.

#   zpool import -f -d /dev/disk/by-id -R /mnt/funtoo tank  

zpool status and/or df -k can be used to verify the pool and the mountpoint.

Change directory to the pool's root directory

Then we should change into the pool's mountpoint directory. This is at the same time the root directory of the intended Funtoo installation, and the directory you will chroot into.

# cd /mnt/funtoo

Download and extract funtoo stage3

It is strongly recommended to use the generic_64 Funtoo stage3 tarball, to avoid a broken build. The download should be done using wget:

# wget http://build.funtoo.org/funtoo-current/x86-64bit/generic_64/stage3-latest.tar.xz
Note

A snapshot is recommended to allow the rollback to this stage.

The tarball should be extracted using the following command:

# tar xpf stage3-latest.tar.xz
Important

Omitting the option p will result in a broken system!

Note

More information can be found in the Funtoo_Linux_Installation guide ("Now download and extract the Funtoo stage3 ...").

Bind the kernel related directories

# mount -t proc none proc
# mount --rbind /dev dev
# mount --rbind /sys sys

Copy network settings

# cp -f /etc/resolv.conf etc

zpool.cache

zpool.cache is omitted as described above.

Chroot into Funtoo

# env -i HOME=/root TERM=$TERM chroot . bash -l
Note

Using the graphical installation environment described above, you can now open several terminal windows and change into the chroot environment in each of them using the following steps:

# cd /mnt/funtoo
# env -i HOME=/root TERM=$TERM chroot . bash -l

This allows you to already prepare the next steps while the previous one is still busy.

Setup portage

Now you should think a little about how to set up MAKEOPTS, LINGUAS and VIDEO_CARDS in your portage make.conf file.
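As a rough sketch only - the values below are placeholders and depend entirely on your CPU, languages, and graphics hardware:

/etc/portage/make.conf
# Example values only - adjust to your own machine
MAKEOPTS="-j4"
LINGUAS="en"
VIDEO_CARDS="intel"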

Setup /etc/fstab

/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

tank/funtoo/ROOT               /           zfs            noatime        0 0

Setup /etc/mtab

zfs and zpool will need it:

# grep -v rootfs /proc/mounts > /etc/mtab

Emerge portage tree

Before we install ZFS and the boot chain, we should update our Portage tree using:

# emerge --sync

Update the system

Now we will update the system using the following command:

# emerge --uaDN @world
Note

Consider making a snapshot here, so we can roll back to a clean and updated system... This must be done outside the chroot environment, because inside it we still have no zfs/zpool installation.
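From a second terminal outside the chroot, such a snapshot could look like this (the snapshot name is only an example):

Example only - snapshot the whole pool after the world update, from outside the chroot
# zfs snapshot -r tank@010-system_updated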

Emerge genkernel

genkernel is the tool used by Gentoo and Funtoo to build the kernel and initramfs from sources. It supports ZFS for gentoo-sources, as mentioned above.

# emerge genkernel

Generate initial (unbootable) kernel

We need to build a first kernel. The ZFS build needs to find a valid kernel to configure itself properly.

# genkernel kernel
Note

No options are needed... Once genkernel starts to build the kernel modules, the build may be interrupted with Ctrl-C. We have what we need, and the system is still non-bootable.

Note

As we have now done some CPU-consuming tasks, consider making a snapshot so we can roll back to a clean and updated system... This must be done outside the chroot environment, because inside it we still have no zfs/zpool installation.

The debian-sources

From mid-2015 on, Funtoo comes with a Debian kernel and initramfs preinstalled, and the stage3 is configured for debian-sources. The Debian kernel has much better automatic hardware detection and configuration support; that is the advantage. The disadvantage is that we will have to recompile it, and it will take a long time! So first of all, we may delete the preinstalled kernel and initramfs:

# rm /boot/*

3. Add ZFS support

Now we will add ZFS support to the basic Funtoo system:

The debian-sources

Using debian-sources, we have to ensure that the following kernel configuration options are set:

CONFIG_KALLSYMS=y

CONFIG_PREEMPT_NONE=y
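A quick way to check these options, assuming /usr/src/linux points to the debian-sources tree and a .config is already present, is to grep the configuration file:

Check the relevant options in the kernel configuration
# grep -E 'CONFIG_KALLSYMS=|CONFIG_PREEMPT_NONE=' /usr/src/linux/.config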

Installing the ZFS userspace tools

This is a nearly identical copy, as of 31.08.2015, from the ZFS_Install_Guide. Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

# emerge zfs

Check to make sure that the zfs tools are working.

# zpool status
# zfs list
# zfs list -t snapshot
Note

If /etc/mtab is missing, these commands will complain. In that case, solve it with:

# grep -v rootfs /proc/mounts > /etc/mtab

Add the zfs tools to openrc.

# rc-update add zfs-import boot
# rc-update add zfs-mount boot
# rc-update add zfs-share default
# rc-update add zfs-zed default
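The runlevel assignments can be double-checked with rc-update's show command:

Verify that the zfs services are registered in the expected runlevels
# rc-update show | grep zfs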
Note

A snapshot is recommended to allow rolling back to this stage. Now that zfs is available, you can and should make the snapshot from within the chroot environment.

Installing the ZFS initramfs, kernel and kernel modules

Now build the kernel, the initramfs, and the kernel modules:

# genkernel all --zfs --no-clean --callback="emerge @module-rebuild"
The following seems to work better, because emerge @module-rebuild also fetches debian-sources and overwrites our own configuration:
# genkernel all --zfs --no-clean --callback="emerge spl zfs-kmod zfs"
Note

To force a specific kernel configuration, you may use:

# genkernel all --zfs --no-clean --kerneldir=/usr/src/linux --kernel-config=/usr/src/<path_to_config> --callback="emerge spl zfs-kmod zfs"

The result should be the same.

Emerge, install and configure the grub bootloader

This section describes how to set up our boot configuration to boot from and into the whole disk ZFS.

Emerge grub

To emerge grub, we need to set some permanent USE flags first:

# echo "sys-boot/grub libzfs truetype" >> /etc/portage/package.use
# emerge grub
Note

Be aware that it is not -truetype but truetype. We will need some fonts to display the grub menu...

Check grub zfs support

The zfs support of grub may be checked by invoking grub-probe:

# grub-probe .

It should return zfs for a directory on the ZFS filesystem.
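For an explicit check, grub-probe can also be asked for the filesystem target of the root directory (inside the chroot, / lives on the pool):

Ask grub-probe explicitly for the filesystem of /
# grub-probe --target=fs /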

Install grub into /boot/grub directory

The command grub-install is basically a script taking care of all installation issues. Here we tweak it a little to do what we want. In the first step, it is invoked as usual with standard parameters for only one pool device.

# grub-install /dev/sda

That will create the /boot/grub directory and install grub into it. It also installs the bootloader into the BIOS boot partition, which will be overwritten later on.

Add zfs parameters to grub config

Now we need to tell grub to use our whole disk ZFS at boot time. That is done in two steps. First we edit /etc/default/grub:

/etc/default/grub
GRUB_CMDLINE_LINUX="boot=zfs rpool=tank bootfs=tank/funtoo/ROOT"

Thereafter we invoke grub-mkconfig to generate a new grub config file.

# grub-mkconfig -o /boot/grub/grub.cfg
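To verify that the ZFS parameters made it into the generated configuration, the new grub.cfg can simply be grepped:

Check that the zfs boot parameters are present in the generated config
# grep "boot=zfs" /boot/grub/grub.cfg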

Finally install the grub bootloader

As the last step, we need to install the grub bootloader properly:

# grub-install $(readlink -f /dev/disk/by-id/...disk1)
# grub-install $(readlink -f /dev/disk/by-id/...disk2)

If everything went through without any error report, you have now installed a mirrored whole disk zpool, which can boot from either of the two disks. If one of the two disks fails, the system can be booted and recovered easily from the other one.

4. Final steps & cleanup

The final steps and the cleanup can be done according to the ZFS_Install_Guide. Further installation should be guided by Install and Funtoo_Linux_First_Steps.

Important

enjoy it!