Difference between pages "Whole disk ZFS Install Guide" and "ZFS Install Guide"

== 0. Introduction ==

ZFS seems to be one of the filesystems of the future. At the time of writing (31.08.2015), booting from ZFS is still a headache. Some Linux distributions boot from an ext2 partition and run ZFS as rootfs, like [[ZFS_Install_Guide]], some are able to boot from ZFS, and some support ZFS only as a data filesystem.

Funtoo Linux supports ZFS as a filesystem and as rootfs, however it does not support it as the boot/grub filesystem. For easier discovery by search engines, that setup is referred to in this document as whole disk ZFS.

Funtoo Linux uses the grub bootloader, so it was not really understandable why whole disk ZFS could not be supported by Funtoo, as grub is able to do so. Or better: some Linux distributions using the grub bootloader already run "whole disk ZFS". What they can do ... yes, we can do as well.

This guide is based on a lot of trial and error - caused by lack of knowledge - on the actual [[ZFS_Install_Guide]], and on a whole disk ZFS guide for Ubuntu. Ubuntu also runs the grub bootloader, so some ideas are adapted from there.

=== Disclaimers ===

{{fancywarning|This guide is working pretty well on one computer - that is mine! If it does not run on yours, or if it breaks on yours, then you should try to sort the issue out and report it to this page.

So, you may expect that it MAY work! ... and you should be aware that it MAY break your installation. This guide is not developed enough to ENSURE a stable production environment.

... however, I use it for that! ;-) - crazy as I am.
}}

{{fancyimportant|ZFS will run properly only on 64-bit machines. If you plan to run ZFS on 32-bit, you may as well try Russian roulette with six bullets. The outcome is clear, and surely not what you want!

Not covered in this guide are:

- obvious steps from the regular Funtoo installation guide [[Install]] and the ZFS installation guide [[ZFS_Install_Guide]]

- partition layouts other than GPT

- kernel sources other than gentoo-sources, as debian-sources do not work for sure

- USB-bootable whole disk ZFS devices (as they need a proper set of udev rules)

- and maybe many more items ... ;-)}}

== 1. Preparations ==
In this section, we will prepare everything to be used during system and boot loader installation.

=== Create an installation environment ===
To be able to install Funtoo on ZFS, we need a suitable installation environment. The next steps describe the setup:

==== Downloading the ISO (With ZFS) ====

This is a copy by the date of 31.08.2015 from the [[ZFS_Install_Guide]].

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br />
==== Creating a bootable USB from ISO (From a Linux Environment) ====

This is a copy by the date of 31.08.2015 from the [[ZFS_Install_Guide]].

After you download the iso, do the steps described there to create a bootable USB. That should be all you need to do to get your flash drive working.
  
==== Booting the ISO ====
While booting the above mentioned iso image - you should use exactly that one, unless you are writing another guide - you should use the option '''E) Alternative 64bit kernel (altker64) with more choice...'''. 64-bit, as mentioned above.

On the next page you should select '''7. SystemRescueCd with the default graphic environment'''. The system will come up with a small graphical environment and some tools. One of them is a graphical version of GParted, which will be used in this guide.
  
=== Preparing the harddrives ===

The preparation of the harddrives involves the following steps:

==== Cleaning the disks ====
This guide was developed using the GPT partition layout. Any other layout type is not covered here.

All below this note is a copy by the date of 31.08.2015 from the respective section of the [[ZFS_Install_Guide]].

First let's make sure that the disk is completely wiped of any previous disk labels and partitions.
We will also assume that <tt>/dev/sda</tt> is the target drive.

<console>
# ##i##sgdisk -Z /dev/sda
</console>

{{fancywarning|This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.}}
{{fancynote|Do not forget to run this command for each drive participating in the new pool. In this guide a mirror pool is created and the devices <tt>/dev/sda</tt> and <tt>/dev/sdb</tt> are used.}}
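For the mirror layout used in this guide, that means wiping the second disk as well:

<console>
# ##i##sgdisk -Z /dev/sdb
</console>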
 
==== Create a pool ====

In this section, we create the whole disk ZFS <code>zpool</code>. For the convenience of this guide, a <code>mirror</code> pool with the name <code>tank</code> will be created and not mounted (option <code>-m none</code>). A single disk ZFS pool or even <code>raidz</code> could be used instead.

The option <code>ashift=12</code> is set for hard drives with a 4096 byte sector size, as commonly used by SSD drives.

<console># ##i##  zpool create -f -o ashift=12 -m none -R /mnt/funtoo tank mirror /dev/sda /dev/sdb </console>
{{fancyimportant|Here we use the devices /dev/sda and /dev/sdb, and not the partitions as described in the [[ZFS_Install_Guide]]. The devices will be partitioned with a GPT partition label, and the respective pool including the ZFS filesystems will be created in one step.

The cachefile is omitted, as it only speeds up booting and unfortunately creates destructive issues while manipulating the pool later on.}}
{{fancynote|It is preferred to use /dev/disk/by-id/ ... Here it does not really matter, and at the important points we will use the reference to the drives by ID.}}

<code>zpool status</code> can be used to verify the pool.
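As a quick check, the new pool and the stable by-id names of its member disks can be inspected like this (the device names are examples):

<console>
# ##i##zpool status tank
# ##i##ls -l /dev/disk/by-id/
</console>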
==== Create the zfs datasets ====

This is a copy by the date of 31.08.2015 from the [[ZFS_Install_Guide]], except that the <code>root</code> dataset has been renamed to <code>ROOT</code> to avoid confusion with the normal linux <code>root</code> home directory <code>/root</code>.

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/). We will also show you how to create some optional datasets: <code>/home</code>, <code>/usr/src</code>, and <code>/usr/portage</code>. Note that these datasets are examples only and not strictly required.

<console>
Create some empty containers for organization purposes, and make the dataset that will hold /
# ##i## zfs create -p tank/funtoo
# ##i## zfs create -o mountpoint=/ tank/funtoo/ROOT

Optional, but recommended datasets: /home
# ##i## zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# ##i## zfs create -o mountpoint=/usr/src tank/funtoo/src
# ##i## zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
# ##i## zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
# ##i## zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
</console>
<code>df -k</code> can be used to verify the mountpoints.
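If you prefer the zfs tools for verification, the datasets and their mountpoints can also be listed directly:

<console>
# ##i##zfs list -r tank
</console>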
 
  
==== Create empty snapshot of the pool (purely optional) ====
Expecting some trial and error during the system installation, a snapshot of the empty pool makes it easy to return to this point ... of the game ...
<console># ##i##  zfs snapshot -r tank@000-empty_pool  </console>
<code>zfs list -t snapshot</code> allows listing the existing snapshots.
The rollback will be described further down in the troubleshooting guide...

==== Flag the bios_grub partition ====
The freshly created GPT partition table of each device used in the pool contains two partitions. The first and larger one contains the zfs filesystem; the second (number 9), <code>8.0MiB</code> in size, is reserved by Solaris (<code>BF07</code>).

We now use the graphical tool GParted, select partition 9 of each device and set the <code>bios_grub</code> flag. This action adds the flag and changes the partition type to BIOS boot partition (<code>EF02</code>).

This can be done on the command line as well ...
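A minimal sketch of the command line variant, assuming partition 9 is the small Solaris reserved partition on both disks, is to change the type code with <code>sgdisk</code>:

<console>
# ##i##sgdisk -t 9:EF02 /dev/sda
# ##i##sgdisk -t 9:EF02 /dev/sdb
</console>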
  
=== 2. Basic Funtoo installation ===
Now the installation environment has been set up, and the harddrives are laid out. The next step is the basic Funtoo installation according to the following steps:

==== Import the pool ====
For installation purposes, the pool created above will be imported under the mountpoint <code>/mnt/funtoo</code>.
<console># ##i##  zpool import -f -d /dev/disk/by-id -R /mnt/funtoo tank
</console>
<code>zpool status</code> and / or <code>df -k</code> can be used to verify the pool and the mountpoint.

==== Change directory to the pool's root directory ====
Then we change into the pool's mountpoint directory. At the same time this is the root directory of the intended Funtoo installation, i.e. the directory you will chroot into.
<console>
# ##i##cd /mnt/funtoo
</console>

==== Download and extract funtoo stage3 ====
It is strongly recommended to use the generic_64 funtoo stage3 tarball, to avoid a broken build.
The download should be done using <code>wget</code>:
<console>
###i## wget http://build.funtoo.org/funtoo-current/x86-64bit/generic_64/stage3-latest.tar.xz
</console>
{{fancynote|A snapshot is recommended to allow the rollback to this stage.}}
The tarball should be extracted using the following command:
<console>
###i## tar xpf stage3-latest.tar.xz
</console>
{{Important|Omitting the option '''p''' will result in a broken system!}}
{{fancynote|More information can be found under [[Funtoo_Linux_Installation|Now download and extract the Funtoo stage3 ...]]}}
 
  
==== Bind the kernel related directories ====
<console>
# ##i##mount -t proc none proc
# ##i##mount --rbind /dev dev
# ##i##mount --rbind /sys sys
</console>

==== Copy network settings ====
<console>
# ##i##cp -f /etc/resolv.conf etc
</console>

==== zpool.cache ====
The <code>zpool.cache</code> file is omitted, as described above.
  
==== Chroot into Funtoo ====
<console>
# ##i##env -i HOME=/root TERM=$TERM chroot . bash -l
</console>
{{fancynote|Using the graphical installation environment as described above allows you to open several windows and change root into the chroot environment in each of them using the following steps:
<console>
# ##i##cd /mnt/funtoo
# ##i##env -i HOME=/root TERM=$TERM chroot . bash -l
</console>
This allows you to already prepare the next steps, while the previous one is still busy.
}}
==== Setup portage ====
Now you should think a little about how to set up <code>MAKEOPTS</code>, <code>LINGUAS</code> and <code>VIDEO_CARDS</code> in your portage <code>make.conf</code> file.
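A minimal sketch of what is meant here (the values are examples only and should be adapted to your hardware and needs):

{{file|name=/etc/portage/make.conf|desc= |body=
MAKEOPTS="-j4"
LINGUAS="en"
VIDEO_CARDS="intel"
}}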
==== Setup /etc/fstab ====

{{file|name=/etc/fstab|desc= |body=
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

tank/funtoo/ROOT               /          zfs           noatime       0 0
}}

==== Setup /etc/mtab ====
<code>zfs</code> and <code>zpool</code> will need it:
<console>
# ##i##grep -v rootfs /proc/mounts > /etc/mtab
</console>
 
  
==== Emerge portage tree ====
Before we install zfs and the boot chain, we should update our portage tree using:
<console>
# ##i##emerge --sync
</console>

==== Update the system ====
Now we will update the system using the following command:
<console>
# ##i##emerge -uaDN @world
</console>
{{fancynote|Consider making a snapshot here, so we can roll back to a clean and updated system... This must be done outside the chroot environment, because inside it we still have no zfs / zpool installation.}}

==== Emerge genkernel ====
<code>genkernel</code> is the tool used by gentoo and funtoo to build kernel and initramfs from sources. It supports <code>zfs</code> for the <code>gentoo-sources</code>, as mentioned above.
<console>
# ##i##emerge genkernel
</console>

==== Generate initial (unbootable) kernel ====
We need to build a first kernel. The <code>zfs</code> build needs to find a valid kernel to configure itself properly.
<console>
# ##i##genkernel kernel
</console>
{{fancynote|No options are needed... And once <code>genkernel</code> starts to build the kernel modules, the build may be interrupted with <code>ctrl c</code>. We have what we need, and the system is still non-bootable.}}
{{fancynote|As we have now done some cpu consuming tasks, consider making a snapshot, so we can roll back to a clean and updated system... This must be done outside the chroot environment, because inside it we still have no zfs / zpool installation.}}

==== The debian-sources ====
From mid 2015 on, funtoo comes with a debian kernel and initramfs preinstalled. At the same time it is configured for debian-sources.
Basically the debian distribution has much better automatic hardware detection and configuration support. That is the advantage. The disadvantage is that we will have to recompile it, and it will take a long time!
So first of all, we may delete the preinstalled kernel and initramfs:
<console>
# ##i##rm /boot/*
</console>
 
  
== 3. Add ZFS support ==
Now we will add ZFS support to the basic funtoo system:

=== The debian-sources ===
Using the debian-sources we have to ensure that the following kernel configuration options are set:

CONFIG_KALLSYMS=y
CONFIG_PREEMPT_NONE=y

=== Installing the ZFS userspace tools ===
This is a nearly identical copy by the date of 31.08.2015 from the [[ZFS_Install_Guide]].

Emerge {{Package|sys-fs/zfs}}. This package will bring in {{Package|sys-kernel/spl}} and {{Package|sys-fs/zfs-kmod}} as its dependencies:
<console>
# ##i##emerge zfs
</console>
Check to make sure that the zfs tools are working.

<console>
# ##i##zpool status
# ##i##zfs list
# ##i##zfs list -t snapshot
</console>

{{fancynote|If /etc/mtab is missing, these commands will complain.
In that case solve that with:
<console>
# ##i##grep -v rootfs /proc/mounts > /etc/mtab
</console>}}

Add the zfs tools to openrc.
<console># ##i##rc-update add zfs boot</console>
{{fancynote|A snapshot is recommended to allow the rollback to this stage. Now you can and should make the snapshot within the chroot environment.}}
=== Installing the ZFS initramfs, kernel and kernel modules ===
Now build the kernel, initramfs and kernel modules:

<console>
# ##i##genkernel all --zfs --no-clean --callback="emerge @module-rebuild"
</console>
It looks like the following works better, as <code>module-rebuild</code> fetches <code>debian-sources</code> as well and overwrites your own configuration:
<console>
# ##i##genkernel all --zfs --no-clean --callback="emerge spl zfs-kmod zfs"
</console>
{{fancynote|To force a specific kernel configuration, you may use:
<console>
# ##i##genkernel all --zfs --no-clean --kerneldir=/usr/src/linux --kernel-config=/usr/src/<path_to_config> --callback="emerge spl zfs-kmod zfs"
</console>
The result should be the same.}}
 
  
=== Emerge, install and configure grub bootloader ===
This section describes how to set up our boot configuration to boot from and into the whole disk ZFS.

==== Emerge grub ====
To emerge grub, we need to set some permanent USE flags first:
<console>
# ##i##echo "sys-boot/grub libzfs truetype" >> /etc/portage/package.use
# ##i##emerge grub
</console>
{{fancynote|Be aware that it is not <code>-truetype</code> but <code>truetype</code>. We will need some fonts to display the grub menu...}}

==== Check grub zfs support ====
The zfs support of grub may be checked by invoking <code>grub-probe</code>:
<console>
# ##i##grub-probe .
</console>
It should return <code>zfs</code> for a directory on the zfs filesystem.

==== Install grub into /boot/grub directory ====
The command <code>grub-install</code> is basically a script taking care of all installation issues. For now we tweak it a little to do what we want. In the first step, it is invoked as usual with standard parameters for only one pool device.
<console>
# ##i##grub-install /dev/sda
</console>
That will create the /boot/grub directory and install grub into it.
It also installs the bootloader into the BIOS grub partition, which will be overwritten later on.

==== Add zfs parameters to grub config ====
Now we need to tell grub to use our whole disk zfs during boot time. That is done in two steps.
First we edit <code>/etc/default/grub</code>:

{{file|name=/etc/default/grub|desc= |body=
GRUB_CMDLINE_LINUX="boot=zfs rpool=tank bootfs=tank/funtoo/ROOT"
}}
Thereafter we invoke <code>grub-mkconfig</code> to generate a new grub config file.
<console>
# ##i##grub-mkconfig -o /boot/grub/grub.cfg
</console>
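To make sure the zfs parameters actually ended up in the generated configuration, a quick check can help (the exact menu entries depend on your setup):
<console>
# ##i##grep "boot=zfs" /boot/grub/grub.cfg
</console>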
==== Finally install the grub bootloader ====
As the last step, we need to install the grub bootloader properly:

<console>
# ##i##grub-install $(readlink -f /dev/disk/by-id/...disk1)
# ##i##grub-install $(readlink -f /dev/disk/by-id/...disk2)
</console>
If all went through without any error report, you have now installed a mirrored whole disk zpool, which may boot from either of the two disks. In case one of the two disks fails, the system may be booted and recovered easily from the other one.

== 4. Final steps & cleanup ==
The final steps and the cleanup can be done according to the [[ZFS_Install_Guide]].
Further installation should be guided by [[Install]] and [[Funtoo_Linux_First_Steps]].

{{important|'''enjoy it!'''}}

Revision as of 07:50, September 12, 2015

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).

For a quick tour of ZFS and have a big picture of its common operations you can consult the page ZFS Fun.

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undergone an upgrade - from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3. At the present date grub 2.02 is not able to deal with the new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only once grub is able to deal with it - in a future version ... If not, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!
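A quick way to check which ZFS version and feature set the booted rescue environment actually provides before creating the pool (a sanity check; the exact output varies with the CD you booted):

# modinfo zfs | grep -i version
# zpool upgrade -v | head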

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Downloading the ISO (With ZFS)

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85
Download System Rescue CD with ZFS

Creating a bootable USB from ISO (From a Linux Environment)

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

Warning

When booting into the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "failed to load module stack" error message.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will be showing you how to partition the disk manually, because doing so lets you create your own layout, gives you a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and lets you boot into RAID10, RAID5 (RAIDZ) pools and any other layouts thanks to that separate /boot partition.

gdisk (GPT Style)

A Fresh Start:

First let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# sgdisk -Z /dev/sda
Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

First open up the application:

# gdisk /dev/sda

Create Partition 1 (boot):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +250M ↵
Hex Code: 

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: 
First sector: 
Last sector: 
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵

Format your /boot partition

# mkfs.ext2 -m 1 /dev/sda1

Create the zpool

We will first create the pool. The pool will be named tank. Feel free to name your pool as you want. We will use the ashift=12 option, which is used for hard drives with a 4096 byte sector size.

#   zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3 

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and for your swap. We will also show you how to create some optional datasets: /home, /usr/src, and /usr/portage. Note that these datasets are examples only and not strictly required.

Create some empty containers for organization purposes, and make the dataset that will hold /
#  zfs create -p tank/funtoo
#  zfs create -o mountpoint=/ tank/funtoo/root

Optional, Create swap
#  zfs create tank/swap -V 2G -b 4K
#  mkswap /dev/tank/swap
#  swapon /dev/tank/swap

Optional, but recommended datasets: /home
#  zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
#  zfs create -o mountpoint=/usr/src tank/funtoo/src
#  zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
#  zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
#  zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages

Installing Funtoo

Pre-Chroot

Go into the directory that you will chroot into
# cd /mnt/funtoo

Make a boot folder and mount your boot drive
# mkdir boot
# mount /dev/sda1 boot

Now download and extract the Funtoo stage3 ...

Note

It is strongly recommended to use the current version and generic64. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the eselect profile set ... command. If you create a snapshot before, you may always come back to your previous installation with a few simple steps ... (roll back your pool and, in the worst case, configure and install the bootloader again)

Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

Bind the kernel related directories
# mount -t proc none proc
# mount --rbind /dev dev
# mount --rbind /sys sys

Copy network settings
# cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# mkdir etc/zfs
# cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# env -i HOME=/root TERM=$TERM chroot . bash -l
Note

How to create zpool.cache file?

If no zpool.cache file is available, the following command will create one:

# zpool set cachefile=/etc/zfs/zpool.cache tank

Downloading the Portage tree

Note

For an alternative way to do this, see Installing Portage From Snapshot.

Now it's time to install a copy of the Portage repository, which contains package scripts (ebuilds) that tell portage how to build and install thousands of different software packages. To create the Portage repository, simply run emerge --sync from within the chroot. This will automatically clone the portage tree from GitHub:

(chroot) # emerge --sync
Important

If you receive an error with the initial emerge --sync due to git protocol restrictions, change the SYNC variable in /etc/portage/make.conf:

SYNC="https://github.com/funtoo/ports-2012.git"
Note

To update the Funtoo Linux system just type:

(chroot) # emerge -auDN @world

Add filesystems to /etc/fstab

Before we continue to compile and/or install our kernel in the next step, we will edit the /etc/fstab file, because if we decide to install our kernel through portage, portage will need to know where our /boot is, so that it can place the files in there.

Edit /etc/fstab:

/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1               /boot           ext2            defaults        0 2

# If you set up a swap partition, you will have to add this line as well
/dev/zvol/tank/swap     none            swap            defaults        0 0

Building kernel, initramfs and grub to work with zfs

Install genkernel and initial kernel build

First we need to install genkernel:

# emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# genkernel kernel --no-clean --no-mountboot 

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

# emerge zfs

Check to make sure that the zfs tools are working. The zpool.cache file that you copied before should be displayed.

# zpool status
# zfs list
Note

If /etc/mtab is missing, these two commands will complain. In that case solve that with:

# grep -v rootfs /proc/mounts > /etc/mtab

Add the zfs tools to openrc.

# rc-update add zfs boot
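
To double-check that the service really got added to the boot runlevel (a quick sanity check):

# rc-update show boot | grep zfs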

If everything worked, continue.

Install GRUB 2

Install grub2:

# echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# emerge grub

Now install grub to the drive itself (not a partition):

# grub-install /dev/sda

Initial kernel build

Now build the kernel and initramfs with --zfs:

# genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"

Using the debian-sources, the following command may give better results:

# genkernel all --zfs --no-clean --no-mountboot --callback="emerge spl zfs-kmod zfs"

Configuring the Bootloader

Using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Edit /etc/boot.conf:

/etc/boot.conf
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}

The command boot-update should take care of grub configuration:

Install boot-update (if it is missing):
# emerge boot-update

Run boot-update to update grub.cfg
# boot-update
Note

If boot-update fails, try this:

# grub-mkconfig -o /boot/grub/grub.cfg
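
Either way, it is worth verifying that the generated grub.cfg actually picked up the ZFS parameters before rebooting (a quick sanity check; the exact entries depend on your setup):

# grep real_root /boot/grub/grub.cfg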

Now you should have a new installation of the kernel, initramfs and grub which are zfs capable. The configuration files should be updated, and the system should come up during the next reboot.

Note

The luks integration works basically the same way.

Final configuration

Clean up and reboot

We are almost done, we are just going to clean up, set our root password, and unmount whatever we mounted and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot
Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

and that should be enough to get your system to boot on ZFS.

After reboot

Forgot to reset password?

System Rescue CD

If you aren't using bliss-initramfs, then you can reboot back into your sysresccd and reset it from there by mounting your drive, chrooting, and then typing passwd.

Example:

# zpool import -f -R /mnt/funtoo tank
# chroot /mnt/funtoo bash -l
# passwd
# exit
# zpool export -f tank
# reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type (This will only revert your / dataset while keeping the rest of your data intact):

# zfs rollback tank/funtoo/root@install
Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.

Troubleshooting

Starting from scratch

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

Destroy the pool and any snapshots and datasets it has
# zpool destroy -R -f tank

This deletes the files from /dev/sda1 so that even after we zap, recreating the drive in the exact sector
position and size will not give us access to the old files in this partition.
# mkfs.ext2 /dev/sda1
# sgdisk -Z /dev/sda

Now start the guide again :).

Starting again reusing the same disk partitions and the same pool

If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as done before.

import the pool reusing all existing datasets:
# zpool import -f -R /mnt/funtoo tank

Now you should wipe the previous installation off:

let's go to our base installation directory:
# cd /mnt/funtoo

and delete the old installation: 
# rm -rf *

Now start the guide again, at "Pre-Chroot"