Metro
{{#layout:doc}}[[Metro]] is the build system for Funtoo Linux and [[Gentoo Linux]] stages. It automates the bootstrapping process.
  
This tutorial will take you through installing, setting up and running [[Metro]].
  
These other Metro documents are also available:
  
{{#ask: [[Category:Metro]]
|format=ul
}}
  
= Preface =
  
== How Metro Works ==
  
Metro is the Funtoo Linux automated build system, and is used to build Funtoo Linux stage tarballs.
  
[[Metro]] cannot create a stage tarball out of thin air. To build a new stage tarball, [[Metro]] must use an existing, older stage tarball called a "seed" stage. This seed stage typically is used as the ''build environment'' for creating the stage we want.
  
[[Metro]] can use two kinds of seed stages. Traditionally, [[Metro]] has used a stage3 as a seed stage. This stage3 is then used to build a new stage1, which in turn is used to build a new stage2, and then a new stage3. This is generally the most reliable way to build [[Gentoo Linux]] or Funtoo Linux, so it's the recommended approach.
{{fancyimportant|'''After switching metro builds to Funtoo profile, Gentoo stages are no longer provided'''!}}
  
== Seeds and Build Isolation ==
  
Another important concept to mention here is something called ''build isolation''. Because [[Metro]] creates an isolated build environment, and the build environment is explicitly defined using existing, tangible entities -- a seed stage and a portage snapshot -- you will get consistent, repeatable results. In other words, the same seed stage, portage snapshot and build instructions will generate an essentially identical result, even if you perform the build a month later on someone else's workstation.
  
== Local Build ==
  
Say you wanted to build a new <tt>pentium4</tt> stage3 tarball. The recommended method of doing this would be to grab an existing <tt>pentium4</tt> stage3 tarball to use as your seed stage. [[Metro]] will be told to use this existing <tt>pentium4</tt> stage3 to build a new stage1 for the same <tt>pentium4</tt>. For this process, the generic <tt>pentium4</tt> stage3 would provide the ''build environment'' for creating our new stage1. Then, the new stage1 would serve as the build environment for creating the new <tt>pentium4</tt> stage2. And the new <tt>pentium4</tt> stage2 would serve as the build environment for creating the new <tt>pentium4</tt> stage3.
  
In the [[Metro]] terminology this is called a '''local build''', which means a stage3 of a given architecture is used to seed a brand new build of the same architecture. Incidentally this will be the first exercise we are going to perform in this tutorial.
  
A week later, you may want to build a brand new <tt>pentium4</tt> stage3 tarball. Rather than starting from the original <tt>pentium4</tt> stage3 again, you'd probably configure [[Metro]] to use the most-recently-built <tt>pentium4</tt> stage3 as the seed. [[Metro]] has built-in functionality to make this easy, allowing it to easily find and track the most recent stage3 seed available.
  
== Remote Build ==
  
[[Metro]] can also perform a '''remote build''', where a stage3 of a different, but binary compatible, architecture is used as a seed to build a stage3 for another architecture. Consequently, the second exercise we are going to perform in this tutorial will be to build a <tt>core2 32bit</tt> stage3 tarball from the <tt>pentium4</tt> stage3 tarball we have just built.
  
TODO: add caveats about which arches can be seeded and which cannot (maybe a table?)
  
== Tailored Build ==
  
Last, it's also worth noting that in both <tt>local</tt> and <tt>remote</tt> builds, [[Metro]] can be configured to add and/or remove individual packages from the final tarball.
Let's say you can't live without <tt>app-misc/screen</tt>; at the end of this tutorial, we will show you how to have your tailored stage3 include it.
  
== Installing Metro ==
  
'''The recommended and supported method''' is to use the Git repository of [[Metro]]. 
  
Ensure that {{Package|dev-vcs/git}} and {{Package|dev-python/boto}} (optional; required for EC2 support) are installed on your system:
  
<console>
# ##i##emerge dev-vcs/git
# ##i##emerge dev-python/boto
</console>
  
Next, clone the master git repository as follows:
 
  
<console>
# ##i##cd /root
# ##i##git clone git://github.com/funtoo/metro.git
# ##i##cp /root/metro/metro.conf ~/.metro
</console>
  
You will now have a directory called <tt>/root/metro</tt> that contains all the [[Metro]] source code.
  
Metro is now installed. It's time to customize it for your local system.
  
= Configuring Metro =
  
{{Note|Metro is not currently able to build Gentoo stages. See {{Bug|FL-901}}.}}
  
[[User:Drobbins|Daniel Robbins]] maintains [[Metro]], so it comes pre-configured to successfully build Funtoo Linux releases. Before reading further, you might want to customize some basic settings, like the number of concurrent jobs to fit your hardware's capabilities, or the directory to use for produced stage archives. This is accomplished by editing <tt>~/.metro</tt>, which is Metro's master configuration file.
  
Please note that <code>path/install</code> must point to where metro was installed. Point <code>path/distfiles</code> to where your distfiles reside. Also set <code>path/mirror/owner</code> and <code>path/mirror/group</code> to the owner and group of all the files that will be written to the build repository directory, which by default (as per the configuration file) is at <code>/home/mirror/funtoo</code>. The cache directory normally resides inside the temp directory -- this can be modified as desired. The cache directory can end up holding many cached .tbz2 packages, and eat up a lot of storage. You may want to place the temp directory on faster storage, for faster compile times, and place the cache directory on slower, but more plentiful storage.
  
{{file|name=.metro|desc=Metro configuration|body=
# Main metro configuration file - these settings need to be tailored to your install:
  
[section path]
install: /root/metro
tmp: /var/tmp/metro
cache: $[path/tmp]/cache
distfiles: /var/src/distfiles
work: $[path/tmp]/work/$[target/build]/$[target/name]
  
[section path/mirror]
  
: /home/mirror/funtoo
owner: root
group: repomgr
dirmode: 775
  
[section portage]
  
MAKEOPTS: auto
  
[section emerge]
  
options: --jobs=4 --load-average=4 --keep-going=n
  
# This line should not be modified:
[collect $[path/install]/etc/master.conf]
}}
  
== Arch and Subarch ==
  
In the following example we are creating a pentium4 stage3, compiled for x86-32bit binary compatibility. pentium4 is a subarch of the x86-32bit architecture. Once you have Metro installed, you can find a full list of subarches in your <tt>/root/metro/subarch</tt> directory; each subarch has the file extension <tt>.spec</tt>.
 
Example:
 
 
<console>
# ##i##ls /root/metro/subarch
amd64-bulldozer-pure64.spec  armv7a.spec          core-avx-i.spec        i686.spec        pentium.spec
amd64-bulldozer.spec        armv7a_hardfp.spec  core2_32.spec          k6-2.spec        pentium2.spec
amd64-k10-pure64.spec        athlon-4.spec        core2_64-pure64.spec    k6-3.spec        pentium3.spec
amd64-k10.spec              athlon-mp.spec      core2_64.spec          k6.spec          pentium4.spec
amd64-k8+sse3.spec          athlon-tbird.spec    corei7-pure64.spec      native_32.spec    pentiumpro.spec
amd64-k8+sse3_32.spec        athlon-xp.spec      corei7.spec            native_64.spec    prescott.spec
amd64-k8-pure64.spec        athlon.spec          generic_32.spec        niagara.spec      ultrasparc.spec
amd64-k8.spec                atom_32.spec        generic_64-pure64.spec  niagara2.spec    ultrasparc3.spec
amd64-k8_32.spec            atom_64-pure64.spec  generic_64.spec        nocona.spec      xen-pentium4+sse3.spec
armv5te.spec                atom_64.spec        generic_sparcv9.spec    opteron_64.spec  xen-pentium4+sse3_64.spec
armv6j.spec                  btver1.spec          geode.spec              pentium-m.spec
armv6j_hardfp.spec          btver1_64.spec      i486.spec              pentium-mmx.spec
</console>
 
  
= First stages build (local build) =
 
To get this all started, we need to bootstrap the process by downloading an initial seed stage3 to use for building and place it in its proper location in <tt>/home/mirror/funtoo</tt>, so that [[Metro]] can find it. We will also need to create some special "control" files in <tt>/home/mirror/funtoo</tt>, which will allow [[Metro]] to understand how it is supposed to proceed.
 
== Step 1: Set up pentium4 repository (local build) ==
 
Assuming we're following the basic steps outlined in the previous section, and building an unstable funtoo (<tt>funtoo-current</tt>) build for the <tt>pentium4</tt>, using a generic <tt>pentium4</tt> stage3 as a seed stage, here is the first set of steps we'd perform:
 
 
<console>
# ##i##install -d /home/mirror/funtoo/funtoo-current/x86-32bit/pentium4
# ##i##install -d /home/mirror/funtoo/funtoo-current/snapshots
# ##i##cd /home/mirror/funtoo/funtoo-current/x86-32bit/pentium4
# ##i##install -d 2011-12-13
# ##i##cd 2011-12-13
# ##i##wget -c http://ftp.osuosl.org/pub/funtoo/funtoo-current/x86-32bit/pentium4/2011-12-13/stage3-pentium4-funtoo-current-2011-12-13.tar.xz
# ##i##cd ..
# ##i##install -d .control/version
# ##i##echo "2011-12-13" > .control/version/stage3
# ##i##install -d .control/strategy
# ##i##echo local >  .control/strategy/build
# ##i##echo stage3 > .control/strategy/seed
</console>
 
  
OK, let's review the steps above. First, we create the directory <tt>/home/mirror/funtoo/funtoo-current/x86-32bit/pentium4</tt>, which is where Metro will expect to find unstable <tt>funtoo-current</tt> pentium4 builds -- it is configured to look here by default. Then we create a specially-named directory to house our seed x86 stage3. Again, by default, Metro expects the directory to be named this way. We enter this directory, and download our seed x86 stage3 from funtoo.org. Note that the <tt>2011-12-13</tt> version stamp matches. Make sure that your directory name matches the stage3 name too. Everything has been set up to match Metro's default filesystem layout.
  
Next, we go back to the <tt>/home/mirror/funtoo/funtoo-current/x86-32bit/pentium4</tt> directory, and inside it, we create a <tt>.control</tt> directory. This directory and its subdirectories contain special files that Metro references to determine certain aspects of its behavior. The <tt>.control/version/stage3</tt> file is used by Metro to track the most recently-built stage3 for this particular build and subarch. Metro will automatically update this file with a new version stamp after it successfully builds a new stage3. But because Metro didn't actually ''build'' this stage3, we need to set up the <tt>.control/version/stage3</tt> file manually. This will allow Metro to find our downloaded stage3 when we set up our pentium4 build to use it as a seed. Also note that Metro will create a similar <tt>.control/version/stage1</tt> file after it successfully builds a pentium4 funtoo-current stage1.
  
We also set up <tt>.control/strategy/build</tt> and <tt>.control/strategy/seed</tt> files with values of <tt>local</tt> and <tt>stage3</tt> respectively. These files define the building strategy Metro will use when we build pentium4 funtoo-current stages. With a build strategy of <tt>local</tt>, Metro will source its seed stage from funtoo-current pentium4, the current directory. And with a seed strategy of <tt>stage3</tt>, Metro will use a stage3 as a seed, and use this seed to build a new stage1, stage2 and stage3.
  
== Step 2: Building the pentium4 stages ==
  
Incidentally, if all you wanted to do at this point was to build a new pentium4 funtoo-current stage1/2/3 (plus openvz and vserver templates), you would begin the process by typing:
  
 
<console>
# ##i##cd /root/metro
# ##i##scripts/ezbuild.sh funtoo-current pentium4
</console>
 
  
If you have a slow machine, it could take several hours to complete, because several "heavy" components like gcc or glibc have to be recompiled in each stage. Once a stage has been successfully built, it is placed in the <tt>${METRO_MIRROR}/funtoo-current/x86-32bit/pentium4/YYYY-MM-DD</tt> subdirectory, where <tt>YYYY-MM-DD</tt> is today's date at the time the <tt>ezbuild.sh</tt> script was started, or the date you passed on the <tt>ezbuild.sh</tt> command line.
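Once a build completes, you can sanity-check the result by listing the dated output directory. This is only an illustrative sketch; the <tt>2011-12-20</tt> date stamp is hypothetical, and the exact set of tarballs depends on the targets you built:

<console>
# ##i##ls /home/mirror/funtoo/funtoo-current/x86-32bit/pentium4/2011-12-20/
stage1-pentium4-funtoo-current-2011-12-20.tar.xz  stage2-pentium4-funtoo-current-2011-12-20.tar.xz  stage3-pentium4-funtoo-current-2011-12-20.tar.xz
</console>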
  
= Building for another binary compatible architecture (remote build) =
  
As written above, [[Metro]] is able to perform a '''remote build''', building a stage3 for a different architecture from a binary compatible seed stage3 (e.g. using a pentium4 stage3 to seed an <tt>Intel Core2 32bit</tt> stage3).
  
In the Metro terminology this is called a '''remote build''' (a stage 3 of a different, but binary compatible, architecture is used as a seed).
What's not compatible? You can't use a <tt>Sparc</tt> architecture to generate an <tt>x86</tt> or <tt>ARM</tt> based stage and vice-versa. If you use a 32bit stage then you don't want to seed a 64bit build from it. Be sure that you are using a stage from the same architecture that you are trying to seed. Check [http://ftp.osuosl.org/pub/funtoo/funtoo-current/ Funtoo-current FTP Mirror] for a stage that is from the same Architecture that you will be building. 
  
{{Note|Often, one build (ie. funtoo-current) can be used as a seed for another build such as funtoo-stable. However, hardened builds require hardened stages as seeds in order for the build to complete successfully.}}
  
== Step 1: Set up Core_2 32bit repository ==
  
In this example, we're going to use this pentium4 funtoo-current stage3 to seed a new Core_2 32bit funtoo-current build. To get that done, we need to set up the core2_32 build directory as follows:
  
 
<console>
# ##i## cd /home/mirror/funtoo/funtoo-current/x86-32bit
# ##i##install -d core2_32
# ##i##cd core2_32
# ##i##install -d .control/strategy
# ##i##echo remote > .control/strategy/build
# ##i##echo stage3 > .control/strategy/seed
# ##i##install -d .control/remote
# ##i##echo funtoo-current > .control/remote/build
# ##i##echo x86-32bit > .control/remote/arch_desc
# ##i##echo pentium4 > .control/remote/subarch
</console>
 
  
The steps we follow are similar to those we performed for a ''local build'' to set up our pentium4 directory for local build. However, note the differences. We didn't download a stage, because we are going to use the pentium4 stage to build a new Core_2 32bit stage. We also didn't create the <tt>.control/version/stage{1,3}</tt> files because Metro will create them for us after it successfully builds a new stage1 and stage3. We are still using a <tt>stage3</tt> seed strategy, but we've set the build strategy to <tt>remote</tt>, which means that we're going to use a seed stage that's not from this particular subdirectory. Where are we going to get it from? The <tt>.control/remote</tt> directory contains this information, and lets Metro know that it should look for its seed stage3 in the <tt>/home/mirror/funtoo/funtoo-current/x86-32bit/pentium4</tt> directory. Which one will it grab? You guessed it -- the most recently built ''stage3'' (since our seed strategy was set to <tt>stage3</tt>) that has the version stamp of <tt>2011-12-13</tt>, as recorded in <tt>/home/mirror/funtoo/funtoo-current/x86-32bit/pentium4/.control/version/stage3</tt>. Now you can see how all those control files come together to direct Metro to do the right thing.
  
{{Note|<code>arch_desc</code> should be set to one of: <code>x86-32bit</code>, <code>x86-64bit</code> or <code>pure64</code> for PC-compatible systems. You must use a 32-bit build as a seed for other 32-bit builds, and a 64-bit build as a seed for other 64-bit builds.}}
 
  
== Step 2: Building the Core_2 32bit stages ==
  
Now, you could start building your new Core_2 32bit stage1/2/3 (plus openvz and vserver templates) by typing the following:
  
<console>
# ##i##/root/metro/scripts/ezbuild.sh funtoo-current core2_32
</console>
 
  
In that case, the produced stages are placed in the <tt>/home/mirror/funtoo/funtoo-current/x86-32bit/core2_32/YYYY-MM-DD</tt> subdirectory.
 
  
== Step 3: The Next Build ==
  
At this point, you now have a new Core_2 32bit stage3, built using a "remote" pentium4 stage3. Once the first remote build completes successfully, metro will automatically change <code>.control/strategy/build</code> to be <code>local</code> instead of <code>remote</code>, so it will use the most recently-built Core_2 32bit stage3 as a seed for any new Core_2 32bit builds from now on.
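If you want to see this switch for yourself, the strategy files can be inspected directly. This is just a quick sketch, assuming the default mirror layout used throughout this tutorial:

<console>
# ##i##cat /home/mirror/funtoo/funtoo-current/x86-32bit/core2_32/.control/strategy/build
local
# ##i##cat /home/mirror/funtoo/funtoo-current/x86-32bit/core2_32/.control/strategy/seed
stage3
</console>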
  
= Build your own tailored stage3 =
  
Metro can be easily configured to build a custom stage3 that includes additional packages. Edit the following configuration file <tt>/root/metro/etc/builds/funtoo-current/build.conf</tt>:
{{file|name=funtoo-current/build.conf|body=
[collect ../../fslayouts/funtoo/layout.conf]
  
[section release]
  
author: Daniel Robbins <drobbins@funtoo.org>
  
[section target]
  
compression: xz
  
[section portage]
  
FEATURES:
SYNC: $[snapshot/source/remote]
USE:
  
[section profile]
  
format: new
path: gentoo:funtoo/1.0/linux-gnu
arch: $[:path]/arch/$[target/arch_desc]
build: $[:path]/build/current
flavor: $[:path]/flavor/core
mix-ins:
  
[section version]
  
python: 2.7
  
[section emerge]
  
  
[section snapshot]
  
type: live
compression: xz
  
[section snapshot/source]
  
type: git
branch: funtoo.org
# branch to have checked out for tarball:
branch/tar: origin/master
name: ports-2012
remote: git://github.com/funtoo/ports-2012.git
options: pull
  
[section metro]
  
options:
options/stage: cache/package
target: gentoo
  
[section baselayout]
  
services: sshd
  
[section multi]
  
snapshot: snapshot
  
[section files]
  
motd/trailer: [
 
  
>>> Send suggestions, improvements, bug reports relating to...
  
>>> This release:                  $[release/author]
>>> Funtoo Linux (general):        Funtoo Linux (http://www.funtoo.org)
>>> Gentoo Linux (general):        Gentoo Linux (http://www.gentoo.org)
]
  
[collect ../../multi-targets/$[multi/mode:zap]]
 
}}
 
  
= Building Gentoo stages =
  
Metro can also build Gentoo stages. After the switch to Funtoo profiles (see http://www.funtoo.org/Funtoo_Profiles), Metro requires additional steps for this. We have an open bug for this -- it is simply due to the fact that we focus on ensuring Funtoo Linux builds, and building Gentoo is a lower priority. Historical note: Funtoo Linux originally started as a fork of Gentoo Linux so that Metro could reliably build Gentoo stages.
  
= Advanced Features =
  
Metro also includes a number of advanced features that can be used to automate builds and set up distributed build servers. These features require you to {{c|emerge sqlalchemy}}, as SQLite is used as a dependency.
  
== Repository Management ==
  
Metro includes a script in the {{c|scripts}} directory called {{c|buildrepo}}. Buildrepo serves as the heart of Metro's advanced repository management features.
  
=== Initial Setup ===
  
To use {{c|buildrepo}}, you will first need to create a {{f|.buildbot}} configuration file. Here is the file I use on my AMD Jaguar build server:
  
{{file|name=/root/.buildbot|lang=python|body=
builds = (
"funtoo-current",
"funtoo-current-hardened",
"funtoo-stable",
)
  
arches = (
"x86-64bit",
"pure64"
)
  
subarches = (
"amd64-jaguar",
"amd64-jaguar-pure64",
)
  
def map_build(build, subarch, full, full_date):
    # arguments refer to last build...
    if full == True:
        buildtype = ( "freshen", )
    else:
        buildtype = ( "full", )
    return buildtype
}}
  
This file is actually a python source file that defines the tuples {{c|builds}}, {{c|arches}} and {{c|subarches}}. These variables tell {{c|buildrepo}} which builds, arches and subarches it should manage. A {{c|map_build()}} function is also defined which {{c|buildbot}} uses to determine what kind of build to perform. The arguments passed to the function are based on the last successful build. The function can read these arguments and return a string to define the type of the next build. In the above example, the {{c|map_build()}} function will cause the next build after a freshen build to be a full build, and the next build after a full build to be a freshen build, so that the build will alternate between full and freshen.
  
== Automated Builds ==
  
Once the {{c|.buildbot}} file has been created, the {{c|buildrepo}} and {{c|buildbot.sh}} tools are ready to use. Here's how they work. These tools are designed to keep your repository ({{c|path/mirror}} in {{f|/root/.metro}}) up-to-date by inspecting your repository and looking for stages that are out-of-date.
  
To list the next build that will be performed, do this -- this is from my ARM build server:
  
{{console|body=
# ##i##./buildrepo nextbuild
build=funtoo-current
arch_desc=arm-32bit
subarch=armv7a_hardfp
fulldate=2015-02-08
nextdate=2015-02-20
failcount=0
target=full
extras=''
}}
  
If no output is displayed, then all your builds are up-to-date.
  
To actually run the next build, run {{c|buildbot.sh}}:
  
{{console|body=
# ##i##./buildbot.sh
}}
  
If you're thinking that {{c|buildbot.sh}} would be a good candidate for a cron job, you've got the right idea!
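A minimal sketch of such a cron entry is shown below; the schedule, the log location and the assumption that {{c|buildbot.sh}} lives in Metro's {{c|scripts}} directory are all choices you can adapt to your setup:

<console>
# ##i##crontab -l
# Kick off the next pending Metro build every night at 03:00
0 3 * * * cd /root/metro/scripts && ./buildbot.sh >> /var/log/metro-buildbot.log 2>&1
</console>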
  
=== List Builds ===
  
To get a quick look at our repository, let's run the {{c|buildrepo fails}} command:
  
{{console|body=
# ##i##./buildrepo fails
  0  2015-02-18 /home/mirror/funtoo/funtoo-current/x86-64bit/amd64-jaguar
  0  2015-02-18 /home/mirror/funtoo/funtoo-current/pure64/amd64-jaguar-pure64
  0  2015-02-18 /home/mirror/funtoo/funtoo-current-hardened/x86-64bit/amd64-jaguar
  0  2015-02-18 /home/mirror/funtoo/funtoo-current-hardened/pure64/amd64-jaguar-pure64
  0  2015-02-18 /home/mirror/funtoo/funtoo-stable/x86-64bit/amd64-jaguar
  0  2015-02-18 /home/mirror/funtoo/funtoo-stable/pure64/amd64-jaguar-pure64
}}
  
On my AMD Jaguar build server, on Feb 20, 2015, this lists all the builds that {{c|buildrepo}} has been configured to manage. The first number on each line is a '''failcount''', which is the number of consecutive times that the build has failed. A zero value indicates that everything's okay. The failcount is an important feature of the advanced repository management features. Here are a number of behaviors that are implemented based on failcount:
  
* If {{c|buildbot.sh}} tries to build a stage and the build fails, the failcount is incremented.
* If the build succeeds for a particular build, the failcount is reset to zero.
* Builds with the lowest failcount are prioritized by {{c|buildrepo}} to build next, to steer towards builds that are more likely to complete successfully.
* Once the failcount reaches 3 for a particular build, it is removed from the build rotation.
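The failcount itself is simply a {{c|.control/.failcount}} file inside each build directory (the {{c|buildrepo zap}} output in the next section shows the exact paths), so you can inspect it by hand. The value shown below is only an example:

<console>
# ##i##cat /home/mirror/funtoo/funtoo-current/x86-64bit/amd64-jaguar/.control/.failcount
2
</console>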
  
=== Resetting Failcount ===
  
If a build has issues, the failcount for a build will reach 3, at which point it will be pulled out of build rotation. To clear failcount, so that these builds are attempted again -- possibly fixed by new updates to the Portage tree -- use {{c|buildrepo zap}}:
  
{{console|body=
# /root/metro/scripts/buildrepo zap
Removing /mnt/data/funtoo/funtoo-current/arm-32bit/armv7a_hardfp/.control/.failcount...
Removing /mnt/data/funtoo/funtoo-current/arm-32bit/armv6j_hardfp/.control/.failcount...
Removing /mnt/data/funtoo/funtoo-current/arm-32bit/armv5te/.control/.failcount...
}}
 
  
== Repository Maintenance ==
  
A couple of repository maintenance tools are provided:
  
* {{c|buildrepo digestgen}} will generate hash files for the archives in your repository, and clean up stale hashes.
* {{c|buildrepo index.xml}} will create an index.xml file at the root of your repository, listing all builds available.
* {{c|buildrepo clean}} will output a shell script that will remove old stages. No more than the three most recent stage builds for each build/arch/subarch are kept.
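Because {{c|buildrepo clean}} only prints the removal commands, a cautious workflow is to save the script, review it, and then run it. A small sketch (the temporary file name is arbitrary):

<console>
# ##i##/root/metro/scripts/buildrepo clean > /tmp/clean-stages.sh
# ##i##less /tmp/clean-stages.sh
# ##i##sh /tmp/clean-stages.sh
</console>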
  
== Distributed Repositories ==
  
In many situations, you will have a number of build servers, and each will build a subset of your master repository, and then upload builds to the master repository. This is an area of Metro that is being actively developed. For now, automated upload functionality is not enabled, but is expected to be implemented in the relatively near future. However, it is possible to have your master repository differentiate between subarches that are built locally, and thus should be part of that system's {{c|buildbot}} build rotation, and those that are stored locally and built remotely. These builds should be cleaned when {{c|buildrepo clean}} is run, but should not enter the local build rotation. To set this up, modify {{f|/root/.buildbot}} and use the {{c|subarches}} and {{c|all_subarches}} variables:
  
{{file|name=/root/.buildbot|lang=python|desc=Excerpt of .buildbot config for master repository|body=
# subarches we are building locally:
  
subarches = (
        "pentium4",
        "athlon-xp",
        "corei7",
        "corei7-pure64",
        "generic_32",
        "i686",
        "amd64-k8",
        "amd64-k8-pure64",
        "core2_64",
        "core2_64-pure64",
        "generic_64",
        "generic_64-pure64",
)

# Things we need to clean, even if we may not be building:

all_subarches = subarches + (
        "atom_32",
        "atom_64",
        "atom_64-pure64",
        "amd64-k10",
        "amd64-k10-pure64",
        "amd64-bulldozer",
        "amd64-bulldozer-pure64",
        "amd64-steamroller",
        "amd64-steamroller-pure64",
        "amd64-piledriver",
        "amd64-piledriver-pure64",
        "amd64-jaguar",
        "amd64-jaguar-pure64",
        "intel64-haswell",
        "intel64-haswell-pure64",
        "intel64-ivybridge-pure64",
        "intel64-ivybridge",
        "armv7a_hardfp",
        "armv6j_hardfp",
        "armv5te"
)
}}
 
  
 
[[Category:HOWTO]]
 
[[Category:Metro]]
__TOC__

ZFS Install Guide

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).

For a quick tour of ZFS and a big picture of its common operations, you can consult the ZFS Fun page.

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undergone an upgrade from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3: at present, grub 2.02 is not able to deal with the new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only once grub is able to deal with the new parameters, in a future version. If you upgrade too early, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Downloading the ISO (With ZFS)

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85


Download System Rescue CD with ZFS: http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/

Creating a bootable USB from ISO (From a Linux Environment)

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

Warning

When booting into the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "failed to load module stack" error message.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will show you how to partition the disk manually: this way you get to create your own layout, you get a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and you can boot from RAID10, RAID5 (RAIDZ) and other pool layouts precisely because /boot lives outside the pool.
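For completeness, the whole-drive route is a single command pointed at the bare disk, run at the pool-creation step later on. This sketch is not used in the rest of this guide, and it assumes your bootloader can boot from the resulting pool:

# zpool create -f -o ashift=12 -m none -R /mnt/funtoo tank /dev/sda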

gdisk (GPT Style)

A Fresh Start:

First, let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# sgdisk -Z /dev/sda
Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

First open up the application:

# gdisk /dev/sda

Create Partition 1 (boot):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +250M ↵
Hex Code: 

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: 
First sector: 
Last sector: 
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵


Format your /boot partition

# mkfs.ext2 -m 1 /dev/sda1

Create the zpool

We will first create the pool. The pool will be named tank. Feel free to name your pool as you want. We will use the ashift=12 option, which is appropriate for hard drives with a 4096-byte sector size.

#   zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3 
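If you have additional disks and want one of the redundant layouts mentioned earlier, the same command accepts a vdev type followed by the member partitions. These are only sketches; /dev/sdb3 and /dev/sdc3 are assumed to be ZFS partitions prepared exactly like /dev/sda3:

# zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank mirror /dev/sda3 /dev/sdb3
# zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank raidz /dev/sda3 /dev/sdb3 /dev/sdc3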

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for your swap. We will also show you how to create some optional datasets as examples: /home, /usr/src, and /usr/portage.

Create some empty containers for organization purposes, and make the dataset that will hold /
# zfs create -p tank/funtoo
# zfs create -o mountpoint=/ tank/funtoo/root

Optional, but recommended datasets: /home
# zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# zfs create -o mountpoint=/usr/src tank/funtoo/src
# zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
# zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
# zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
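The cleanup section later runs swapoff /dev/zvol/tank/swap, so if you want swap on ZFS you also need a swap zvol. A sketch with an assumed size of 2G (pick a size that suits your machine):

# zfs create -o sync=always -o primarycache=metadata -b 4K -V 2G tank/swap
# mkswap /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap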

Installing Funtoo

Pre-Chroot

Go into the directory that you will chroot into
# cd /mnt/funtoo

Make a boot folder and mount your boot drive
# mkdir boot
# mount /dev/sda1 boot

Now download and extract the Funtoo stage3 ...


Note

It is strongly recommended to use the current version and a generic64 stage. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the profile may be changed using the eselect profile set ... command. If you create a snapshot beforehand, you can always come back to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).


Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

Bind the kernel related directories
# mount -t proc none proc
# mount --rbind /dev dev
# mount --rbind /sys sys

Copy network settings
# cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# mkdir etc/zfs
# cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# env -i HOME=/root TERM=$TERM chroot . bash -l
Note

How to create zpool.cache file?

If no zpool.cache file is available, the following command will create one:

# zpool set cachefile=/etc/zfs/zpool.cache tank


Downloading the Portage tree

Note

For an alternative way to do this, see Installing Portage From Snapshot.

Now it's time to install a copy of the Portage repository, which contains package scripts (ebuilds) that tell portage how to build and install thousands of different software packages. To create the Portage repository, simply run emerge --sync from within the chroot. This will automatically clone the portage tree from GitHub:

(chroot) # emerge --sync
Important

If you receive an error with the initial emerge --sync due to git protocol restrictions, change the SYNC variable in /etc/portage/make.conf:

SYNC="https://github.com/funtoo/ports-2012.git"
Note

To update the Funtoo Linux system just type:

(chroot) # emerge -auDN @world


Add filesystems to /etc/fstab

Before we compile and install our kernel in the next step, we will edit the /etc/fstab file. If we decide to install our kernel through portage, portage will need to know where our /boot is, so that it can place the files there.

Edit /etc/fstab:

/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1               /boot           ext2            defaults        0 2
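If you created a swap zvol earlier, it can be added here as well; this line is only a sketch and assumes the tank/swap zvol from the dataset step:

/dev/zvol/tank/swap     none            swap            sw              0 0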

Building kernel, initramfs and grub to work with zfs

Install genkernel and initial kernel build

We need to do an initial kernel build with genkernel:

# emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# genkernel kernel --no-clean --no-mountboot 

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

# emerge zfs

Check to make sure that the zfs tools are working. The zpool.cache file that you copied before should be displayed.

# zpool status
# zfs list

Add the zfs tools to openrc.

# rc-update add zfs boot

If everything worked, continue.

Install GRUB 2

Install grub2:

# echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# emerge grub

Now install grub to the drive itself (not a partition):

# grub-install /dev/sda

Emerge genkernel and initial kernel build

Install genkernel using:

# echo "sys-kernel/genkernel zfs" >> /etc/portage/package.use
# emerge genkernel

Now build the kernel and initramfs with --zfs:
# genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"


Note

During the build, check that the ZFS-related configuration is picked up.

If the build breaks, restart it again.

Configuring the Bootloader

When using genkernel you must add 'real_root=ZFS=<root>' and 'dozfs' to your boot parameters. Edit the entry for /etc/boot.conf:

/etc/boot.conf
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}

The command boot-update should take care of grub configuration:

Install boot-update (if it is missing):
#emerge boot-update

Run boot-update to update grub.cfg
#boot-update
Note

If boot-update fails, try this:

# grub-mkconfig -o /boot/grub/grub.cfg

Now you should have a new installation of the kernel, initramfs and grub which are zfs capable. The configuration files should be updated, and the system should come up during the next reboot.

Note

The luks integration works basically the same way.

Final configuration

Clean up and reboot

We are almost done, we are just going to clean up, set our root password, and unmount whatever we mounted and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot
Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

That should be enough to get your system to boot on ZFS.

After reboot

Forgot to reset password?

System Rescue CD

If you aren't using bliss-initramfs, then you can reboot back into your sysresccd and reset through there by mounting your drive, chrooting, and then typing passwd.

Example:

# zpool import -f -R /mnt/funtoo tank
# chroot /mnt/funtoo bash -l
# passwd
# exit
# zpool export -f tank
# reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type (This will only revert your / dataset while keeping the rest of your data intact):

# zfs rollback tank/funtoo/root@install
Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.

Troubleshooting

Starting from scratch

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

Destroy the pool and any snapshots and datasets it has
# zpool destroy -R -f tank

This deletes the files from /dev/sda1 so that even after we zap, recreating the drive in the exact sector
position and size will not give us access to the old files in this partition.
# mkfs.ext2 /dev/sda1
# sgdisk -Z /dev/sda

Now start the guide again :).


Starting again reusing the same disk partitions and the same pool

If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as done before.

import the pool reusing all existing datasets:
# zpool import -f -R /mnt/funtoo tank

Now you should wipe the previous installation off:

let's go to our base installation directory:
# cd /mnt/funtoo

and delete the old installation: 
# rm -rf *

Now start the guide again, at "Pre-Chroot"