Difference between pages "User:Pytony/Home Server Journey" and "ZFS Install Guide"

== Home Server Journey ==

=== Day 0 ===

I am thinking about setting up a local server that would mainly serve as a NAS, but also provide some other services I could reach at home, from my office, or anywhere else. I would also like to use it as a media center, binding any audio source (jack, bluetooth, internal storage, USB peripherals, network, ...) to the amplifier. Most services (especially those I won't be the only user of) would be controllable via a mobile-friendly web interface.

This is the base idea for this beginning journey. :)

== Introduction ==

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the [[Funtoo_Linux_Installation|Regular Funtoo Installation]]. Follow the normal installation and only use this guide for steps 2, 3, and 8.

=== Introduction to ZFS ===

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:
  
* On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux, and you need to build the latest kernel sources to get the latest fixes.
* ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
* ZFS has the Adaptive Replacement Cache (ARC) replacement algorithm, while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
* ZFS has the ZFS Intent Log and SLOG devices, which accelerate small synchronous write performance.
* ZFS handles internal fragmentation gracefully, such that you can fill it up to 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
* ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
* ZFS' send/receive implementation supports incremental updates when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
* ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
* ZFS datasets have a hierarchical namespace, while btrfs subvolumes have a flat namespace.
* ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).

For a quick tour of ZFS and a big picture of its common operations, you can consult the [[ZFS Fun]] page.

=== Day 1 ===

Time to find the hardware... My main criteria were the price, the size of the case, and power consumption. To satisfy these criteria, I decided to look for a mini-ITX motherboard with an integrated CPU, supplied through ATX. My other criteria for the motherboard were:

* At least 1 HDMI output (ideally 2)
* At least 1 optical S/PDIF output
* At least 4 SATA III ports (ideally 6)
* At least 1 Gbps Ethernet controller
* Ideally some PCI slots
* Ideally hardware RAID support

Originally, I chose the [http://www.asrock.com/mb/Intel/N3700-ITX/ ASRock N3700-ITX], but it was not in stock and the restocking date was unknown. I decided to wait, but finally I found the [http://www.asrock.com/mb/Intel/N3150-ITX/ ASRock N3150-ITX], which was $30 cheaper and very close to my original choice. One thing I especially appreciated about these motherboards is that the processor is fanless, which I think is good for power consumption and silence.

For this kind of server I don't think RAM is essential, so I just took 2×2GB [http://www.crucial.com/usa/en/ct25664bf160b Crucial CT25664BF160B].

For the storage, I first thought about 3 or 4 HDDs in RAID 5 or 6. But I remembered my price criterion and decided to start with 1×2TB HDD and add some more in the future. I chose a [http://www.wdc.com/en/products/products.aspx?id=780 WD Green WD20EZRX] (mainly because there was $20 off as it was a reconditioned product).

I wanted a silent, modular 350-450W power supply, at least 80Plus Gold certified. The choice was quite limited and I found myself with a [http://www.corsair.com/en-us/cs-series-modular-cs450m-450-watt-80-plus-gold-certified-psu Corsair CS450M].

Finally, I looked for a small case, ideally white in order to break with all that black furniture and stuff in my living room, and not ugly. My choice fell on a [http://www.coolermaster.com/case/mini-itx/elite-120-advanced-white/ Cooler Master Elite 120 Advanced white].

=== Day 2 ===

My excitement when I received all that stuff quickly dissipated when I realized the stand-off screws were missing from the case. Without them I was unable to mount the motherboard.

=== Day 3 ===

A week later I received the missing screws I had requested. At last I could assemble my server. I originally planned a fanless system, but I noticed there were two fans in the case, so I plugged them in, just in case.

The first time I turned on the computer, nothing happened. :( When checking the cable connections again, I noticed I had plugged the system panel header onto the COM port header by mistake while rearranging the cables in the case. Fortunately this didn't damage any component, and everything worked fine the second time I turned on the computer.
=== Disclaimers ===

{{fancywarning|This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undertaken an upgrade - from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3. At present, grub 2.02 is not able to deal with the new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only once grub is able to deal with it - in a future version. If you upgrade anyway, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!}}

{{fancyimportant|'''Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms'''!}}

== Downloading the ISO (With ZFS) ==

In order to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

<pre>
Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum: 01f4e6929247d54db77ab7be4d156d85
</pre>

'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br />

=== Day 4 ===

Now it's time to install the best GNU/Linux distribution in the world: Funtoo. =)

I wanted to try ZFS and build my own initramfs for the first time. But I also wanted to get it working as soon as possible. So I decided to stick with the [[ZFS Install Guide]] and use genkernel to configure the kernel and initramfs, knowing that this doesn't prevent me from building an alternate kernel/initramfs later.

Bleh, got a kernel panic after reboot. =( I think there are a couple of things I did wrong:

* I didn't boot sysresccd via UEFI, even though I partitioned my disk using <code>gdisk</code>
* I didn't heed this warning: "''Today is 2015-05-12. ZFS has undertaken an upgrade - from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3.''"
* I didn't heed this warning: "''When booting into the ISO, Make sure that you select the "Alternate 64 bit kernel (altker64)".''"

So let's try again with the <code>funtoo-stable-hardened/pure64/generic_64-pure64</code> stage3 (nothing related to the kernel panic here, but I think this will be better than my original choice, <code>funtoo-current/pure64/intel64-silvermont-pure64</code>).

And... it failed again. :) <code>grub-install: error: cannot find EFI directory</code>. I didn't do extensive research about this error; I'm pretty sure it is due to something I did wrong at the beginning, so I'm going to start over. However, I'll first try to do a "standard" installation. I thought about using LVM instead of ZFS (I also learned ZFS is a bit greedy in terms of RAM, so this is probably better anyway), but there are a couple of things I am not used to: UEFI installation, initramfs, using genkernel to build a kernel, ... I think the first thing to do is to successfully build a system booting with UEFI.

=== Day 5 ===

OK, the standard install worked with UEFI. I accidentally made the <code>/boot</code> partition <code>ext2</code> instead of <code>vfat</code>; fortunately it was quite easy to fix afterwards. I think it's one of the mistakes I made in the previous installs. So let's try again, taking special care with the UEFI steps.
  
== Creating a bootable USB from ISO (From a Linux Environment) ==

After you download the iso, you can do the following steps to create a bootable USB:

<console>
Make a temporary directory
# ##i##mkdir /tmp/loop

Mount the iso
# ##i##mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# ##i##/tmp/loop/usb_inst.sh
</console>

That should be all you need to do to get your flash drive working.

== Booting the ISO ==

{{fancywarning|'''When booting into the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "failed to load module stack" error message.'''}}

== Creating partitions ==

There are two ways to partition your disk: you can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will show you how to partition the disk '''manually''', because a manual layout lets you create your own separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools) and lets you boot RAID10, RAID5 (RAIDZ) and other pool layouts, thanks to that separate /boot partition.

==== gdisk (GPT Style) ====

'''A Fresh Start''':

First let's make sure the disk is completely wiped of any previous disk labels and partitions. We will also assume that <tt>/dev/sda</tt> is the target drive.<br />

<console>
# ##i##sgdisk -Z /dev/sda
</console>

{{fancywarning|This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.}}

Now that we have a clean drive, we will create the new layout.

First open up the application:

<console>
# ##i##gdisk /dev/sda
</console>

'''Create Partition 1''' (boot):
<console>
Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##+250M ↵
Hex Code: ##i##↵
</console>

'''Create Partition 2''' (BIOS Boot Partition):
<console>Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##+32M ↵
Hex Code: ##i##EF02 ↵
</console>

'''Create Partition 3''' (ZFS):
<console>Command: ##i##n ↵
Partition Number: ##i##↵
First sector: ##i##↵
Last sector: ##i##↵
Hex Code: ##i##bf00 ↵

Command: ##i##p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB  8300  Linux filesystem
   2          514048          579583   32.0 MiB   EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB  BF00  Solaris root

Command: ##i##w ↵
</console>

I made a 500MB EFI partition and left all the remaining space for ZFS:

{{console|body=
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): E2066145-69F3-46DD-8329-6DC3D3094EB2
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1026047   500.0 MiB  EF00  EFI System
   2         1026048      3907029134   1.8 TiB    BF00  Solaris root
}}

And here is how I created the filesystems, the pool and the datasets:
  
{{console|body=
root@sysresccd /root % mkfs.vfat -F 32 /dev/sda1
mkfs.fat 3.0.22 (2013-07-19)
root@sysresccd /root % zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo toast /dev/sda2
root@sysresccd /root % zfs create -p toast/funtoo
root@sysresccd /root % cd /mnt
root@sysresccd /mnt % zfs create -o mountpoint=/ toast/funtoo/root
root@sysresccd /mnt % zfs create -o mountpoint=/home toast/funtoo/home
root@sysresccd /mnt % zfs create toast/swap -V 8G -b 4K
root@sysresccd /mnt % mkswap /dev/toast/swap
Setting up swapspace version 1, size = 8388604 KiB
no label, UUID=dffed32f-f0f0-4e9c-b405-9a82b1e30805
root@sysresccd /mnt % swapon /dev/toast/swap
root@sysresccd /mnt % zfs create -o mountpoint=/opt toast/funtoo/opt
root@sysresccd /mnt % zfs create -o mountpoint=/usr toast/funtoo/usr
root@sysresccd /mnt % zfs create -o mountpoint=/var toast/funtoo/var
root@sysresccd /mnt % zfs create -o mountpoint=/tmp toast/funtoo/tmp
root@sysresccd /mnt % zfs create -o mountpoint=/var/tmp toast/funtoo/var/tmp
root@sysresccd /mnt % zfs create -o mountpoint=/var/portage/distfiles toast/funtoo/var/portage-distfiles
root@sysresccd /mnt % zfs create -o mountpoint=/var/portage/packages toast/funtoo/var/portage-packages
root@sysresccd /mnt % cd funtoo
root@sysresccd /mnt/funtoo % chmod 1777 var/tmp
root@sysresccd /mnt/funtoo % chmod 1777 tmp
}}
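A note on the <code>-b 4K</code> block size chosen for the swap zvol above: it is meant to match the kernel page size, which is what swap I/O is done in. You can check the page size on your own machine (a quick sketch; 4096 bytes is the usual value on amd64):

```shell
# Print the kernel page size in bytes; a swap zvol performs best when its
# block size matches this value (4096 = 4K on typical amd64 systems).
getconf PAGESIZE
```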
 
  
Obviously, let's not forget the fstab:
=== Format your /boot partition ===

<console>
# ##i##mkfs.ext2 -m 1 /dev/sda1
</console>
=== Create the zpool ===

We will first create the pool. The pool will be named <code>tank</code>; feel free to name your pool as you like. We will use the <code>ashift=12</code> option, which is needed for hard drives with a 4096-byte sector size.

<console># ##i##zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3</console>
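The <code>ashift</code> value is the base-2 logarithm of the sector size ZFS will use for the pool; a quick sanity check of the arithmetic:

```shell
# ashift=9  -> 2^9  = 512-byte sectors (older drives)
# ashift=12 -> 2^12 = 4096-byte sectors ("Advanced Format" drives)
echo $((1 << 9))
echo $((1 << 12))
```

Getting this wrong (e.g. leaving a 4K-sector drive at ashift=9) cannot be corrected after pool creation without rebuilding the pool, which is why the guide sets it explicitly.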
=== Create the zfs datasets ===

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets: one for the OS (/) and one for swap. We will also show you how to create some optional datasets: <code>/home</code>, <code>/usr/src</code>, and <code>/usr/portage</code>. Note that these datasets are examples only and not strictly required.

<console>
Create some empty containers for organization purposes, and make the dataset that will hold /
# ##i##zfs create -p tank/funtoo
# ##i##zfs create -o mountpoint=/ tank/funtoo/root

Optional, create swap
# ##i##zfs create tank/swap -V 2G -b 4K
# ##i##mkswap /dev/tank/swap
# ##i##swapon /dev/tank/swap

Optional, but recommended datasets: /home
# ##i##zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# ##i##zfs create -o mountpoint=/usr/src tank/funtoo/src
# ##i##zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
# ##i##zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
# ##i##zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
</console>
== Installing Funtoo ==

=== Pre-Chroot ===

<console>
Go into the directory that you will chroot into
# ##i##cd /mnt/funtoo

Make a boot folder and mount your boot drive
# ##i##mkdir boot
# ##i##mount /dev/sda1 boot
</console>

[[Funtoo_Linux_Installation|Now download and extract the Funtoo stage3 ...]]
{{fancynote|It is strongly recommended to use the current version and the generic64 stage. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the <code>eselect profile set ...</code> command. If you create a snapshot beforehand, you can always come back to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).}}

Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

<console>
Bind the kernel related directories
# ##i##mount -t proc none proc
# ##i##mount --rbind /dev dev
# ##i##mount --rbind /sys sys

Copy network settings
# ##i##cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# ##i##mkdir etc/zfs
# ##i##cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# ##i##env -i HOME=/root TERM=$TERM chroot . bash -l
</console>
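Why <code>env -i</code> on the chroot line? It starts the chrooted shell with an empty environment, so stray variables from the rescue system don't leak into the new system; only the variables passed explicitly (here <code>HOME</code> and <code>TERM</code>) survive. A small demonstration of the behavior:

```shell
# env -i wipes the environment; only explicitly passed variables survive.
env -i HOME=/root sh -c 'echo "$HOME"'    # prints: /root
```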
 
{{fancynote|How to create the zpool.cache file?}}
If no <code>zpool.cache</code> file is available, the following command will create one:
<console>
# ##i##zpool set cachefile=/etc/zfs/zpool.cache tank
</console>

{{:Install/PortageTree}}
=== Add filesystems to /etc/fstab ===

Before we continue to compile and/or install our kernel in the next step, we will edit the <code>/etc/fstab</code> file, because if we decide to install our kernel through portage, portage will need to know where our <code>/boot</code> is, so that it can place the files in there.

Edit <code>/etc/fstab</code>:
  
 
{{file|name=/etc/fstab|desc= |body=
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>
/dev/sda1               /boot           ext2            defaults        0 2

# If you set up a swap partition, you will have to add this line as well
/dev/zvol/tank/swap     none            swap            defaults        0 0
}}

For comparison, my own fstab (with a vfat /boot for UEFI and my <code>toast</code> pool) looks like this:

{{file|name=/etc/fstab|desc= |body=
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>
/dev/sda1               /boot           vfat            defaults        0 2
/dev/zvol/toast/swap    none            swap            defaults        0 0
}}
  
And I still kept the <code>funtoo-stable-hardened/pure64/generic_64-pure64</code> build.

Let's run <code>emerge -uDN --with-bdeps=y @world && genkernel kernel --no-clean --no-mountboot && emerge zfs</code> and go to sleep. :)

=== Day 6 ===

Reboot... And... it worked! :D Except that I forgot to set up my keymap, and I had to type my password in QWERTY while my keyboard is AZERTY. :P

I ran into trouble installing the samba server. I didn't know if I should use <code>security = user</code> or <code>security = share</code>. The default was <code>user</code> (as in most docs), but all the docs mentioned the <code>smbpasswd</code> utility to create new users, which was not provided by portage. Using <code>share</code> I could use smbclient to log in, but I was unable to mount CIFS partitions. Finally, I figured out <code>security = share</code> was deprecated, so I duckducked "gentoo smbpassword" and found I had to use <code>pdbedit</code> rather than <code>smbpasswd</code>. Now it works fine.

=== Day 7 ===

Today I tried to figure out how to control the fans. Unfortunately I could not find how to set the fan speed from <code>sysfs</code>. On my other computer, I can do it via <code>/sys/class/hwmon/hwmon3/pwm[1-3]</code>, but no such file is available in the sysfs of my server. Maybe a missing configuration in the kernel, but this is the default debian-sources configuration; I would rather say my motherboard does not expose such an interface to the OS. In the BIOS, I can configure the fan speed (from level 1 to level 9) but not turn the fans off. :/

I monitored the temperature under normal usage and full CPU usage. With both fans at the lowest speed level, the CPU temperature stays between 30 and 40°C. When running a compilation that keeps all processors working at 100%, the temperature rarely rises above 50°C. So I think I can remove both fans. I will try to do so, and keep monitoring the temperature.

=== Day 8 ===

Time to get interested in ZFS properties... Oops, I just noticed I created datasets for <code>/var/portage/{distfiles,packages}</code> rather than <code>/usr/portage/{distfiles,packages}</code>.

Let's fix this:

{{console|body=
###i## zfs rename toast/funtoo/var/portage-distfiles toast/funtoo/usr/portage-distfiles
###i## zfs rename toast/funtoo/var/portage-packages toast/funtoo/usr/portage-packages
###i## zfs set mountpoint=/usr/portage/packages toast/funtoo/usr/portage-packages
###i## zfs set mountpoint=/usr/portage/distfiles toast/funtoo/usr/portage-distfiles
cannot mount '/usr/portage/distfiles': directory is not empty
property may be set but unable to remount filesystem
###i## mkdir /tmp/usr-portage-distfiles
###i## mv /usr/portage/distfiles/* /tmp/usr-portage-distfiles
###i## zfs mount toast/funtoo/usr/portage-distfiles
###i## mv /tmp/usr-portage-distfiles/* /usr/portage/distfiles
###i## zfs list
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
toast                               21.0G  1.76T   136K  none
toast/funtoo                        12.5G  1.76T   136K  none
toast/funtoo/home                    248K  1.76T   248K  /home
toast/funtoo/opt                     136K  1.76T   136K  /opt
toast/funtoo/root                    250M  1.76T   250M  /
toast/funtoo/tmp                     160K  1.76T   160K  /tmp
toast/funtoo/usr                    12.2G  1.76T  12.2G  /usr
toast/funtoo/usr/portage-distfiles   136K  1.76T   136K  /usr/portage/distfiles
toast/funtoo/usr/portage-packages    136K  1.76T   136K  /usr/portage/packages
toast/funtoo/var                    76.2M  1.76T  69.3M  /var
toast/funtoo/var/tmp                6.84M  1.76T  6.84M  /var/tmp
toast/swap                          8.50G  1.77T  1.32G  -
}}

I also noticed there is a way to share datasets through samba. I tried to set up this feature but couldn't get it to work. I also ran into trouble trying to set the default file mode to 0660 and directory mode to 0770 on samba shared folders. My personal station keeps creating files with mode 0640 and directories with mode 0750. It is really annoying that the group does not have write access on a shared folder, and setting <code>umask</code> in my zshrc is not satisfying either. I tried to set up ACLs on the shared folder, but I can't get ACLs working on the client side. I think I'll have to learn more about ACLs and try again later.

== Building the kernel, initramfs and grub to work with ZFS ==

=== Install genkernel and initial kernel build ===

We need to install genkernel and build an initial kernel:

<console>
# ##i##emerge genkernel

Build the initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# ##i##genkernel kernel --no-clean --no-mountboot
</console>

=== Installing the ZFS userspace tools and kernel modules ===

Emerge {{Package|sys-fs/zfs}}. This package will bring in {{Package|sys-kernel/spl}} and {{Package|sys-fs/zfs-kmod}} as its dependencies:

<console>
# ##i##emerge zfs
</console>

Check to make sure that the zfs tools are working; the pool recorded in the <code>zpool.cache</code> file that you copied before should be displayed.

<console>
# ##i##zpool status
# ##i##zfs list
</console>

{{fancynote|If /etc/mtab is missing, these two commands will complain. In that case, solve it with:
<console>
# ##i##grep -v rootfs /proc/mounts > /etc/mtab
</console>}}

Add the zfs tools to openrc.
<console>
# ##i##rc-update add zfs-import boot
# ##i##rc-update add zfs-mount boot
# ##i##rc-update add zfs-share default
# ##i##rc-update add zfs-zed default
</console>

If everything worked, continue.

=== Install GRUB 2 ===

Install grub2:
<console>
# ##i##echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# ##i##emerge grub
</console>

Now install grub to the drive itself (not a partition):
<console>
# ##i##grub-install /dev/sda
</console>

=== Initial kernel build ===

Now build the kernel and initramfs with <code>--zfs</code>:
<console>
# ##i##genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"
</console>

When using debian-sources, the following command may give better results:
<console>
# ##i##genkernel all --zfs --no-clean --no-mountboot --callback="emerge spl zfs-kmod zfs"
</console>

=== Configuring the Bootloader ===

When using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Edit <code>/etc/boot.conf</code>:

{{file|name=/etc/boot.conf|desc= |body=
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}
}}
The command <code>boot-update</code> should take care of the grub configuration:

<console>
Install boot-update (if it is missing):
###i## emerge boot-update

Run boot-update to update grub.cfg
###i## boot-update
</console>

{{fancynote|If <code>boot-update</code> fails, try this:
<console>
# ##i##grub-mkconfig -o /boot/grub/grub.cfg
</console>
}}

Now you should have a new installation of the kernel, initramfs and grub which are ZFS capable. The configuration files should be updated, and the system should come up during the next reboot.

{{fancynote|The <code>luks</code> integration works basically the same way.}}
== Final configuration ==

=== Clean up and reboot ===

We are almost done: we are just going to clean up, '''set our root password''', unmount whatever we mounted, and get out.

<console>
Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# ##i##cd /
# ##i##rm stage3-latest.tar.xz

Set your root password
# ##i##passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# ##i##exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# ##i##umount -l proc dev sys boot

Turn off the swap
# ##i##swapoff /dev/zvol/tank/swap

Export the zpool
# ##i##cd /
# ##i##zpool export tank

Reboot
# ##i##reboot
</console>

{{fancyimportant|'''Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.'''}}

And that should be enough to get your system to boot on ZFS.
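Once the machine is back up, a quick sanity check that the root filesystem really is ZFS can't hurt. A sketch (the dataset name follows this guide's <code>tank/funtoo/root</code> layout; adjust if you named things differently):

```shell
# Both commands should show the root dataset mounted at /:
mount | grep ' / '
zfs list -o name,mountpoint tank/funtoo/root
```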
 
== After reboot ==

=== Forgot to reset password? ===

==== System Rescue CD ====

If you aren't using bliss-initramfs, you can reboot back into your sysresccd and reset the password from there by mounting your drive, chrooting, and then typing passwd.

Example:
<console>
# ##i##zpool import -f -R /mnt/funtoo tank
# ##i##chroot /mnt/funtoo bash -l
# ##i##passwd
# ##i##exit
# ##i##zpool export -f tank
# ##i##reboot
</console>
=== Create initial ZFS Snapshot ===

Continue to set up anything you need in terms of /etc configuration. Once you have everything the way you like it, take a snapshot of your system. You will be able to use this snapshot to revert back to this state if anything ever happens to your system down the road. Snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:
<console># ##i##zfs snapshot -r tank@install</console>

To see if your snapshot was taken, type:
<console># ##i##zfs list -t snapshot</console>

If your machine ever fails and you need to get back to this state, just type (this will only revert your / dataset while keeping the rest of your data intact):
<console># ##i##zfs rollback tank/funtoo/root@install</console>

{{fancyimportant|'''For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the [[ZFS_Fun|ZFS Fun]] page.'''}}
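Snapshots also enable cheap off-machine backups via <code>zfs send</code>/<code>zfs receive</code>, and later sends can be incremental, as mentioned in the ZFS/btrfs comparison above. A sketch (the <code>backup</code> pool and the <code>@weekly1</code> snapshot name are illustrative, not part of this guide):

```shell
# Full initial copy of the @install snapshot to a second pool:
zfs send -R tank@install | zfs receive -F backup/tank

# Later, send only the changes between two snapshots:
zfs snapshot -r tank@weekly1
zfs send -R -i tank@install tank@weekly1 | zfs receive backup/tank
```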
 
== Troubleshooting ==

=== Starting from scratch ===

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

<console>
Destroy the pool and any snapshots and datasets it has
# ##i##zpool destroy -f tank

This deletes the files from /dev/sda1 so that even after we zap, recreating the drive in the exact sector
position and size will not give us access to the old files in this partition.
# ##i##mkfs.ext2 /dev/sda1
# ##i##sgdisk -Z /dev/sda
</console>

Now start the guide again :).
=== Starting again reusing the same disk partitions and the same pool ===

If your installation has gotten screwed up for whatever reason and you want to keep your pool named <code>tank</code>, then you should boot into the Rescue CD / USB as done before.

<console>
Import the pool, reusing all existing datasets:
# ##i##zpool import -f -R /mnt/funtoo tank
</console>

Now you should wipe the previous installation off:

<console>
Let's go to our base installation directory:
# ##i##cd /mnt/funtoo

and delete the old installation:
# ##i##rm -rf *
</console>

Now start the guide again, at "Pre-Chroot".

[[Category:HOWTO]]
[[Category:Filesystems]]
[[Category:Featured]]
[[Category:Install]]

__NOTITLE__

Latest revision as of 03:43, September 14, 2015

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Last Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).
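As a sketch of the zvol approach mentioned above (assuming the pool is named tank as in this guide, and that reiserfsprogs is installed; the zvol name and size are illustrative, not part of the guide's layout):

```shell
# Create a 4 GiB zvol with a small volume block size, suited to many small files
zfs create -V 4G -b 4K tank/portage-vol

# Put a small-file-friendly filesystem on top of the zvol
mkfs.reiserfs -q /dev/zvol/tank/portage-vol

# Mount it where the portage tree lives
mount /dev/zvol/tank/portage-vol /usr/portage
```

This trades a little administrative overhead for reiserfs-style small-file packing while the data still sits on ZFS storage underneath.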

For a quick tour of ZFS and a big picture of its common operations, you can consult the ZFS Fun page.

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undergone an upgrade from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3. At the present date, grub 2.02 is not able to deal with the new ZFS 0.6.4 pool features. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only once grub is able to deal with the new format (in a future version). If you upgrade too early, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Downloading the ISO (With ZFS)

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85
Download System Rescue CD with ZFS
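Before writing the ISO to a USB stick, it is worth verifying the download against the md5sum published above (a sketch; adjust the path to wherever you saved the ISO):

```shell
# Feed md5sum the published hash and the download path;
# "md5sum -c" prints "OK" and exits 0 on a match
echo "01f4e6929247d54db77ab7be4d156d85  /root/sysresccd-4.2.0_zfs_0.6.2.iso" | md5sum -c -
```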

Creating a bootable USB from ISO (From a Linux Environment)

After you download the iso, you can do the following steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

Warning

When booting the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "fail to load module stack" error message.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will show you how to partition the disk manually, because doing so lets you create your own layout and keep a separate /boot partition. A separate /boot is useful since not every bootloader supports booting from ZFS pools, and it lets you boot from RAID10, RAID5 (RAIDZ), and other pool layouts.

gdisk (GPT Style)

A Fresh Start:

First, let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will assume that /dev/sda is the target drive.

# sgdisk -Z /dev/sda
Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

First open up the application:

# gdisk /dev/sda

Create Partition 1 (boot):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +250M ↵
Hex Code: 

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: 
First sector: 
Last sector: 
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵
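The interactive gdisk session above can also be scripted with sgdisk, which is handy when partitioning several machines the same way (a sketch under the same assumptions as above: /dev/sda is the target drive, with the same sizes and type codes):

```shell
# Partition 1: 250 MiB for /boot (type 8300, the default)
sgdisk -n 1:0:+250M /dev/sda
# Partition 2: 32 MiB BIOS boot partition (type EF02)
sgdisk -n 2:0:+32M -t 2:EF02 /dev/sda
# Partition 3: the rest of the disk for ZFS (type BF00, Solaris root)
sgdisk -n 3:0:0 -t 3:BF00 /dev/sda
# Print the resulting table to double-check it matches the layout above
sgdisk -p /dev/sda
```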

Format your /boot partition

# mkfs.ext2 -m 1 /dev/sda1

Create the zpool

We will first create the pool. The pool will be named tank; feel free to name your pool as you want. We will use the ashift=12 option, which is appropriate for hard drives with a 4096-byte sector size.

#   zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3 
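You can verify that the pool was created with the intended sector size by dumping its cached configuration (a sketch; zdb is pointed at the cachefile we just wrote with the cachefile= option above):

```shell
# Show the cached pool configuration and pick out the ashift value;
# it should report ashift: 12
zdb -U /tmp/zpool.cache -C tank | grep ashift
```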

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets: one for the OS (/), and one for swap. We will also show you how to create some optional datasets: /home, /usr/src, and /usr/portage. Note that these datasets are examples only and are not strictly required.

Create some empty containers for organization purposes, and make the dataset that will hold /
#  zfs create -p tank/funtoo
#  zfs create -o mountpoint=/ tank/funtoo/root

Optional, Create swap
#  zfs create tank/swap -V 2G -b 4K
#  mkswap /dev/tank/swap
#  swapon /dev/tank/swap

Optional, but recommended datasets: /home
#  zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
#  zfs create -o mountpoint=/usr/src tank/funtoo/src
#  zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
#  zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
#  zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
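At this point you can confirm the datasets and their mountpoints look as intended (a sketch; the output will vary with which of the optional datasets you chose to create):

```shell
# List every dataset in the pool with its mountpoint and compression setting
zfs list -r -o name,mountpoint,compression tank
```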

Installing Funtoo

Pre-Chroot

Go into the directory that you will chroot into
# cd /mnt/funtoo

Make a boot folder and mount your boot drive
# mkdir boot
# mount /dev/sda1 boot

Now download and extract the Funtoo stage3 ...

Note

It is strongly recommended to use the current version and generic64. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the eselect profile set ... command. If you create a snapshot beforehand, you can always come back to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).

Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

Bind the kernel related directories
# mount -t proc none proc
# mount --rbind /dev dev
# mount --rbind /sys sys

Copy network settings
# cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# mkdir etc/zfs
# cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# env -i HOME=/root TERM=$TERM chroot . bash -l
Note

How do you create the zpool.cache file?

If no zpool.cache file is available, the following command will create one:

# zpool set cachefile=/etc/zfs/zpool.cache tank

Downloading the Portage tree

Note

For an alternative way to do this, see Installing Portage From Snapshot.

Now it's time to install a copy of the Portage repository, which contains package scripts (ebuilds) that tell portage how to build and install thousands of different software packages. To create the Portage repository, simply run emerge --sync from within the chroot. This will automatically clone the portage tree from GitHub:

(chroot) # emerge --sync
Important

If you receive the error with initial emerge --sync due to git protocol restrictions, change SYNC variable in /etc/portage/make.conf:

SYNC="https://github.com/funtoo/ports-2012.git"
Note

To update the Funtoo Linux system just type:

(chroot) # emerge -auDN @world

Add filesystems to /etc/fstab

Before we continue to compile and/or install our kernel in the next step, we will edit the /etc/fstab file. If we install our kernel through portage, portage will need to know where /boot is so that it can place the kernel files there.

Edit /etc/fstab:

/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1               /boot           ext2            defaults        0 2

# If you set up a swap partition, you will have to add this line as well
/dev/zvol/tank/swap     none            swap            defaults        0 0

Building kernel, initramfs and grub to work with zfs

Install genkernel and initial kernel build

We need to build a genkernel initially:

# emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# genkernel kernel --no-clean --no-mountboot 

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

# emerge zfs

Check to make sure that the zfs tools are working. The zpool.cache file that you copied before should be displayed.

# zpool status
# zfs list
Note

If /etc/mtab is missing, these two commands will complain. In that case, fix it with:

# grep -v rootfs /proc/mounts > /etc/mtab

Add the zfs tools to openrc.

# rc-update add zfs-import boot
# rc-update add zfs-mount boot
# rc-update add zfs-share default
# rc-update add zfs-zed default

If everything worked, continue.

Install GRUB 2

Install grub2:

# echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# emerge grub

Now install grub to the drive itself (not a partition):

# grub-install /dev/sda

Initial kernel build

Now build the kernel and initramfs with --zfs:

# genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"

Using the debian-sources, the following command may give better results:

# genkernel all --zfs --no-clean --no-mountboot --callback="emerge spl zfs-kmod zfs"

Configuring the Bootloader

When using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Edit /etc/boot.conf:

/etc/boot.conf
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}

The command boot-update should take care of grub configuration:

Install boot-update (if it is missing):
# emerge boot-update

Run boot-update to update grub.cfg
# boot-update
Note

If boot-update fails, try this:

# grub-mkconfig -o /boot/grub/grub.cfg

Now you should have a new installation of the kernel, initramfs and grub, all ZFS-capable. The configuration files should be updated, and the system should come up during the next reboot.

Note

The LUKS integration works basically the same way.

Final configuration

Clean up and reboot

We are almost done; we are just going to clean up, set our root password, unmount whatever we mounted, and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot
Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

That should be enough to get your system to boot on ZFS.

After reboot

Forgot to reset password?

System Rescue CD

If you aren't using bliss-initramfs, then you can reboot back into your sysresccd and reset through there by mounting your drive, chrooting, and then typing passwd.

Example:

# zpool import -f -R /mnt/funtoo tank
# chroot /mnt/funtoo bash -l
# passwd
# exit
# zpool export -f tank
# reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type (This will only revert your / dataset while keeping the rest of your data intact):

# zfs rollback tank/funtoo/root@install
Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.

Troubleshooting

Starting from scratch

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

Destroy the pool and any snapshots and datasets it has
# zpool destroy -R -f tank

Reformat /dev/sda1 so that, even after we zap the partition table, recreating a partition at the exact same sector position and size will not give access to the old files in this partition.
# mkfs.ext2 /dev/sda1
# sgdisk -Z /dev/sda

Now start the guide again :).

Starting again reusing the same disk partitions and the same pool

If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as done before.

import the pool reusing all existing datasets:
# zpool import -f -R /mnt/funtoo tank

Now you should wipe the previous installation off:

let's go to our base installation directory:
# cd /mnt/funtoo

and delete the old installation: 
# rm -rf *

Now start the guide again, at "Pre-Chroot"