Difference between pages "Zope HOWTO" and "ZFS Install Guide"

(lib/python/example/interfaces.py)
 
(Installing the ZFS userspace tools and kernel modules)
 
This page documents how to use Zope with Funtoo Experimental, which currently has good Zope support thanks to [[Progress Overlay Python]] integration.
+
== Introduction ==
  
== About Zope ==
+
This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the [[Funtoo_Linux_Installation|Regular Funtoo Installation]]. Follow the normal installation and only use this guide for steps 2, 3, and 8.
  
Zope is an Open Source application server framework written in Python. It has an interesting history which you should familiarize yourself with before starting Zope development, as it contains several interesting twists and turns.
+
=== Introduction to ZFS ===
  
=== Zope History ===
+
Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:
  
{{Note}} This HOWTO targets Zope 2.13, which includes Five. It is typically the version you should be using for new Zope projects.
+
* On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  
* There are two versions of Zope, Zope 2 and Zope 3. One might assume that Zope 3 is the version that people should use for new software development projects by default, but this is not the case. Most Zope-based projects continue to use Zope 2. Zope 3 was an attempt to redesign Zope 2 from scratch, and is completely different from Zope 2, but it was not adopted by the community.
+
* ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  
* There is also something called [http://codespeak.net/z3/five/ Five] (named because it is "2 + 3") that backports many of the new features of Zope 3 into the Zope 2 framework. Several projects use Zope 2 plus Five in order to get some of the newer features of Zope. Five was merged into mainline Zope 2, first appearing in Zope 2.8.
+
* ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
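The eviction policy mentioned above can be sketched in a few lines of Python. This is an illustrative toy, not the real kernel code: it shows the recency-only eviction of an LRU cache, which ARC improves on by additionally tracking access frequency:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny Least Recently Used cache. ZFS's ARC layers a second,
    frequency-based list on top of recency tracking, which is why it
    often achieves a better hit rate than plain LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key):
        """Return True on a hit; on a miss, load the key, evicting the
        least recently used entry if the cache is full."""
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as most recently used
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry
        self.entries[key] = None
        return False

cache = LRUCache(2)
hits = [cache.access(k) for k in ["a", "b", "a", "c", "b"]]
print(hits)  # [False, False, True, False, False] -- "b" was evicted by "c"
```

Note how re-accessing "a" keeps it resident, while "b", untouched in the meantime, is evicted; a frequency-aware policy like ARC can keep both hot items resident under more workloads.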
  
* You can learn more about the history of Zope 2, 3 and Five in the [http://svn.zope.org/Zope/trunk/src/Products/Five/README.txt?view=markup Five README].
+
* ZFS has the ZFS Intent Log (ZIL) and SLOG devices, which accelerate small synchronous write performance.
  
* To make things even more interesting, work on [http://docs.zope.org/zope2/releases/4.0/ Zope 4] is underway, and it will be based on 2.13 rather than 3.x. It includes a number of [http://docs.zope.org/zope2/releases/4.0/CHANGES.html#restructuring incompatible changes] with prior versions.
+
* ZFS handles internal fragmentation gracefully, so that it can be filled right up to 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
=== Zope Resources ===
+
  
Now that you understand what version of Zope you should be targeting (2.13), we can point you towards the correct documentation :)
+
* ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  
; '''[http://docs.zope.org/zope2/zope2book/ The Zope 2 Book]'''
+
* ZFS's send/receive implementation supports incremental updates when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
: This book provides a general introduction to Zope concepts and ZMI. It is a good place to start, but doesn't provide a direct introduction to Zope development. It's recommended that you skim through this book to familiarize yourself with Zope. It generally does not assume much prior knowledge about Web development or Python.
+
; '''[http://docs.zope.org/zope2/zdgbook/ Zope Developer's Guide]'''
+
: This guide will give you a better introduction to Zope development. It assumes you already know Python. Skip chapters 1 and 2 and start in [http://docs.zope.org/zope2/zdgbook/ComponentsAndInterfaces.html chapter 3], which covers components and interfaces. [http://docs.zope.org/zope2/zdgbook/Products.html Chapter 5] covers the creation of your first product.
+
; '''[http://codespeak.net/z3/five/manual.html The Five Manual]'''
+
: We're not done yet. There is a bunch of stuff in Zope 2.13 that is not in the official documentation. Namely, the stuff in Five.
+
; '''[http://docs.zope.org/ztkpackages.html ZTK Documentation]'''
+
: ZTK 
+
; '''ZCA'''
+
: [http://www.muthukadan.net/docs/zca.html A Comprehensive Guide to Zope Component Architecture] offers a good introduction to the programming concepts of ZCA. We also have a new page on [[Zope Component Architecture]] which will help you to understand the big picture of ZCA and why it is useful. ZCML ("Z-camel") is a part of ZCA and was introduced in Zope 3, so typically you will find ZCML documented within Zope 3 documentation and books.
+
; '''Content Components'''
+
: Views and Viewlets: [http://docs.zope.org/zope.viewlet/index.html This tutorial on viewlets] also contains some viewlet-related ZCML examples near the end. The "Content Component way" of developing in Zope seems to be a Zope 3 thing and tied to ZCML. Chapter 13+ of Stephan Richter's ''Zope 3 Developer's Handbook'' (book) seems to cover this quite well. You will probably also want to check out Philipp Weitershausen's ''Web Component Development with Zope 3'' (book).
+
; '''[http://wiki.zope.org/zope2/Zope2Wiki Zope 2 Wiki]'''
+
: Main wiki page for all things related to Zope 2.
+
; '''[http://docs.zope.org docs.zope.org]'''
+
: This is the main site for Zope documentation.
+
  
== First Steps ==
+
* ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  
First, you will need to emerge {{Package|net-zope/zope}}:
+
* ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
 +
 
 +
* ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.
 +
 
 +
The only area where btrfs is ahead of ZFS is small-file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).
 +
 
 +
=== Disclaimers ===
 +
 
 +
{{fancywarning|This guide is a work in progress. Expect some quirks.}}
 +
{{fancyimportant|'''Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms'''!}}
 +
 
 +
== Video Tutorial ==
 +
 
 +
As a companion to the install instructions below, a YouTube video ZFS install tutorial is now available:
 +
 
 +
{{#widget:YouTube|id=kxEdSXwU0ZI|width=640|height=360}}
 +
 
 +
== Downloading the ISO (With ZFS) ==
 +
In order to install Funtoo on ZFS, you will need an environment that provides the ZFS tools, so we will download a customized version of System Rescue CD with ZFS already included. When booting, use the "alternate" kernel; the ZFS module won't work with the default kernel.
 +
 
 +
<pre>
 +
Name: sysresccd-4.0.0_zfs_0.6.2.iso  (522 MB)
 +
Release Date: 2014-01-18
 +
md5sum 5a6530088e63b516765f78076a2e4859
 +
</pre>
 +
 
 +
 
 +
'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br />
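Before writing the ISO to a USB drive, it is worth verifying the download against the published hash. A sketch of the check, demonstrated on a throwaway file (substitute the real ISO filename and the md5sum listed above):

```shell
# Hypothetical demonstration of md5sum's -c (check) mode on a stand-in
# file; replace the filename and hash with the ISO and its published sum.
printf 'demo contents' > /tmp/demo.iso
md5sum /tmp/demo.iso > /tmp/demo.md5
md5sum -c /tmp/demo.md5    # reports "/tmp/demo.iso: OK" when the hash matches
```

For the real ISO you would put the published hash and filename on one line (`HASH  sysresccd-4.0.0_zfs_0.6.2.iso`) and feed that to `md5sum -c`.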
 +
 
 +
== Creating a bootable USB from ISO ==
 +
After you download the ISO, follow these steps to create a bootable USB:
  
 
<console>
 
<console>
# ##i## emerge --jobs=10 zope
+
Make a temporary directory
 +
# ##i##mkdir /tmp/loop
 +
 
 +
Mount the iso
 +
# ##i##mount -o ro,loop /root/sysresccd-4.0.0_zfs_0.6.2.iso /tmp/loop
 +
 
 +
Run the usb installer
 +
# ##i##/tmp/loop/usb_inst.sh
 
</console>
 
</console>
  
Zope is now installed.
+
That should be all you need to do to get your flash drive working.
  
== Project Skeleton ==
+
When you are booting into system rescue cd, make sure you select the '''alternative 64 bit kernel'''. ZFS support was specifically added to the alternative 64 bit kernel rather than the standard 64 bit kernel.
  
{{Note}} Zope should be run by a regular user account, not as the root user.
+
== Creating partitions ==
 +
There are two ways to partition your disk: you can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.
 +
 
 +
We will show you how to partition the disk '''manually''', because doing so lets you create your own layout and keep a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools). Having a separate /boot partition also lets you boot from RAID10, RAID5 (RAIDZ), and other pool layouts.
 +
 
 +
==== gdisk (GPT Style) ====
 +
 
 +
'''A Fresh Start''':
 +
 
 +
First, let's make sure that the disk is completely wiped of any previous disk labels and partitions.
 +
We will also assume that <tt>/dev/sda</tt> is the target drive.<br />
  
The first step in using Zope is to ensure that you are using a regular user account. Create a new directory called ''<tt>zope_test</tt>'':
 
 
<console>
 
<console>
$##bl## cd
+
# ##i##gdisk /dev/sda
$##bl## mkdir zope_test
+
 
 +
Command: ##i##x ↵
 +
Expert command: ##i##z ↵
 +
About to wipe out GPT on /dev/sda. Proceed?: ##i##y ↵
 +
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
 +
Blank out MBR?: ##i##y ↵
 
</console>
 
</console>
  
Now, enter the directory, and create an "instance", which is a set of files and directories that are used to contain a Zope project:
+
{{fancywarning|This is a destructive operation. Make sure you really don't want anything on this disk.}}
 +
 
 +
Now that we have a clean drive, we will create the new layout.
 +
 
 +
'''Create Partition 1''' (boot):
 
<console>
 
<console>
$##bl## cd zope_test
+
Command: ##i##n ↵
$##bl## /usr/lib/zope-2.13/bin/mkzopeinstance
+
Partition Number: ##i##↵
 +
First sector: ##i##↵
 +
Last sector: ##i##+250M ↵
 +
Hex Code: ##i##↵
 
</console>
 
</console>
  
You will see the following output, and will be prompted to answer a few questions:
+
'''Create Partition 2''' (BIOS Boot Partition):
 +
<console>Command: ##i##n ↵
 +
Partition Number: ##i##↵
 +
First sector: ##i##↵
 +
Last sector: ##i##+32M ↵
 +
Hex Code: ##i##EF02 ↵
 +
</console>
 +
 
 +
'''Create Partition 3''' (ZFS):
 +
<console>Command: ##i##n ↵
 +
Partition Number: ##i##↵
 +
First sector: ##i##↵
 +
Last sector: ##i##↵
 +
Hex Code: ##i##bf00 ↵
 +
 
 +
Command: ##i##p ↵
 +
 
 +
Number  Start (sector)    End (sector)  Size      Code  Name
 +
  1            2048          514047  250.0 MiB  8300  Linux filesystem
 +
  2          514048          579583  32.0 MiB    EF02  BIOS boot partition
 +
  3          579584      1953525134  931.2 GiB  BF00  Solaris root
 +
 
 +
Command: ##i##w ↵
 +
</console>
 +
 
 +
 
 +
=== Format your boot volume ===
 +
Format your separate <tt>/boot</tt> partition:
 
<console>
 
<console>
Please choose a directory in which you'd like to install
+
# ##i##mkfs.ext2 /dev/sda1
Zope "instance home" files such as database files, configuration
+
</console>
files, etc.
+
  
Directory: instance
+
=== Encryption (Optional) ===
Please choose a username and password for the initial user.
+
If you want encryption, then create your encrypted vault(s) now by doing the following:
These will be the credentials you use to initially manage
+
your new Zope instance.
+
  
Username: admin
+
<console>
Password: ****
+
# ##i##cryptsetup luksFormat /dev/sda3
Verify password: ****
+
# ##i##cryptsetup luksOpen /dev/sda3 vault_1
 
</console>
 
</console>
  
Now, we will start our Zope instance:
+
=== Create the zpool ===
 +
We will first create the pool. The pool will be named <tt>tank</tt>, and the disk will be aligned to 4096-byte sectors (using ashift=12):
 +
<console># ##i##zpool create -f -o ashift=12 -o cachefile= -O compression=on -m none -R /mnt/funtoo tank /dev/sda3</console>
 +
 
 +
{{fancyimportant|If you are using encrypted root, change '''/dev/sda3 to /dev/mapper/vault_1'''.}}
 +
 
 +
{{fancynote| '''ashift<nowiki>=</nowiki>12''' should be used if you have a newer, advanced-format disk that has a sector size of 4096 bytes. If you have an older disk with 512-byte sectors, use '''ashift<nowiki>=</nowiki>9''', or omit the option to allow auto-detection.}}
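If you are unsure which ashift applies, the kernel reports each disk's physical sector size under /sys. A quick way to list them (device names will vary per system; 4096 suggests ashift=12, 512 suggests ashift=9):

```shell
# Print the physical sector size of every block device the kernel knows
# about; the guard skips the loop body cleanly if the glob matches nothing.
for dev in /sys/block/*/queue/physical_block_size; do
    [ -e "$dev" ] || continue
    printf '%s: %s bytes\n' "$dev" "$(cat "$dev")"
done
```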
 +
 
 +
{{fancynote| If you have a previous pool that you would like to import, you can do a: '''zpool import -f -R /mnt/funtoo <pool_name>'''.}}
 +
 
 +
=== Create the zfs datasets ===
 +
We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for swap. We will also show you how to create some optional datasets: <tt>/home</tt>, <tt>/var</tt>, <tt>/usr/src</tt>, and <tt>/usr/portage</tt>.
 +
 
 
<console>
 
<console>
$##bl## cd instance
+
Create some empty containers for organization purposes, and make the dataset that will hold /
$##bl## bin/runzope
+
# ##i##zfs create -p tank/os/funtoo
 +
# ##i##zfs create -o mountpoint=/ tank/os/funtoo/root
 +
 
 +
Optional, but recommended datasets: /home
 +
# ##i##zfs create -o mountpoint=/home tank/os/funtoo/home
 +
 
 +
Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
 +
# ##i##zfs create -o mountpoint=/usr/src tank/os/funtoo/src
 +
# ##i##zfs create -o mountpoint=/usr/portage -o compression=off tank/os/funtoo/portage
 +
# ##i##zfs create -o mountpoint=/usr/portage/distfiles tank/os/funtoo/portage/distfiles
 +
# ##i##zfs create -o mountpoint=/usr/portage/packages tank/os/funtoo/portage/packages
 
</console>
 
</console>
  
Now that Zope is running, you can visit ''<tt>localhost:8080</tt>'' in your Web browser. You will see a nice introductory page to Zope.
+
=== Create your swap zvol ===
 +
'''Make your swap 1G larger than your RAM; an 8G machine would have 9G of swap. That is rather large, though: for machines with that much memory, 2G is usually plenty if you aren't having problems.'''
 +
<console>
 +
# ##i##zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G tank/swap
 +
</console>
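The sizing rule of thumb above is easy to compute from /proc/meminfo. A small sketch that prints a suggested <tt>-V</tt> value for the swap zvol (a guideline, not a requirement):

```shell
# Suggest swap = RAM + 1 GiB, per the rule of thumb above.
# MemTotal in /proc/meminfo is reported in KiB.
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_gib=$(( mem_kib / 1024 / 1024 + 1 ))
echo "Suggested swap zvol size: ${swap_gib}G"
```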
  
If you now go to the ''<tt>localhost:8080/manage</tt>'' URL, you will be prompted to log in. Enter the username and password you specified. You are now logged in to the ZMI (Zope Management Interface.)
+
=== Format your swap zvol ===
 +
<console>
 +
# ##i##mkswap -f /dev/zvol/tank/swap
 +
# ##i##swapon /dev/zvol/tank/swap
 +
</console>
  
You can stop your application by pressing Control-C. In the future, you can start and stop your Zope instance using the following commands:
+
Now we will continue to install Funtoo.
  
 +
== Installing Funtoo ==
 +
[[Funtoo_Linux_Installation|Download and extract the Funtoo stage3 and continue installation as normal.]]
 +
 +
Then once you've extracted the stage3, chroot into your new funtoo environment:
 
<console>
 
<console>
$##bl## zopectl start
+
Go into the directory that you will chroot into
$##bl## zopectl stop
+
# ##i##cd /mnt/funtoo
 +
 
 +
Mount your boot drive
 +
# ##i##mount /dev/sda1 /mnt/funtoo/boot
 +
 
 +
Bind the kernel related directories
 +
# ##i##mount -t proc none /mnt/funtoo/proc
 +
# ##i##mount --rbind /dev /mnt/funtoo/dev
 +
# ##i##mount --rbind /sys /mnt/funtoo/sys
 +
 
 +
Copy network settings
 +
# ##i##cp /etc/resolv.conf /mnt/funtoo/etc/
 +
 
 +
chroot into your new funtoo environment
 +
# ##i##env -i HOME=/root TERM=$TERM chroot /mnt/funtoo /bin/bash --login
 +
 
 +
Place your mountpoints into your /etc/mtab file
 +
# ##i##cat /proc/mounts > /etc/mtab
 +
 
 +
Sync your tree
 +
# ##i##emerge --sync
 
</console>
 
</console>
  
{{Note}} ''<tt>zopectl start</tt>'' will cause your instance to run in the background rather than consuming a shell console.
+
=== Add filesystems to /etc/fstab ===
  
== First Project ==
+
Before we continue to compile and/or install our kernel in the next step, we will edit the <tt>/etc/fstab</tt> file: if we decide to install our kernel through portage, portage will need to know where your <tt>/boot</tt> is so that it can place the files there. We also need to update <tt>/etc/mtab</tt> so our system knows what is mounted.
  
We will create a single very primitive Zope package, consisting of an Interface for a TODO class, and a TODO class.
+
{{File
 +
|/etc/fstab|<pre>
 +
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>
  
Create the following files and directories relative to your project root:
+
/dev/sda1              /boot          ext2            defaults        0 2
 +
/dev/zvol/tank/swap    none            swap            sw              0 0
 +
</pre>}}
  
* Create the directory <tt>lib/python/example</tt>.
+
== Kernel Configuration ==
* Create the file <tt>lib/python/example/__init__.py</tt> by typing <tt>touch lib/python/example/__init__.py</tt>.
+
To speed up this step, you can install "bliss-kernel", since it is already properly configured for ZFS and many other common setups. The kernel also comes precompiled and ready to go. To install {{Package|sys-kernel/bliss-kernel}}, type the following:
* Create these files:
+
  
=== <tt>etc/package-includes/example-configure.zcml</tt> ===
+
<console>
 +
# ##i##emerge -av bliss-kernel
 +
</console>
  
This file registers the <tt>example</tt> directory you created in <tt>lib/python</tt> as a ''package'', so that it is seen by Zope:
+
Now make sure that your <tt>/usr/src/linux</tt> symlink is pointing to this kernel by typing the following:
 +
<console>
 +
# ##i##eselect kernel list
 +
Available kernel symlink targets:
 +
[1]  linux-3.10.10-FB.01 *
 +
</console>
 +
You should see a star next to the bliss-kernel version you installed; in this case it was 3.10.10-FB.01. If it's not set, you can type '''eselect kernel set #'''.
  
<pre>
+
== Installing the ZFS userspace tools and kernel modules ==
<include package="example" />
+
Emerge {{Package|sys-fs/zfs}}, {{Package|sys-kernel/spl}}, and {{Package|sys-fs/zfs-kmod}}:
</pre>
+
<console># ##i##emerge -av zfs spl zfs-kmod</console>
 +
Check to make sure that the zfs tools are working; the <code>zpool.cache</code> file that you copied before should be displayed.
  
=== <tt>interfaces.py</tt> ===
+
{{Fancynote| SPL stands for Solaris Porting Layer}}
 +
<console>
 +
# ##i##zpool status
 +
# ##i##zfs list
 +
</console>
  
The following file defines the <tt>ITODO</tt> interface, and also uses some Zope Schema functions to define what kind of data we expect to store in objects that implement <tt>ITODO</tt>. Edit <code>lib/python/example/interfaces.py</code> with your favorite text editor:
+
If everything worked, continue.
  
<pre>
+
== Install the bootloader ==
from zope.interface import Interface
+
=== GRUB 2 ===
from zope.schema import List, Text, TextLine, Int
+
Before you do this, make sure this checklist is followed:
 +
* Installed kernel and kernel modules
 +
* Installed zfs package from the tree
 +
* <code>/dev</code>, <code>/proc</code>, <code>/sys</code> are mounted in the chroot environment
  
class ITODO(Interface):
+
Once all this is checked, let's install grub2. First we need to enable the "libzfs" USE flag so ZFS support is compiled into grub2.
    name = TextLine(title=u'Name', required=True)
+
    todo = List(title=u"TODO Items", required=True, value_type=TextLine(title=u'TODO'))
+
    daysleft = Int(title=u'Days left to complete', required=True)
+
    description = Text(title=u'Description', required=True)
+
</pre>
+
  
=== <tt>lib/python/example/TODO.py</tt> ===
+
<console># ##i##echo "sys-boot/grub libzfs" >> /etc/portage/package.use</console>
  
Now, we define ''<tt>TODO</tt>'' to be a ''persistent'' object, meaning it can be stored in the ZODB. We specify that it implements our previously-defined ''<tt>ITODO</tt>'' interface, and provide reasonable defaults for all values when we create a new TODO object:
+
Then we will compile grub2:
<pre>
+
from persistent import Persistent
+
from zope.interface import implements
+
from example.interfaces import ITODO
+
  
class TODO(Persistent):
+
<console># ##i##emerge -av grub</console>
    implements(ITODO)
+
    name = u''
+
    todo = []
+
    daysleft = 0
+
    description = u''
+
</pre>
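For readers without a Zope environment handy, the defaults above can be mirrored in plain Python to see the data shape a <tt>TODO</tt> instance carries. This is an illustrative analogue only; it omits <tt>Persistent</tt> and the interface machinery entirely:

```python
# Plain-Python analogue of the persistent TODO class above (illustrative
# only -- no ZODB persistence, no zope.interface declarations).
class TODO:
    def __init__(self):
        self.name = ''          # TextLine in the real schema
        self.todo = []          # List of TextLine items
        self.daysleft = 0       # Int
        self.description = ''   # Text

item = TODO()
item.name = 'My TODOs'
item.todo = ['Do Laundry', 'Wash Dishes']
item.daysleft = 1
item.description = 'Things I need to do today.'
print(item.todo)  # ['Do Laundry', 'Wash Dishes']
```

Note that the defaults here are set per-instance in <tt>__init__</tt>; in the real Zope class they are class attributes, so a mutable default like <tt>todo = []</tt> is shared until an instance assigns its own list.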
+
  
=== <tt>lib/python/example/configure.zcml</tt> ===
+
Once this is done, you can check that grub is version 2.00 by running the following command:
 +
<console>
 +
# ##i##grub-install --version
 +
grub-install (GRUB) 2.00
 +
</console>
  
Create a minimal ''<tt>configure.zcml</tt>'' configuration file:
+
Now install {{Package|sys-boot/grub}} to the disk:
 
<console>
 
<console>
<configure xmlns="http://namespaces.zope.org/zope"
+
# ##i##grub-install --recheck /dev/sda
    xmlns:five="http://namespaces.zope.org/five"
+
    xmlns:browser="http://namespaces.zope.org/browser">
+
</configure>
+
 
</console>
 
</console>
  
== Debug Mode ==
+
You should receive the following message:
 +
<console>
 +
Installation finished. No error reported.
 +
</console>
 +
 
 +
If not, then go back to the above checklist.
 +
 
 +
=== LILO ===
 +
Before you do this, make sure the following checklist is followed:
 +
* <code>/dev</code>, <code>/proc</code> and <code>/sys</code> are mounted.
 +
* Installed the {{Package|sys-fs/zfs}} package from the tree.
 +
Once the above requirements are met, LILO can be installed.
 +
 
 +
Now we will install {{Package|sys-boot/lilo}}.
 +
<console># ##i##emerge -av sys-boot/lilo</console>
 +
Once the installation of LILO is complete we will need to edit the lilo.conf file.
 +
{{File
 +
|/etc/lilo.conf|<pre>
 +
boot=/dev/sda
 +
prompt
 +
timeout=4
 +
default=Funtoo
 +
 
 +
image=/boot/bzImage
 +
      label=Funtoo
 +
      read-only
 +
      append="root=tank/os/funtoo/root"
 +
      initrd=/boot/initramfs
 +
</pre>}}
 +
All that is left now is to install the bootcode to the MBR.
  
We can test our first project by entering debug mode:
+
This can be accomplished by running:
 +
<console># ##i##/sbin/lilo</console>
 +
If it is successful you should see:
 
<console>
 
<console>
$##bl## bin/zopectl debug
+
Warning: LBA32 addressing assumed
Starting debugger (the name "app" is bound to the top-level Zope object)
+
Added Funtoo + *
 +
One warning was issued
 
</console>
 
</console>
  
Now, let's try creating a new TODO object and writing it out to a ZODB database:
+
== Create the initramfs ==
 +
There are two ways to do this, you can use genkernel, or you can use my bliss initramfs creator. I will show you both.
 +
 
 +
=== genkernel ===
 
<console>
 
<console>
>>> from ZODB import FileStorage, DB
+
# ##i##emerge -av sys-kernel/genkernel
>>> storage = FileStorage.FileStorage('mydatabase.fs')
+
# You only need to add --luks if you used encryption
>>> db = DB(storage)
+
# ##i##genkernel --zfs --luks initramfs
>>> connection = db.open()
+
>>> import transaction
+
>>> root = connection.root()
+
>>> from example.TODO import TODO
+
>>> a = TODO()
+
>>> a.name = u'My TODOs'
+
>>> a.todo = [ u'Do Laundry', u'Wash Dishes' ]
+
>>> a.daysleft = 1
+
>>> a.description = u'Things I need to do today.'
+
>>> root[u'today'] = a
+
>>> transaction.commit()
+
 
</console>
 
</console>
 +
 +
=== Bliss Initramfs Creator ===
 +
If you are encrypting your drives, then add the "luks" use flag to your package.use before emerging:
 +
 +
<console>
 +
# ##i##echo "sys-kernel/bliss-initramfs luks" >> /etc/portage/package.use
 +
</console>
 +
 +
Now install the creator:
 +
 +
<console>
 +
# ##i##emerge bliss-initramfs
 +
</console>
 +
 +
 +
Then go into the install directory, run the script as root, and place it into /boot:
 +
<console># ##i##cd /opt/bliss-initramfs
 +
# ##i##./createInit
 +
# ##i##mv initrd-<kernel_name> /boot
 +
</console>
 +
'''<kernel_name>''' is the name of the kernel you selected in the initramfs creator, and also the name of the output file.
 +
 +
== Using boot-update ==
 +
=== /boot on separate partition ===
 +
If you created a separate non-ZFS partition for /boot, configuring boot-update is almost exactly the same as for a normal install, except that auto-detection of root does not work. You must tell boot-update what your root is.
 +
==== Genkernel ====
 +
If you're using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params.
 +
Example entry for boot.conf:
 +
<console>
 +
"Funtoo ZFS" {
 +
        kernel vmlinuz[-v]
 +
        initrd initramfs-genkernel-x86_64[-v]
 +
        params real_root=ZFS=tank/os/funtoo/root
 +
        params += dozfs=force
 +
        # Also add 'params += crypt_root=/dev/sda3' if you used encryption
 +
        # Adjust the above setting to your system if needed
 +
}
 +
</console>
 +
 +
==== Bliss Initramfs Creator ====
 +
If you used the Bliss Initramfs Creator then all you need to do is add 'root=<root>' to your params.
 +
Example entry for boot.conf:
 +
<console>
 +
"Funtoo ZFS" {
 +
        kernel vmlinuz[-v]
 +
        initrd initrd[-v]
 +
        params root=tank/os/funtoo/root quiet
 +
        # If you have an encrypted device with a regular passphrase,
 +
        # you can add the following line
 +
        params += enc_root=/dev/sda3 enc_type=pass
 +
}
 +
</console>
 +
 +
After editing /etc/boot.conf, you just need to run boot-update to update grub.cfg
 +
<console># ##i##boot-update</console>
 +
 +
=== /boot on ZFS ===
 +
TBC - pending update to boot-update to support this
 +
 +
== Final configuration ==
 +
=== Add the zfs tools to openrc ===
 +
<console># ##i##rc-update add zfs boot</console>
 +
 +
=== Clean up and reboot ===
 +
We are almost done: we will clean up, '''set our root password''', unmount whatever we mounted, and get out.
 +
 +
<console>
 +
Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
 +
# ##i##cd /
 +
# ##i##rm stage3-latest.tar.xz
 +
 +
Set your root password
 +
# ##i##passwd
 +
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!
 +
 +
Get out of the chroot environment
 +
# ##i##exit
 +
 +
Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
 +
# ##i##umount -l proc dev sys boot
 +
 +
Turn off the swap
 +
# ##i##swapoff /dev/zvol/tank/swap
 +
 +
Export the zpool
 +
# ##i##cd /
 +
# ##i##zpool export tank
 +
 +
Reboot
 +
# ##i##reboot
 +
</console>
 +
 +
{{fancyimportant|'''Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.'''}}
 +
 +
That should be enough to get your system to boot on ZFS.
 +
 +
== After reboot ==
 +
=== Create initial ZFS Snapshot ===
 +
Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.
 +
 +
To take the snapshot of your system, type the following:
 +
<console># ##i##zfs snapshot -r tank@install</console>
 +
 +
To see if your snapshot was taken, type:
 +
<console># ##i##zfs list -t snapshot</console>
 +
 +
If your machine ever fails and you need to get back to this state, just type (This will only revert your / dataset while keeping the rest of your data intact):
 +
<console># ##i##zfs rollback tank/os/funtoo/root@install</console>
 +
 +
{{fancyimportant|'''For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the [[ZFS_Fun|ZFS Fun]] page.'''}}
  
 
[[Category:HOWTO]]
 
[[Category:HOWTO]]
 +
[[Category:Filesystems]]
 
[[Category:Featured]]
 
[[Category:Featured]]
 +
 +
__NOTITLE__

Revision as of 21:55, January 28, 2014

Introduction

This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the Regular Funtoo Installation. Follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Last Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!

Video Tutorial

As a companion to the install instructions below, a YouTube video ZFS install tutorial is now available:

Downloading the ISO (With ZFS)

In order to install Funtoo on ZFS, you will need an environment that provides the ZFS tools, so we will download a customized version of System Rescue CD with ZFS already included. When booting, use the "alternate" kernel; the ZFS module won't work with the default kernel.

Name: sysresccd-4.0.0_zfs_0.6.2.iso   (522 MB)
Release Date: 2014-01-18
md5sum 5a6530088e63b516765f78076a2e4859


Download System Rescue CD with ZFS

Creating a bootable USB from ISO

After you download the ISO, follow these steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.0.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

When you are booting into System Rescue CD, make sure you select the alternative 64-bit kernel; ZFS support was specifically added to the alternative 64-bit kernel rather than the standard one.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.

We will show you how to partition the disk manually, because doing so lets you create your own layout and have your own separate /boot partition. A separate /boot is useful because not every bootloader supports booting from ZFS pools, and it lets you boot from RAID10, RAIDZ (RAID5-like), and other pool layouts.

gdisk (GPT Style)

A Fresh Start:

First, let's make sure the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# gdisk /dev/sda

Command: x ↵
Expert command: z ↵
About to wipe out GPT on /dev/sda. Proceed?: y ↵
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
Blank out MBR?: y ↵

Warning

This is a destructive operation. Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

Create Partition 1 (boot):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +250M ↵
Hex Code: 

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: 
First sector: 
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: 
First sector: 
Last sector: 
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵


Format your boot volume

Format your separate /boot partition:

# mkfs.ext2 /dev/sda1

Encryption (Optional)

If you want encryption, then create your encrypted vault(s) now by doing the following:

# cryptsetup luksFormat /dev/sda3
# cryptsetup luksOpen /dev/sda3 vault_1

Create the zpool

We will first create the pool. The pool will be named `tank` and the disk will be aligned to 4096-byte sectors (using ashift=12):

# zpool create -f -o ashift=12 -o cachefile= -O compression=on -m none -R /mnt/funtoo tank /dev/sda3

Important

If you are using encrypted root, change /dev/sda3 to /dev/mapper/vault_1.

Note

ashift=12 should be used if you have a newer Advanced Format disk with a sector size of 4096 bytes. If you have an older disk with 512-byte sectors, use ashift=9, or omit the option entirely and let ZFS auto-detect the sector size.
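
Since ashift is simply the base-2 logarithm of the sector size, the relationship can be checked with shell arithmetic. The blockdev call below is just an illustration of how you might query a disk's physical sector size; the device path is an assumption:

```shell
# ashift is log2(sector size): 2^9 = 512, 2^12 = 4096
echo "ashift=9  -> $((2**9)) byte sectors"
echo "ashift=12 -> $((2**12)) byte sectors"

# To query the physical sector size of a disk (example device path):
# blockdev --getpbsz /dev/sda
```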

Note

If you have a previous pool that you would like to import, you can import it with: zpool import -f -R /mnt/funtoo <pool_name>.

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and for your swap. We will also show you how to create some optional datasets: /home, /usr/src, and /usr/portage.

Create some empty containers for organization purposes, and make the dataset that will hold /
# zfs create -p tank/os/funtoo
# zfs create -o mountpoint=/ tank/os/funtoo/root

Optional, but recommended datasets: /home
# zfs create -o mountpoint=/home tank/os/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# zfs create -o mountpoint=/usr/src tank/os/funtoo/src
# zfs create -o mountpoint=/usr/portage -o compression=off tank/os/funtoo/portage
# zfs create -o mountpoint=/usr/portage/distfiles tank/os/funtoo/portage/distfiles
# zfs create -o mountpoint=/usr/portage/packages tank/os/funtoo/portage/packages

Create your swap zvol

Make your swap zvol 1G larger than your RAM, so an 8G machine would have 9G of swap (though this is rather large; on machines with that much memory, 2G is usually enough if you don't run into problems). Note that the example below creates a 1G zvol; adjust -V to the size you want.

# zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G tank/swap
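
As a sanity check for the sizing rule above, you can compute a suggested swap size from /proc/meminfo. This is just a sketch; the resulting figure is what you would pass to -V:

```shell
# Total RAM in kilobytes, from /proc/meminfo
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

# Round up to whole gigabytes, then add 1G per the rule of thumb
ram_gb=$(( (ram_kb + 1048575) / 1048576 ))
swap_gb=$(( ram_gb + 1 ))

echo "RAM: ${ram_gb}G -> suggested swap zvol size: ${swap_gb}G"
```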

Format your swap zvol

# mkswap -f /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap

Now we will continue to install funtoo.

Installing Funtoo

Download and extract the Funtoo stage3 and continue installation as normal.

Then once you've extracted the stage3, chroot into your new funtoo environment:

Go into the directory that you will chroot into
# cd /mnt/funtoo

Mount your boot drive
# mount /dev/sda1 /mnt/funtoo/boot

Bind the kernel related directories
# mount -t proc none /mnt/funtoo/proc
# mount --rbind /dev /mnt/funtoo/dev
# mount --rbind /sys /mnt/funtoo/sys

Copy network settings
# cp /etc/resolv.conf /mnt/funtoo/etc/

chroot into your new funtoo environment
# env -i HOME=/root TERM=$TERM chroot /mnt/funtoo /bin/bash --login

Place your mountpoints into your /etc/mtab file
# cat /proc/mounts > /etc/mtab

Sync your tree
# emerge --sync

Add filesystems to /etc/fstab

Before we continue to compile and install our kernel in the next step, we will edit the /etc/fstab file, because if we install our kernel through portage, portage will need to know where your /boot is so that it can place the files there. We also need to update /etc/mtab so our system knows what is mounted.

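
As an illustration, a minimal /etc/fstab for the layout used in this guide might look like the following. This is a sketch based on the partitions created earlier (/dev/sda1 as the ext2 /boot, and the tank/swap zvol); adjust the device names to your system. The ZFS datasets themselves are mounted by the zfs service using their mountpoint properties, so they do not need fstab entries:

```
# <fs>                 <mountpoint>   <type>   <opts>      <dump/pass>
/dev/sda1              /boot          ext2     defaults    0 2
/dev/zvol/tank/swap    none           swap     sw          0 0
```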

Kernel Configuration

To speed up this step, you can install "bliss-kernel", since it is already configured for ZFS and many other common options, and comes precompiled and ready to go. To install sys-kernel/bliss-kernel, type the following:

# emerge -av bliss-kernel

Now make sure that your /usr/src/linux symlink is pointing to this kernel by typing the following:

# eselect kernel list
Available kernel symlink targets:
[1]   linux-3.10.10-FB.01 *

You should see a star next to the bliss-kernel version you installed; in this case it was 3.10.10-FB.01. If it's not set, you can set it with eselect kernel set <number>.

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs, sys-kernel/spl, and sys-fs/zfs-kmod:

# emerge -av zfs spl zfs-kmod

Check to make sure that the ZFS tools are working; the pool you created earlier should be displayed:

Note

SPL stands for Solaris Porting Layer.
# zpool status
# zfs list

If everything worked, continue.

Install the bootloader

GRUB 2

Before you do this, make sure this checklist is followed:

  • Installed kernel and kernel modules
  • Installed zfs package from the tree
  • /dev, /proc, /sys are mounted in the chroot environment

Once all this is checked, let's install GRUB 2. First we need to enable the "libzfs" USE flag so that ZFS support is compiled into GRUB 2.

# echo "sys-boot/grub libzfs" >> /etc/portage/package.use

Then we will compile grub2:

# emerge -av grub

Once this is done, you can check that GRUB is version 2.00 by running the following command:

# grub-install --version
grub-install (GRUB) 2.00

Now install the GRUB boot code to the disk:

# grub-install --recheck /dev/sda

You should receive the following message:

Installation finished. No error reported.

If not, then go back to the above checklist.

LILO

Before you do this, make sure the following checklist is followed:

  • /dev, /proc and /sys are mounted.
  • Installed the sys-fs/zfs package from the tree.

Once the above requirements are met, LILO can be installed:

# emerge -av sys-boot/lilo

Once the installation of LILO is complete, we will need to edit the lilo.conf file.

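
As an illustration, a lilo.conf for this layout might look roughly like the following. This is a hedged sketch: the kernel and initramfs file names are assumptions based on the bliss-kernel version used earlier, and the root parameter follows the Bliss Initramfs Creator convention shown later in this guide. Adjust everything to your system:

```
boot=/dev/sda                              # install boot code to the MBR of this disk
prompt
timeout=50
default=Funtoo

image=/boot/vmlinuz-3.10.10-FB.01          # assumed kernel file name
        label=Funtoo
        initrd=/boot/initrd-3.10.10-FB.01  # assumed initramfs file name
        append="root=tank/os/funtoo/root quiet"
        read-only
```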

All that is left now is to install the bootcode to the MBR.

This can be accomplished by running:

# /sbin/lilo

If it is successful you should see:

Warning: LBA32 addressing assumed
Added Funtoo + *
One warning was issued

Create the initramfs

There are two ways to do this: you can use genkernel, or you can use the Bliss Initramfs Creator. We will show you both.

genkernel

# emerge -av sys-kernel/genkernel

You only need to add --luks if you used encryption:

# genkernel --zfs --luks initramfs

Bliss Initramfs Creator

If you are encrypting your drives, then add the "luks" use flag to your package.use before emerging:

# echo "sys-kernel/bliss-initramfs luks" >> /etc/portage/package.use

Now install the creator:

# emerge bliss-initramfs


Then go into the install directory, run the script as root, and place it into /boot:

# cd /opt/bliss-initramfs
# ./createInit
# mv initrd-<kernel_name> /boot

<kernel_name> is the kernel you selected in the initramfs creator, and forms part of the name of the output file.

Using boot-update

/boot on separate partition

If you created a separate non-ZFS partition for /boot, then configuring boot-update is almost exactly the same as for a normal install, except that auto-detection of the root filesystem does not work: you must tell boot-update what your root is.

Genkernel

If you're using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params. Example entry for /etc/boot.conf:

"Funtoo ZFS" {
        kernel vmlinuz[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/os/funtoo/root
        params += dozfs=force
        # Also add 'params += crypt_root=/dev/sda3' if you used encryption
        # Adjust the above setting to your system if needed
}

Bliss Initramfs Creator

If you used the Bliss Initramfs Creator, then all you need to do is add 'root=<root>' to your params. Example entry for /etc/boot.conf:

"Funtoo ZFS" {
        kernel vmlinuz[-v]
        initrd initrd[-v]
        params root=tank/os/funtoo/root quiet
        # If you have an encrypted device with a regular passphrase,
        # you can add the following line
        params += enc_root=/dev/sda3 enc_type=pass
}

After editing /etc/boot.conf, you just need to run boot-update to update grub.cfg:

# boot-update

/boot on ZFS

TBC - pending update to boot-update to support this

Final configuration

Add the zfs tools to openrc

# rc-update add zfs boot

Clean up and reboot

We are almost done. We will clean up, set our root password, unmount whatever we mounted, and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password; you won't see what you are typing (for security reasons), but it is being entered.

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot

Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

That should be enough to get your system to boot on ZFS.

After reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configuration. Once you have everything the way you like it, take a snapshot of your system. You can use this snapshot to revert back to this state if anything ever happens to your system down the road. Snapshots are cheap and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type the following (this will only revert your / dataset while keeping the rest of your data intact):

# zfs rollback tank/os/funtoo/root@install

Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.