{{Fancyimportant|This tutorial is undergoing a heavy revision to switch from ZFS Fuse to ZFS on Linux.}}
= Introduction =


== ZFS features and limitations ==


ZFS offers an impressive number of features, even putting aside its hybrid nature (both a filesystem and a volume manager -- zvol) covered in detail on [http://en.wikipedia.org/wiki/ZFS Wikipedia]. One of the most fundamental points to keep in mind about ZFS is that it '''targets legendary reliability in terms of preserving data integrity'''. ZFS uses several techniques to detect and repair (self-heal) corrupted data. Simply speaking, it makes aggressive use of checksums and relies on data redundancy; the price to pay is a bit more CPU processing power. However, the [http://en.wikipedia.org/wiki/ZFS Wikipedia article about ZFS] also mentions that it is strongly discouraged to use ZFS over classic RAID arrays, as ZFS then cannot control the data redundancy, thus ruining most of its benefits.


In short, ZFS has the following features (not exhaustive):


* Storage pool dividable in one or more logical storage entities.
* Plenty of space:
** 256 zettabytes per storage pool (2^64 storage pools max in a system).
** 16 exabytes max for a single file
** 2^48 entries max per directory
* Virtual block-device support over a ZFS pool (zvol) - extremely cool when used jointly with a RAID-Z volume
* Read-only snapshot support (it is possible to get a read-write copy of a snapshot; these are called clones)
* Encryption support (only at ZFS version 30 and later; ZFS version 31 is shipped with Oracle Solaris 11, so that version is mandatory if you plan to encrypt your ZFS datasets/pools)
* Built-in '''RAID-5-like capabilities on steroids known as [http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID-Z RAID-Z] and RAID-6-like capabilities known as RAID-Z2'''. RAID-Z3 (triple parity) also exists.
* Copy-on-Write transactional filesystem
* Meta-attributes support (properties) allowing you to easily drive the show, like "that directory is encrypted", "that directory is limited to 5GiB", "that directory is exported via NFS" and so on. Depending on what you define, ZFS does the job for you (see the sketch after this list)!
* Dynamic striping to optimize data throughput
* Variable block length 
* Data deduplication
* Automatic pool re-silvering
* Transparent data compression
* Transparent encryption (Solaris 11 and later only)
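To give a quick taste of that attribute-driven management (detailed later in this tutorial), here is a minimal sketch; the dataset name is a hypothetical example, and ''compression'', ''quota'' and ''sharenfs'' are standard ZFS properties (NFS/SMB integration depends on the platform, as discussed below):

<console>
###i## zfs set compression=on mypool/mydata
###i## zfs set quota=5G mypool/mydata
###i## zfs set sharenfs=on mypool/mydata
###i## zfs get compression,quota,sharenfs mypool/mydata
</console>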


Most notable limitations are:


* Lacks a feature ZFS developers know as "block pointer rewrite functionality" (planned to be developed); without it, ZFS currently suffers from not being able to do:
** Pool defragmentation (the COW techniques used in ZFS mitigate the problem)
** Pool resizing
** Re-applying data compression to existing data
** Adding an additional device to a RAID-Z/Z2/Z3 pool to increase its size (it is, however, possible to replace each of the disks composing a RAID-Z/Z2/Z3 in sequence)
* '''NOT A CLUSTERED FILESYSTEM''' like Lustre, GFS or OCFS2
* No data healing if used on a single device (corruption can still be detected); the workaround is to force data duplication on the drive
* No TRIM support (SSD devices)


== ZFS on well known operating systems ==


=== Linux ===


Although the source code of ZFS is open, its license (Sun CDDL) is incompatible with the license governing the Linux kernel (GNU GPL v2), thus preventing its direct integration. However, a couple of ports exist, although they suffer from maturity issues and missing features. As of this writing (February 2014) two known implementations exist:


* [http://zfs-fuse.net ZFS-fuse]: a totally userland implementation relying on FUSE. This implementation can now be considered defunct (as of February 2014). The original site of ZFS FUSE seems to have disappeared; nevertheless the source code is still available on [http://freecode.com/projects/zfs-fuse http://freecode.com/projects/zfs-fuse]. ZFS FUSE stalled at version 0.7.0 in 2011 and never really evolved since then.
* [http://zfsonlinux.org ZFS on Linux]: a kernel mode implementation of ZFS which supports a lot of ZFS features. The implementation is not as complete as it is under Solaris and its siblings like OpenIndiana (e.g. SMB integration is still missing, no encryption support...), but a lot of functionality is there. This is the implementation used for this article. As ZFS on Linux is an out-of-tree Linux kernel implementation, patches must be waited for after each Linux kernel release. ZFS on Linux currently supports zpool version 28 and '''since its version 0.6.2 is considered ready for production.'''


=== Solaris/OpenIndiana ===


* '''Oracle Solaris:''' remains the de facto reference platform for ZFS implementation: ZFS on this platform is now considered mature and usable on production systems. Solaris 11 uses ZFS even for its "system" pool (aka ''rpool''). A great advantage of this: it is now quite easy to revert the effect of a patch, provided a snapshot was taken just before applying it. In the "good old" times of Solaris 10 and before, reverting a patch was possible but could be tricky and complex when possible at all. ZFS is far from new in Solaris: it took root in 2005 and was then integrated in Solaris 10 6/06, introduced in June 2006.


* '''[http://openindiana.org OpenIndiana]:''' is based on the Illumos kernel (a derivative of the now defunct OpenSolaris) which aims to provide absolute binary compatibility with Sun/Oracle Solaris. It is worth mentioning that the Solaris kernel and the [https://www.illumos.org Illumos kernel] once shared the same code base; however, they have followed different paths since Oracle announced the discontinuation of OpenSolaris (August 13th 2010). Like Oracle Solaris, OpenIndiana uses ZFS for its system pool. The Illumos kernel's ZFS support lags a bit behind Oracle: it supports zpool version 28, whereas Oracle Solaris 11 has zpool version 31 support, data encryption being supported at zpool version 30.
=== *BSD ===
* '''FreeBSD''': ZFS has been present in FreeBSD since FreeBSD 7 (zpool version 6) and FreeBSD can boot from a ZFS volume (zfsboot). ZFS support has been vastly enhanced in FreeBSD 8.x (8.2 supports zpool version 15, 8.3 supports version 28), FreeBSD 9 and FreeBSD 10 (both support zpool version 28). ZFS in FreeBSD is now considered fully functional and mature. FreeBSD derivatives such as the popular [http://www.freenas.org FreeNAS] take benefit of ZFS and integrate it in their tools. The latter has, for example, support for zvols through its Web management interface (FreeNAS >= 8.0.1).
* '''NetBSD''': porting ZFS started as a GSoC project in 2007 and ZFS has been present in the NetBSD mainline since 2009 (zpool version 13).
* '''OpenBSD''': No ZFS support yet and not planned until Oracle changes some policies according to the project FAQ.
== ZFS alternatives ==
* WAFL seems to have severe limitations [http://unixconsult.org/wafl/ZFS%20vs%20WAFL.html] (the document is not dated); an interesting article also lies [http://blogs.netapp.com/dave/2008/12/is-wafl-a-files.html here]
* BTRFS is advancing every week but it still lacks features like the capability of emulating a virtual block device over a storage pool (zvol), and built-in support for RAID-5/6 is not complete yet (cf. [https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg29169.html Btrfs mailing list]). At the date of writing, it is still experimental, whereas ZFS is used on big production servers.
* VxFS has also been targeted by comparisons like [http://blogs.oracle.com/dom/entry/zfs_v_vxfs_iozone this one] (a bit [http://www.symantec.com/connect/blogs/suns-comparision-vxfs-and-zfs-scalability-flawed controversial]). VxFS has been known in the industry since 1993 and is known for its legendary flexibility. Symantec acquired VxFS and offers a basic version of it (no clustering, for example) under the name [http://www.symantec.com/enterprise/sfbasic/index.jsp Veritas Storage Foundation Basic]
* An interesting discussion about modern filesystems can be found on [http://www.osnews.com/story/19665/Solaris_Filesystem_Choices OSNews.com]
== ZFS vs BTRFS at a glance ==
Some key features of ZFS and BTRFS, in no particular order of importance:
{| class="wikitable"
!Feature!!ZFS!!BTRFS!!Remarks
|-
|Transactional filesystem||YES||YES
|-
|Journaling||NO||YES||Not a design flaw, but ZFS is robust ''by design''...  See page 7 of [http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf ''"ZFS The last word on filesystems"''].
|-
|Dividable pool of data storage||YES||YES
|-
|Read-only snapshot support||YES||YES
|-
|Writable snapshot support||YES||YES
|-
|Sending/Receiving a snapshot over the network||YES||YES
|-
|Rollback capabilities||YES||YES||While ZFS knows where and how to rollback the data (on-line), BTRFS requires a bit more work from the system administrator (off-line).
|-
|Virtual block-device emulation||YES||NO
|-
|Data deduplication||YES||YES||Built-in in ZFS, third party tool ([https://github.com/g2p/bedup bedup]) in BTRFS
|-
|Data blocks reoptimization||NO||YES||ZFS is missing "block pointer rewrite functionality"; this is true of all known implementations so far. Not a major performance handicap, however. BTRFS can do on-line data defragmentation.
|-
|Built-in data redundancy support||YES||YES||ZFS has a sort of RAID-5/6 (but better! RAID-Z{1,2,3}) capability; BTRFS only fully supports data mirroring at this point, and some work remains to be done on parity handling in BTRFS.
|-
|Management by attributes||YES||NO||Nearly everything touching ZFS management is related to attribute manipulation (quotas, sharing over NFS, encryption, compression...); BTRFS also retains the concept but it is less aggressively used.
|-
|Production quality code||NO||NO||ZFS support in Linux is not considered production quality (yet) although it is very robust. Several operating systems like Solaris/OpenIndiana have a production quality implementation; Solaris/OpenIndiana is now installed in ZFS datasets by default.
|-
|Integrated within the Linux kernel tree||NO||YES||ZFS is released under the CDDL license...
|}
= ZFS resource naming restrictions =
Before going further, you must be aware of restrictions concerning the names you can use on a ZFS filesystem. The general rule is: you can use all of the alphanumeric characters, plus the following special characters:
* Underscore (_)
* Hyphen (-)
* Colon (:)
* Period (.)
The name used to designate a ZFS pool has no particular restriction except:
* it can't use one of the following reserved words:
** ''mirror''
** ''raidz'' (''raidz2'', ''raidz3'' and so on)
** ''spare''
** ''cache''
** ''log''
* names must begin with an alphanumeric character (same for ZFS datasets).
= Some ZFS concepts =
Once again with no particular order of importance:
{|class="wikitable"
|-
!ZFS!!What it is!!Counterparts examples
|-
|zpool||A group of one or more physical storage media (hard drive partition, file...). A zpool has to be divided into at least one '''ZFS dataset''' or at least one '''zvol''' to hold any data. Several zpools can coexist in a system, provided they each have a unique name. Also note that '''zpools can never be mounted; the only things that can be mounted are the ZFS datasets they hold.'''||
* Volume group (VG) in LVM
* BTRFS volumes
|-
|dataset||A logical subdivision of a zpool, mounted in your host's VFS, where your files and directories reside. Several ZFS datasets can coexist in a single system, provided they each have a unique name within their zpool.||
* Logical subvolumes (LV) in LVM formatted with a filesystem like ext3.
* BTRFS subvolumes
|-
|snapshot||A read-only photo of a ZFS dataset's state as it is taken at a precise moment in time. ZFS has no way to cooperate on its own with applications that read and write data on ZFS datasets: if those applications still hold unwritten data at the moment the snapshot is taken, only what has been flushed to disk will be included in the snapshot. Worth mentioning: snapshots do not take disk space aside from some metadata at the exact time they are created; their size will grow as more and more data blocks (i.e. files) are deleted or changed on their corresponding live ZFS dataset.||
* No direct equivalent in LVM.
* BTRFS read-only snapshots
|-
|clone||Exactly what it sounds like... a writable copy of a snapshot.||
* LVM snapshots
* BTRFS snapshots
|-
|zvol||An emulated block device whose data is held behind the scenes in the zpool the zvol has been created in.||No known equivalent, even in BTRFS
|-
|}
= Your first contact with ZFS  =
== Requirements ==
* ZFS userland tools installed (package ''sys-fs/zfs'')
* ZFS kernel modules built and installed (package ''sys-fs/zfs-kmod''); there is a known issue with the kernel 3.13 series, see [http://forums.funtoo.org/viewtopic.php?id=2442 this thread on Funtoo's forum] (an installation sketch follows this list)
* Disk space of 64 MB as a bare minimum (128 MB is the minimum size of a pool). Multiple disks will be simulated through the use of several raw images accessed via Linux loopback devices.
* At least 512 MB of RAM
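On a Funtoo/Gentoo system, getting those two packages installed is a single step; a minimal sketch (versions and USE flags are left to your discretion):

<console>
###i## emerge --ask sys-fs/zfs sys-fs/zfs-kmod
</console>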
== Preparing ==
Once you have emerged ''sys-fs/zfs'' and ''sys-fs/zfs-kmod'' you have two options to start using ZFS at this point:
* either start ''/etc/init.d/zfs'' (this will load all of the ZFS kernel modules for you, plus handle a couple of other things)
* or load the ZFS kernel modules by hand
So:
<console>###i## rc-service zfs start</console>
Or:
<console>
###i## modprobe zfs
###i## lsmod | grep zfs
zfs                  874072  0
zunicode              328120  1 zfs
zavl                  12997  1 zfs
zcommon                35739  1 zfs
znvpair                48570  2 zfs,zcommon
spl                    58011  5 zfs,zavl,zunicode,zcommon,znvpair
</console>
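Whichever way you choose, if you later want ZFS to be brought up automatically at boot on an OpenRC based system such as Funtoo, adding the service to a runlevel is usually enough; a minimal sketch, assuming the ''zfs'' init script shipped by the packages:

<console>
###i## rc-update add zfs boot
</console>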


== Your first ZFS pool ==
To start with, four raw disks (2 GB each) are created:


<console>
###i## for i in 0 1 2 3; do dd if=/dev/zero of=/tmp/zfs-test-disk0${i}.img bs=2G count=1; done
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 40.3722 s, 53.2 MB/s
...
</console>


Then let's see what loopback devices are in use and which is the first free:


<console>
###i## losetup -a
###i## losetup -f
/dev/loop0
</console>


In the above example nothing is in use and the first available loopback device is /dev/loop0. Now associate every disk image with a loopback device (/tmp/zfs-test-disk00.img -> /dev/loop0, /tmp/zfs-test-disk01.img -> /dev/loop1 and so on):


<console>
###i## for i in 0 1 2 3; do losetup /dev/loop${i} /tmp/zfs-test-disk0${i}.img; done
###i## losetup -a
/dev/loop0: [000c]:781455 (/tmp/zfs-test-disk00.img)
/dev/loop1: [000c]:806903 (/tmp/zfs-test-disk01.img)
/dev/loop2: [000c]:807274 (/tmp/zfs-test-disk02.img)
/dev/loop3: [000c]:781298 (/tmp/zfs-test-disk03.img)
</console>
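For later reference: once you are completely done with this sandbox (not now!), any pool built on top of these loopback devices must be destroyed first (with '''zpool destroy'''), then the devices themselves can be detached; a minimal cleanup sketch:

<console>
###i## for i in 0 1 2 3; do losetup -d /dev/loop${i}; done
</console>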


{{Fancynote|ZFS literature often names zpools "tank"; this is not a requirement, you can use whatever name you choose (as we did here...).}}


Every story in ZFS starts with the very first ZFS related command you will be in touch with: '''zpool'''. '''zpool''', as you might have guessed, manages all ZFS aspects in connection with the physical devices underlying your ZFS storage spaces, and the very first task is to use this command to make what is called a ''pool'' (if you have used LVM before, volume groups can be seen as a counterpart). Basically what you do here is tell ZFS to take a collection of physical storage, which can take several forms like a hard drive partition, a USB key partition or even a file, and consider all of it as a single pool of storage (we will subdivide it in the following paragraphs). No black magic here: behind the scenes ZFS writes some metadata on them to be able to track which physical device belongs to which pool of storage.


<console>
###i## zpool create myfirstpool /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
</console>


And... nothing! Nada! The command silently returned, but it ''did'' do something; the next section will explain what.


== Your first ZFS dataset ==
<console>
###i## zpool list
NAME          SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G  130K  7.94G    0%  1.00x  ONLINE  -
</console>


What does this mean? Several things. First, your zpool exists and has a size of roughly 8 GB, minus some space eaten by metadata. Second, it is actually usable, because the ''HEALTH'' column says ''ONLINE''. The other columns are not meaningful for us for the moment, just ignore them. If you want more crusty details you can use the zpool command like this:


<console>
###i## zpool status
  pool: myfirstpool
state: ONLINE
  scan: none requested
config:


        NAME        STATE    READ WRITE CKSUM
        myfirstpool  ONLINE      0    0    0
          loop0    ONLINE      0    0    0
          loop1    ONLINE      0    0    0
          loop2    ONLINE      0    0    0
          loop3    ONLINE      0    0    0
</console>
Information is quite intuitive: your pool is seen as being usable (''state'' is similar to ''HEALTH'') and is composed of several devices, each one listed as being in a ''healthy'' state... at least for now, because they will be deliberately damaged for demonstration purposes in a later section. For your information, the columns ''READ'', ''WRITE'' and ''CKSUM'' list the number of operation failures on each of the devices, respectively:
* ''READ'' for reading failures. A non-zero value is not a good sign... the device is flaky and will soon fail.
* ''WRITE'' for writing failures. A non-zero value is not a good sign... the device is flaky and will soon fail.
* ''CKSUM'' for mismatches between the checksum of the data at the time it was written and the checksum recomputed when the data is read again (yes, ZFS uses checksums in an aggressive manner). A non-zero value is not a good sign... corruption happened; ZFS will do its best to recover the data on its own, but this is definitely not the sign of a healthy system.
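You do not have to wait for a failing read to learn about such problems: ZFS can walk the whole pool and verify every checksum on demand. This operation is called a ''scrub''; a minimal sketch (the scrub runs in the background and '''zpool status''' reports its progress and result):

<console>
###i## zpool scrub myfirstpool
###i## zpool status myfirstpool
</console>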


Cool! So far so good: you have a brand new 8 GB of usable storage space on your system. Has it been mounted somewhere?


<console>
###i## mount | grep myfirstpool
/myfirstpool on /myfirstpool type zfs (rw,xattr)
</console>


Remember the tables in the sections above? A zpool in itself can '''never be mounted''', never ''ever''. It is just a container where ZFS datasets are created and then mounted. So what happened here? Obscure black magic? No, of course not! Indeed, a ZFS dataset named after the zpool should have been created automatically for us and then mounted. Is this true? We will check this shortly. For the moment you are introduced to the second command you will deal with when using ZFS: '''zfs'''. While the '''zpool''' command is used for anything related to zpools, the '''zfs''' command is used for anything related to ZFS datasets '''(a ZFS dataset ''always'' resides in a zpool, ''always'', no exception to that).'''


{{Fancynote|'''zfs''' and '''zpool''' are the only two commands you will need to remember when dealing with ZFS.}}


So how can we check what ZFS datasets are currently known to the system? As you might have already guessed, like this:


<console>
###i## zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool  114K  7.81G    30K  /myfirstpool
</console>


Ta-da! The mystery is solved! The ''zfs'' command tells us that a ZFS dataset named ''myfirstpool'' has not only been created but also mounted in the system's VFS for us. If you check with the ''df'' command, you should also see something like this:


<console>
###i## df -h
Filesystem      Size  Used Avail Use% Mounted on
(...)
myfirstpool    7.9G    0  7.9G  0% /myfirstpool
</console>


The $100 question: ''"what to do with this brand new ZFS /myfirstpool dataset?"''. Copy some files onto it, of course! We used a Linux kernel source tree, but you can of course use whatever you want:
<console>
###i## cp -a /usr/src/linux-3.13.5-gentoo /myfirstpool
###i## ln -s /myfirstpool/linux-3.13.5-gentoo /myfirstpool/linux
###i## ls -lR /myfirstpool
/myfirstpool:
total 3
lrwxrwxrwx  1 root root 32 Mar  2 14:02 linux -> /myfirstpool/linux-3.13.5-gentoo
drwxr-xr-x 25 root root 50 Feb 27 20:35 linux-3.13.5-gentoo


/myfirstpool/linux-3.13.5-gentoo:
total 31689
-rw-r--r--  1 root root    18693 Jan 19 21:40 COPYING
-rw-r--r--  1 root root    95579 Jan 19 21:40 CREDITS
drwxr-xr-x 104 root root      250 Feb 26 07:39 Documentation
-rw-r--r--  1 root root    2536 Jan 19 21:40 Kbuild
-rw-r--r--   1 root root      277 Feb 26 07:39 Kconfig
-rw-r--r--  1 root root  268770 Jan 19 21:40 MAINTAINERS
(...)
</console>


A ZFS dataset  behaves like any other filesystem: you can create regular files, symbolic links, pipes, special devices nodes, etc. Nothing mystic here.


Now that we have some data in the ZFS dataset, let's see what the various commands report:
<console>
###i## df -h
Filesystem      Size  Used Avail Use% Mounted on
(...)
myfirstpool    7.9G  850M  7.0G  11% /myfirstpool
</console>
<console>
###i## zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool  850M  6.98G  850M  /myfirstpool
</console>
<console>
###i## zpool list
NAME          SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G  850M  7.11G    10%  1.00x  ONLINE  -
</console>
{{Fancynote|Notice the various sizes reported by the '''zpool''' and '''zfs''' commands. In this case they are the same, but they can differ; this is especially true with zpools using RAID-Z.}}


== Unmounting/remounting a ZFS dataset ==




{{Fancyimportant|'''Only ZFS datasets can be mounted''' inside your host's VFS, no exception to that! Zpools can never, ever be mounted... please pay attention to the terminology and keep things clear by not mixing up the terms. We will introduce ZFS snapshots and ZFS clones, but those are ZFS datasets at their basis, so they can also be mounted and unmounted.}}




If a ZFS dataset behaves just like any other filesystem, can we unmount it?
<console>
###i## umount /myfirstpool
###i## mount | grep myfirstpool
</console>


No more ''/myfirstpool'' in the line of sight! So yes, it is possible to unmount a ZFS dataset just like you would do with any other filesystem. Is the ZFS dataset still present on the system even though it is unmounted? Let's check:
<console>
###i## zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool  850M  6.98G  850M  /myfirstpool
</console>

Hopefully and obviously it is, else ZFS would not be very useful. Your next concern would certainly be: "How can we remount it then?" Simple! Like this:

<console>
###i## zfs mount myfirstpool
###i## mount | grep myfirstpool
myfirstpool on /myfirstpool type zfs (rw,xattr)
</console>


The ZFS dataset is back! :-)
== Your first contact with ZFS management by attributes or the end of /etc/fstab ==
At this point you might be curious about how the '''zfs''' command knows what it has to mount and ''where'' it has to mount it. You might be familiar with the following syntax of the '''mount''' command which, behind the scenes, scans the file ''/etc/fstab'' and mounts the specified entry:

<console>
###i## mount /boot
</console>


Does ''/etc/fstab'' contain something related to our ZFS dataset?

<console>
###i## cat /etc/fstab | grep myfirstpool
#
</console>


Doh!... Obviously nothing there. Another mystery? Surely not! The answer lies in an extremely powerful feature of ZFS: attributes. Simply speaking, an attribute is a named property of a ZFS dataset that holds a value. Attributes govern various aspects of how datasets are managed, like: ''"Does the data have to be compressed?"'', ''"Does the data have to be encrypted?"'', ''"Does the data have to be exposed to the rest of the world via NFS or SMB/Samba?"'' and, of course, ''"Where does the dataset have to be mounted?"''. The answer to that latter question can be told by the following command:


<console>
###i## zfs get mountpoint myfirstpool
NAME        PROPERTY    VALUE        SOURCE
myfirstpool  mountpoint  /myfirstpool  default
</console>


Bingo! When you remounted the dataset a few paragraphs ago, ZFS automatically inspected the ''mountpoint'' attribute and saw that this dataset has to be mounted in the directory ''/myfirstpool''.


= A step forward with ZFS datasets =

So far you have been given a quick tour of what ZFS can do for you, and it is very important at this point to distinguish a ''zpool'' from a ''ZFS dataset'', and to call a dataset what it is (a dataset) and not what it is not (a zpool). It is a bit confusing, and it was an editorial choice to pick a confusing name just to make you familiar with the one and the other.

== Creating datasets ==
Obviously it is possible to have more than one ZFS dataset within a single zpool. Quiz: what command would you use to subdivide a zpool into datasets, '''zfs''' or '''zpool'''? Stop reading for two seconds and try to figure out this little question.


The answer is... '''zfs'''! Although you want to operate on the zpool to logically subdivide it into several datasets, in the end you manage datasets, thus you will use the '''zfs''' command. It is not always easy at the beginning; do not worry too much, you will soon get the habit of when to use one or the other. Creating a dataset in a zpool is easy: just give the '''zfs''' command the name of the pool you want to divide and the name of the dataset you want to create in it. So let's create three datasets named ''myfirstDS'', ''mysecondDS'' and ''mythirdDS'' in ''myfirstpool'' (observe how we use the zpool and dataset names):


<console>
###i## zfs create myfirstpool/myfirstDS
###i## zfs create myfirstpool/mysecondDS
###i## zfs create myfirstpool/mythirdDS
</console>


What happened? Let's check:


<console>
###i## zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              850M  6.98G  850M  /myfirstpool
myfirstpool/myfirstDS    30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS    30K  6.98G    30K  /myfirstpool/mythirdDS
</console>


Obviously we got what we asked for. Moreover, if we inspect the contents of ''/myfirstpool'', we notice three new directories bearing the same names as the datasets just created:


<console>
###i## ls -l /myfirstpool
total 8
lrwxrwxrwx  1 root root 32 Mar  2 14:02 linux -> /myfirstpool/linux-3.13.5-gentoo
drwxr-xr-x 25 root root 50 Feb 27 20:35 linux-3.13.5-gentoo
drwxr-xr-x  2 root root  2 Mar  2 15:26 myfirstDS
drwxr-xr-x  2 root root  2 Mar  2 15:26 mysecondDS
drwxr-xr-x  2 root root  2 Mar  2 15:26 mythirdDS
</console>
No surprise here! As you might have guessed, those three new directories serve as mountpoints:


<console>
###i## mount | grep myfirstpool
myfirstpool on /myfirstpool type zfs (rw,xattr)
myfirstpool/myfirstDS on /myfirstpool/myfirstDS type zfs (rw,xattr)
myfirstpool/mysecondDS on /myfirstpool/mysecondDS type zfs (rw,xattr)
myfirstpool/mythirdDS on /myfirstpool/mythirdDS type zfs (rw,xattr)
</console>


As we did before, we can copy some files into the newly created datasets just as if they were regular directories:


<console>
###i## cp -a /usr/portage /myfirstpool/mythirdDS
###i## ls -l /myfirstpool/mythirdDS/*
total 697
drwxr-xr-x  48 root root  49 Aug 18  2013 app-accessibility
drwxr-xr-x  238 root root  239 Jan 10 06:22 app-admin
drwxr-xr-x    4 root root    5 Dec 28 08:54 app-antivirus
drwxr-xr-x  100 root root  101 Feb 26 07:19 app-arch
drwxr-xr-x  42 root root  43 Nov 26 21:24 app-backup
drwxr-xr-x  34 root root  35 Aug 18  2013 app-benchmarks
drwxr-xr-x  66 root root  67 Oct 16 06:39 app-cdr
(...)
</console>


Nothing really exciting here: we have files in ''mythirdDS''. A bit more interesting output:


<console>
###i## zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool            1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS    30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.00G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS  1002M  6.00G  1002M  /myfirstpool/mythirdDS
</console>
<console>
###i## df -h
Filesystem              Size  Used Avail Use% Mounted on
(...)
myfirstpool            6.9G  850M  6.1G  13% /myfirstpool
myfirstpool/myfirstDS  6.1G    0  6.1G  0% /myfirstpool/myfirstDS
myfirstpool/mysecondDS  6.1G    0  6.1G  0% /myfirstpool/mysecondDS
myfirstpool/mythirdDS  7.0G 1002M  6.1G  15% /myfirstpool/mythirdDS
</console>


Noticed the size given in the 'AVAIL' column? At the very beginning of this tutorial we had slightly less than 8 GB of available space; it now has a value of roughly 6 GB. The datasets are just a subdivision of the zpool: they '''compete with each other''' for the available storage within the zpool, no miracle here. Up to what limit? The pool itself, as we never imposed a ''quota'' on the datasets. Fortunately, '''df''' and '''zfs list''' give a coherent result.
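ZFS can also do the opposite and guarantee a minimum amount of pool space to a dataset through the ''reservation'' attribute (the mirror image of the ''quota'' attribute used in the next section). A minimal sketch, with an arbitrary figure of 1 GB (the last command removes the reservation again):

<console>
###i## zfs set reservation=1G myfirstpool/myfirstDS
###i## zfs get reservation myfirstpool/myfirstDS
###i## zfs set reservation=none myfirstpool/myfirstDS
</console>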


== Second contact with attributes: quota management ==


Remember how painful quota management is under Linux? Now you can say goodbye to '''setquota''', '''edquota''' and other '''quotacheck''' commands: ZFS handles this in a snap of the fingers! Guess with what? A ZFS dataset attribute, of course! ;-) Just to make you drool, here is how a 2 GB limit can be set on ''myfirstpool/mythirdDS'':


<console>
###i## zfs set quota=2G myfirstpool/mythirdDS
</console>


''Et voila!'' The '''zfs''' command is a bit silent, however if we check we can see that ''myfirstpool/mythirdDS'' is now capped at 2 GB (forget about 'REFER' for the moment): around 1 GB of data has been copied into this dataset, thus leaving a big 1 GB of available space.


<console>
###i## zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool            1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS    30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.00G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS  1002M  1.02G  1002M  /myfirstpool/mythirdDS
</console>


Using the '''df''' command:
<console>
###i## df -h                               
Filesystem              Size  Used Avail Use% Mounted on
(...)
myfirstpool            6.9G  850M  6.1G  13% /myfirstpool
myfirstpool/myfirstDS  6.1G    0  6.1G  0% /myfirstpool/myfirstDS
myfirstpool/mysecondDS  6.1G    0  6.1G  0% /myfirstpool/mysecondDS
myfirstpool/mythirdDS  2.0G 1002M  1.1G  49% /myfirstpool/mythirdDS
</console>


Of course you can use this technique for the home directories of your users under /home, this also having the advantage of being much less forgiving than a soft/hard user quota: when the limit is reached, it is reached, period, and no more data can be written to the dataset. The user must do some cleanup and cannot procrastinate anymore. :-)
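As an illustration, a per-user layout could look like the following minimal sketch (dataset names and the 10G figure are hypothetical examples; by default the datasets would be mounted under ''/myfirstpool/home'', and the ''mountpoint'' attribute seen earlier could relocate them):

<console>
###i## zfs create myfirstpool/home
###i## zfs create myfirstpool/home/alice
###i## zfs set quota=10G myfirstpool/home/alice
</console>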


To remove the quota:


<console>
###i## zfs set quota=none myfirstpool/mythirdDS
</console>
 
''none'' is simply the original value for the ''quota'' attribute (we did not demonstrate it, but you can check by doing a '''zfs get quota myfirstpool/mysecondDS''', for example).
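You can query a single property, several at once, or all of them; a quick sketch (output omitted):

<console>
###i## zfs get quota myfirstpool/mythirdDS
###i## zfs get quota,mountpoint myfirstpool/mythirdDS
###i## zfs get all myfirstpool/mythirdDS
</console>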
 
== Destroying datasets ==
{{Fancyimportant|There is no way to resurrect a destroyed ZFS dataset and the data it contained! Once you destroy a dataset, the corresponding metadata is cleared and gone forever, so be careful when using ''zfs destroy'', notably with the ''-r'' option...}}
 
 
We have three datasets, but the third is pretty useless and contains a lot of garbage. Is it possible to remove it with a simple '''rm -rf'''? Let's try:


<console>
###i## rm -rf /myfirstpool/mythirdDS
rm: cannot remove `/myfirstpool/mythirdDS': Device or resource busy
</console>


This is perfectly normal: remember that datasets are indeed something '''mounted''' in your VFS. ZFS may do a lot for you, but it cannot change how a mounted filesystem behaves under Linux/Unix. The "ZFS way" to remove a dataset is to use the ''zfs'' command like this, provided no process holds open files on it (once again, ZFS can do miracles for you but not that kind of miracle, as it has to unmount the dataset before deleting it):


<console>
###i## zfs destroy myfirstpool/mythirdDS
###i## zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              444M  7.38G  444M  /myfirstpool
myfirstpool/myfirstDS    21K  7.38G    21K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    21K  7.38G    21K  /myfirstpool/mysecondDS
</console>


''Et voila''! No more ''myfirstpool/mythirdDS'' dataset. :-)

A bit more subtle case is trying to destroy a ZFS dataset when another ZFS dataset is nested inside it. Before doing that nasty experiment, ''myfirstpool/mythirdDS'' must be created again, this time with another nested dataset (''myfirstpool/mythirdDS/nestedDS1''):

<console>
###i## zfs create myfirstpool/mythirdDS
###i## zfs create myfirstpool/mythirdDS/nestedSD1
###i## zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      851M  6.98G  850M  /myfirstpool
myfirstpool/myfirstDS              30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS            30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS            124K  6.98G    34K  /myfirstpool/mythirdDS
myfirstpool/mythirdDS/nestedDS1    30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS1
</console>
 
Now let's try to destroy ''myfirstpool/mythirdDS'' again:
 
<console>
###i## zfs destroy myfirstpool/mythirdDS
cannot destroy 'myfirstpool/mythirdDS': filesystem has children
use '-r' to destroy the following datasets:
myfirstpool/mythirdDS/nestedDS1
</console>
 
The zfs command detected the situation and refused to proceed with the deletion without your consent to a recursive destruction (the -r parameter). Before going any further, let's create some more nested datasets plus a couple of directories inside ''myfirstpool/mythirdDS'':
 
<console>
###i## zfs create myfirstpool/mythirdDS/nestedDS1
###i## zfs create myfirstpool/mythirdDS/nestedDS2
###i## zfs create myfirstpool/mythirdDS/nestedDS3
###i## zfs create myfirstpool/mythirdDS/nestedDS3/nestednestedDS
###i## mkdir /myfirstpool/mythirdDS/dir1
###i## mkdir /myfirstpool/mythirdDS/dir2
###i## mkdir /myfirstpool/mythirdDS/dir3
</console>
<console>
###i## zfs list
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                                      851M  6.98G  850M  /myfirstpool
myfirstpool/myfirstDS                            30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS                            30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS                            157K  6.98G    37K  /myfirstpool/mythirdDS
myfirstpool/mythirdDS/nestedDS1                  30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS1
myfirstpool/mythirdDS/nestedDS2                  30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS2
myfirstpool/mythirdDS/nestedDS3                  60K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS3
myfirstpool/mythirdDS/nestedDS3/nestednestedDS    30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS3/nestednestedDS
</console>
 
Now what happens if ''myfirstpool/mythirdDS'' is destroyed again with '-r'?
 
<console>
###i## zfs destroy -r myfirstpool/mythirdDS
###i## zfs list                           
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              851M  6.98G  850M  /myfirstpool
myfirstpool/myfirstDS    30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.98G    30K  /myfirstpool/mysecondDS
</console>
 
''myfirstpool/mythirdDS'' and everything it contained is now gone!
 
== Snapshotting and rolling back datasets ==
 
This is, by far, one of the coolest features of ZFS. You can:
# take a photo of a dataset (this photo is called a ''snapshot'')
# do ''whatever'' you want with the data contained in the dataset
# restore (roll back) the dataset to the '''exact''' same state it was in before you made your changes, just as if nothing had ever happened in the middle (see the short sketch right after this list).
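
In other words, the whole cycle boils down to two '''zfs''' invocations wrapped around whatever you do to the data. A minimal sketch with a hypothetical ''pool/dataset'' (the real commands are detailed in the next paragraphs):

<console>
###i## zfs snapshot pool/dataset@before-experiment
###i## rm -rf /pool/dataset/some-directory
###i## zfs rollback pool/dataset@before-experiment
</console>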
 
=== Single snapshot ===
 
{{Fancyimportant|'''Only ZFS datasets''' can be snapshotted and rolled back, not the zpool.}}
 
 
To start with, let's copy some files in ''mysecondDS'':
 
<console>
###i## cp -a /usr/portage /myfirstpool/mysecondDS
###i## ls /myfirstpool/mysecondDS/portage
total 672
drwxr-xr-x  48 root root  49 Aug 18  2013 app-accessibility
drwxr-xr-x  238 root root  239 Jan 10 06:22 app-admin
drwxr-xr-x    4 root root    5 Dec 28 08:54 app-antivirus
drwxr-xr-x  100 root root  101 Feb 26 07:19 app-arch
drwxr-xr-x  42 root root  43 Nov 26 21:24 app-backup
drwxr-xr-x  34 root root  35 Aug 18  2013 app-benchmarks
(...)
drwxr-xr-x  62 root root  63 Feb 20 06:47 x11-wm
drwxr-xr-x  16 root root  17 Aug 18 2013 xfce-base
drwxr-xr-x  64 root root  65 Dec 14 19:09 xfce-extra
</console>
 
Now, let's take a snapshot of ''mysecondDS''. Which command would be used? '''zpool''' or '''zfs'''? In that case it is '''zfs''' because we manipulate a ZFS dataset (this time you probably got it right!):
 
<console>
###i## zfs snapshot myfirstpool/mysecondDS@Charlie
</console>
 
{{fancynote|The syntax is always ''pool/dataset@snapshot''; the snapshot's name is left at your discretion, however '''you must use an at sign (@)''' to separate the snapshot's name from the rest of the path.}}
 
Let's check what ''/myfirstpool/mysecondDS'' contains after taking the snapshot:
<console>
###i## ls -la /myfirstpool/mysecondDS   
total 9
drwxr-xr-x  3 root root  3 Mar  2 18:22 .
drwxr-xr-x  5 root root  6 Mar  2 17:58 ..
drwx------ 170 root root 171 Mar  2 18:36 portage
</console>
 
Nothing really new: the ''portage'' directory is here, nothing more ''a priori''. If you have used BTRFS before reading this tutorial, you probably expected to see a ''@Charlie'' entry lying in ''/myfirstpool/mysecondDS''. So where the heck is ''Charlie''? In ZFS a dataset snapshot is not visible from within the VFS tree (if you are not convinced you can search for it with the '''find''' command, but it will never be found). Let's check with the '''zfs''' command:
 
<console>
###i## zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                    1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1001M  6.00G  1001M  /myfirstpool/mysecondDS
</console>
 
Wow... No sign of the snapshot. What you must know is that '''zfs list''' shows only datasets by default and omits snapshots. If the command is invoked with the parameter ''-t'' set to ''all'' it will list everything:
 
<console>
###i## zfs list -t all
NAME                            USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                    1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1001M  6.00G  1001M  /myfirstpool/mysecondDS
myfirstpool/mysecondDS@Charlie      0      -  1001M  -
</console>
 
So yes, ''@Charlie'' is here! Also notice the power of copy-on-write filesystems: ''@Charlie'' takes only a couple of kilobytes (some ZFS metadata), just like any ZFS snapshot at the time it is taken. Snapshots occupy very little space at first because their data and metadata blocks are the same as the original dataset's and no physical copy of them is made. As time goes on and more and more changes happen in the original dataset (''myfirstpool/mysecondDS'' here), ZFS allocates new data and metadata blocks to accommodate the changes but leaves the blocks used by the snapshot untouched, so the snapshot tends to eat more and more pool space. It seems odd at first glance because a snapshot is a frozen-in-time copy of a ZFS dataset, but this is the way ZFS manages them. So caveat emptor: remove any unused snapshots so they do not fill your zpool...
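
A handy way to spot the snapshots eating the most pool space is to list only the snapshots and sort them on their ''used'' column (a small sketch; ''-s used'' sorts in ascending order):

<console>
###i## zfs list -t snapshot -o name,used -s used
</console>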
 
Now that we have found Charlie, let's do some changes in the ''mysecondDS'':
 
<console>
###i## rm -rf /myfirstpool/mysecondDS/portage/[a-h]*
###i## echo "Hello, world" >  /myfirstpool/mysecondDS/hello.txt
###i## cp /lib/firmware/radeon/* /myfirstpool/mysecondDS
###i## ls -l  /myfirstpool/mysecondDS
/myfirstpool/mysecondDS:
total 3043
-rw-r--r--  1 root root  8704 Mar  2 19:29 ARUBA_me.bin
-rw-r--r--  1 root root  8704 Mar  2 19:29 ARUBA_pfp.bin
-rw-r--r--  1 root root  6144 Mar  2 19:29 ARUBA_rlc.bin
-rw-r--r--  1 root root  24096 Mar  2 19:29 BARTS_mc.bin
-rw-r--r--  1 root root  5504 Mar  2 19:29 BARTS_me.bin
(...)
-rw-r--r--  1 root root  60388 Mar  2 19:29 VERDE_smc.bin
-rw-r--r--  1 root root    13 Mar  2 19:28 hello.txt
drwx------ 94 root root    95 Mar  2 19:28 portage
 
/myfirstpool/mysecondDS/portage:
total 324
drwxr-xr-x  16 root root  17 Oct 26 07:30 java-virtuals
drwxr-xr-x 303 root root  304 Jan 21 06:53 kde-base
drwxr-xr-x 117 root root  118 Feb 21 06:24 kde-misc
drwxr-xr-x  2 root root  756 Feb 23 08:44 licenses
drwxr-xr-x  20 root root  21 Jan  7 06:56 lxde-base
(...)
</console>
 
Now let's check again what the '''zfs''' command gives:
 
<console>
###i## zfs list -t all                     
NAME                            USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                    1.82G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1005M  6.00G  903M  /myfirstpool/mysecondDS
myfirstpool/mysecondDS@Charlie  102M      -  1001M  -
</console>
 
Noticed the size increase of ''myfirstpool/mysecondDS@Charlie''? This is due to the changes made in the live dataset: ZFS had to retain the original blocks of data referenced by the snapshot. Now it is time to roll this ZFS dataset back to its original state (if some processes have open files in the dataset to be rolled back, you should terminate them first):
 
<console>
###i## zfs rollback myfirstpool/mysecondDS@Charlie
###i## ls -l /myfirstpool/mysecondDS
total 6
drwxr-xr-x 164 root root 169 Aug 18 18:25 portage
</console>
 
Again, ZFS handled everything for you and you now have the contents of ''mysecondDS'' exactly as they were at the time the snapshot ''Charlie'' was taken. Not more complicated than that. Not illustrated here, but if you look at the output given by '''zfs list -t all''' at this point you will notice that the ''Charlie'' snapshot again eats very little space. This is normal: the modified blocks have been dropped, so ''myfirstpool/mysecondDS'' and its ''myfirstpool/mysecondDS@Charlie'' snapshot are the same modulo some metadata (hence the few kilobytes of space taken).
 
=== The .zfs pseudo-directory or the secret passage to your snapshots ===
 
Any directory where a ZFS dataset is mounted (having snapshots or not) secretly contains a pseudo-directory named '''.zfs''' (dot-ZFS), and you will not see it even with the option ''-a'' given to an '''ls''' command unless you name it explicitly. This contradicts the philosophy of Unix and Unix-like systems of not hiding anything from the system administrator, but it is not a bug of the ZFS on Linux implementation: the Solaris implementation of ZFS exposes the exact same behavior. So what is inside this little magic box?
 
<console>
###i## cd /myfirstpool/mysecondDS
###i## ls -la | grep .zfs       
###i## ls -lad .zfs             
dr-xr-xr-x 1 root root 0 Mar  2 15:26 .zfs
</console>
<console>
###i## cd .zfs
###i## pwd
/myfirstpool/mysecondDS/.zfs
###i## ls -la
total 4
dr-xr-xr-x 1 root root  0 Mar  2 15:26 .
drwxr-xr-x 3 root root 145 Mar  2 19:29 ..
dr-xr-xr-x 2 root root  2 Mar  2 19:47 shares
dr-xr-xr-x 2 root root  2 Mar  2 18:46 snapshot
</console>
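
As a side note, whether ''.zfs'' shows up in directory listings is governed by the dataset's ''snapdir'' property (''hidden'' by default). A quick sketch to make it appear like a regular directory, then hide it again:

<console>
###i## zfs set snapdir=visible myfirstpool/mysecondDS
###i## zfs set snapdir=hidden myfirstpool/mysecondDS
</console>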


We will focus on the ''snapshot'' directory and, since we have not dropped the ''Charlie'' snapshot (yet), let's see what lies there:
<console>
###i## cd snapshot
###i## ls -l
total 0
dr-xr-xr-x 1 root root 0 Mar  2 20:16 Charlie
</console>


Yes, we found Charlie here (also!). The snapshot is seen as a regular directory, but pay attention to its permissions:
* owning user (root) has read+execute
* owning group (root) has read+execute
* rest of the world has read+execute


Did you notice? Not a single ''write'' permission on this directory; the only action any user can do is to enter the directory and list its contents. This is not a bug but the nature of ZFS snapshots: they are read-only by design. The next question is naturally: can we change something in it? For that we have to enter the ''Charlie'' directory:


<console>
###i## cd Charlie
###i## ls -la
total 7
drwxr-xr-x  3 root root  3 Mar  2 18:22 .
dr-xr-xr-x  3 root root  3 Mar  2 18:46 ..
drwx------ 170 root root 171 Mar  2 18:36 portage
</console>


No surprise here: at the time we took the snapshot, ''myfirstpool/mysecondDS'' held a copy of the Portage tree stored in a directory named ''portage''. At first glance this one ''seems'' to be writable for the root user; let's try to create a file in it:


<console>
###i## cd portage
###i## touch test
touch: cannot touch ‘test’: Read-only file system
</console>


Things are a bit tricky here: nothing has been mounted (check with the '''mount''' command!), we are walking through a pseudo-directory exposed by ZFS that holds the ''Charlie'' snapshot. ''Pseudo-directory'' because in fact ''.zfs'' has no physical existence, not even in the ZFS metadata as it exists in the zpool. It is just a convenient way provided by the ZFS kernel modules to walk inside the various snapshots' content. You can see but you cannot touch :-)


=== Backtracking changes between a dataset and its snapshot ===
Is it possible to know what the difference is between a live dataset and its snapshot? The answer to this question is '''yes''', and the '''zfs''' command will help us in this task. Now that we have rolled the ''myfirstpool/mysecondDS'' ZFS dataset back to its original state, we have to botch it again:
<console>
###i## cp -a /lib/firmware/radeon/C* /myfirstpool/mysecondDS
</console>


Now inspect the difference between the live ZFS dataset ''myfirstpool/mysecondDS'' and its snapshot ''Charlie''. This is done via '''zfs diff''' by giving only the snapshot's name (you can also inspect the difference between two snapshots with that command with a slight change in parameters):


<console>
###i## zfs diff myfirstpool/mysecondDS@Charlie
M      /myfirstpool/mysecondDS/
+      /myfirstpool/mysecondDS/CAICOS_mc.bin
+      /myfirstpool/mysecondDS/CAICOS_me.bin
+      /myfirstpool/mysecondDS/CAICOS_pfp.bin
+      /myfirstpool/mysecondDS/CAICOS_smc.bin
+      /myfirstpool/mysecondDS/CAYMAN_mc.bin
+      /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)
</console>


So what do we have here? Two things: first, it shows we have changed something in ''/myfirstpool/mysecondDS'' (notice the 'M' for Modified); second, it shows the addition of several files (CAICOS_mc.bin, CAICOS_me.bin, CAICOS_pfp.bin...) by putting a plus sign ('+') on their left.


If we botch ''myfirstpool/mysecondDS'' a bit more by removing the file ''/myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest'':


<console>
###i## rm /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
###i## zfs diff myfirstpool/mysecondDS@Charlie
M      /myfirstpool/mysecondDS/
M      /myfirstpool/mysecondDS/portage/sys-libs/glibc
-       /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
+      /myfirstpool/mysecondDS/CAICOS_mc.bin
+      /myfirstpool/mysecondDS/CAICOS_me.bin
+      /myfirstpool/mysecondDS/CAICOS_pfp.bin
+      /myfirstpool/mysecondDS/CAICOS_smc.bin
+      /myfirstpool/mysecondDS/CAYMAN_mc.bin
+      /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)
</console>


Obviously deleted content is marked by a minus sign ('-').


Now a real butchery:
<console>
###i## rm -rf /myfirstpool/mysecondDS/portage/sys-devel/gcc
###i## zfs diff myfirstpool/mysecondDS@Charlie
M      /myfirstpool/mysecondDS/
M      /myfirstpool/mysecondDS/portage/sys-devel
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk-no_gcc_la
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/c89
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/c99
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.6.4-fix-libgcc-s-path-with-vsrl.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-spec-env.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-spec-env-r1.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.8.2-fix-cache-detection.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/fix_libtool_files.sh
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-configure-texinfo.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.8.1-bogus-error-with-int.patch
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.3.3-r2.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/metadata.xml
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.4-r2.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.4.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r1.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r2.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.2-r1.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r3.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.2.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r4.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/Manifest
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.7.3-r1.ebuild
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.2-r1.ebuild
M      /myfirstpool/mysecondDS/portage/sys-libs/glibc
-      /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
+      /myfirstpool/mysecondDS/CAICOS_mc.bin
+      /myfirstpool/mysecondDS/CAICOS_me.bin
+      /myfirstpool/mysecondDS/CAICOS_pfp.bin
+      /myfirstpool/mysecondDS/CAICOS_smc.bin
+      /myfirstpool/mysecondDS/CAYMAN_mc.bin
+      /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)
</console>


No need to explain that digital mayhem! What happens if, in addition, we change the contents of the file ''/myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest''?
<console>
###i## zfs diff myfirstpool/mysecondDS@Charlie
M      /myfirstpool/mysecondDS/
M      /myfirstpool/mysecondDS/portage/sys-devel
M      /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk
-      /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk-no_gcc_la
(...)
</console>
ZFS shows that the file ''/myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest'' has changed. So ZFS can help to track file deletions, creations and modifications. What it does not show is the difference between a file's content as it exists in the live dataset and as it exists in the dataset's snapshot. Not a big issue! You can explore a snapshot's content via the ''.zfs'' pseudo-directory and use a command like '''/usr/bin/diff''' to examine the difference with the file as it exists on the corresponding live dataset:


<console>
###i## diff -u /myfirstpool/mysecondDS/.zfs/snapshot/Charlie/portage/sys-devel/autoconf/Manifest /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest
--- /myfirstpool/mysecondDS/.zfs/snapshot/Charlie/portage/sys-devel/autoconf/Manifest  2013-08-18 08:52:01.742411902 -0400
+++ /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest 2014-03-02 21:36:50.582258990 -0500
@@ -4,7 +4,4 @@
DIST autoconf-2.62.tar.gz 1518427 SHA256 83aa747e6443def0ebd1882509c53f5a2133f50...
DIST autoconf-2.63.tar.gz 1562665 SHA256 b05a6cee81657dd2db86194a6232b895b8b2606a...
DIST autoconf-2.64.tar.bz2 1313833 SHA256 872f4cadf12e7e7c8a2414e047fdff26b517c7...
-DIST autoconf-2.65.tar.bz2 1332522 SHA256 db11944057f3faf229ff5d6ce3fcd819f56545...
-DIST autoconf-2.67.tar.bz2 1369605 SHA256 00ded92074999d26a7137d15bd1d51b8a8ae23...
-DIST autoconf-2.68.tar.bz2 1381988 SHA256 c491fb273fd6d4ca925e26ceed3d177920233c...
DIST autoconf-2.69.tar.xz 1214744 SHA256 64ebcec9f8ac5b2487125a86a7760d2591ac9e1d3...
(...)
</console>


=== Dropping a snapshot ===
A snapshot is no more than a dataset frozen in time and thus can be destroyed in the exact same way seen in the paragraphs before. Now that we no longer need the ''Charlie'' snapshot, we can remove it. Simple:
<console>
###i## zfs destroy myfirstpool/mysecondDS@Charlie
###i## zfs list -t all
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool            1.71G  6.10G  850M  /myfirstpool
myfirstpool/myfirstDS    30K  6.10G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS  903M  6.10G  903M  /myfirstpool/mysecondDS
</console>


And Charlie is gone forever ;-)


=== The time travelling machine part 1: examining differences between snapshots ===
So far we have only used a single snapshot just to keep things simple. However, a dataset can hold several snapshots and you can do everything seen so far with them: rolling back, destroying them or examining the difference, not only between a snapshot and its corresponding live dataset but also between two snapshots. For this part we will consider the ''myfirstpool/myfirstDS'' dataset, which should be empty at this point:


<console>
###i## ls -la /myfirstpool/myfirstDS
total 3
drwxr-xr-x 2 root root 2 Mar 2 21:14 .
drwxr-xr-x 5 root root 6 Mar 2 17:58 ..
</console>


Now let's generate some contents, take a snapshot (snapshot-1), add more content, take a snapshot again (snapshot-2), do some modifications again and take a third snapshot (snapshot-3):


<console>
###i## echo "Hello, world" > /myfirstpool/myfirstDS/hello.txt
###i## cp -R /lib/firmware/radeon /myfirstpool/myfirstDS
###i## ls -l /myfirstpool/myfirstDS
total 5
-rw-r--r-- 1 root root 13 Mar 3 06:41 hello.txt
drwxr-xr-x 2 root root 143 Mar 3 06:42 radeon
###i## zfs snapshot myfirstpool/myfirstDS@snapshot-1
</console>
<console>
###i## echo "Goodbye, world" > /myfirstpool/myfirstDS/goodbye.txt
###i## echo "Are you there?" >> /myfirstpool/myfirstDS/hello.txt
###i## cp /proc/config.gz /myfirstpool/myfirstDS
###i## rm /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
###i## zfs snapshot myfirstpool/myfirstDS@snapshot-2
</console>
<console>
###i## echo "Still there?" >> /myfirstpool/myfirstDS/goodbye.txt
###i## mv /myfirstpool/myfirstDS/hello.txt /myfirstpool/myfirstDS/hello_new.txt
###i## cat /proc/version > /myfirstpool/myfirstDS/version.txt
###i## zfs snapshot myfirstpool/myfirstDS@snapshot-3
</console>
<console>
###i## zfs list -t all
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            3.04M  6.00G  2.97M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    47K      -  2.96M  -
myfirstpool/myfirstDS@snapshot-2    30K      -  2.97M  -
myfirstpool/myfirstDS@snapshot-3      0      -  2.97M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
</console>


You saw how to use '''zfs diff''' to compare a snapshot and its corresponding "live" dataset in the above paragraphs. Doing the same exercise with two snapshots is not that much different: you just have to explicitly tell the command which snapshots are to be compared and the command will output the result in the exact same manner. So what are the differences between snapshots ''myfirstpool/myfirstDS@snapshot-1'' and ''myfirstpool/myfirstDS@snapshot-2''? Let's make the '''zfs''' command work for us:


<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS@snapshot-2
M      /myfirstpool/myfirstDS/
M      /myfirstpool/myfirstDS/hello.txt
M      /myfirstpool/myfirstDS/radeon
-      /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+      /myfirstpool/myfirstDS/goodbye.txt
+      /myfirstpool/myfirstDS/config.gz
</console>


Before digging further, let's think about what we did between the time we created the first snapshot and the second snapshot:
* We modified the file ''/myfirstpool/myfirstDS/hello.txt'', hence the 'M' shown on the left of the second line (and since we changed something under ''/myfirstpool/myfirstDS'', an 'M' is also shown on the left of the first line)
* We deleted the file ''/myfirstpool/myfirstDS/radeon/CAYMAN_me.bin'', hence the minus sign ('-') shown on the left of the fourth line (and the 'M' shown on the left of the third line)
* We added two files, ''/myfirstpool/myfirstDS/goodbye.txt'' and ''/myfirstpool/myfirstDS/config.gz'', hence the plus sign ('+') shown on the left of the fifth and sixth lines (this is also a change happening in ''/myfirstpool/myfirstDS'', another reason to show an 'M' on the left of the first line)


Now the same exercise, this time with snapshots ''myfirstpool/myfirstDS@snapshot-2'' and ''myfirstpool/myfirstDS@snapshot-3'':


<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-2 myfirstpool/myfirstDS@snapshot-3
M      /myfirstpool/myfirstDS/
R      /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M      /myfirstpool/myfirstDS/goodbye.txt
+      /myfirstpool/myfirstDS/version.txt
</console>


Try to interpret what you see, except for the second line where an "R" (standing for "Rename") is shown: ZFS is smart enough to also show both the old and the new names!


Why not push the limit and try a few fancy things? First things first: what happens if we ask it to compare two snapshots in reverse order?


<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-3 myfirstpool/myfirstDS@snapshot-2
Unable to obtain diffs:
  Not an earlier snapshot from the same fs
</console>


Would ZFS be a bit happier if we asked for the difference between two snapshots with a gap in between (so snapshot-1 with snapshot-3)?


<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS@snapshot-3
M      /myfirstpool/myfirstDS/
R      /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M      /myfirstpool/myfirstDS/radeon
-      /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+      /myfirstpool/myfirstDS/goodbye.txt
+      /myfirstpool/myfirstDS/config.gz
+      /myfirstpool/myfirstDS/version.txt
</console>


Amazing! Here again, take a couple of minutes to think about all the operations you did on the dataset between the time you took the first snapshot and the time you took the last snapshot: this summary is the exact reflection of all your previous operations.


To put a conclusion on this subject, let's see the differences between the ''myfirstpool/myfirstDS'' dataset and its various snapshots:
<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-1
M      /myfirstpool/myfirstDS/
R      /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M      /myfirstpool/myfirstDS/radeon
-      /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+      /myfirstpool/myfirstDS/goodbye.txt
+      /myfirstpool/myfirstDS/config.gz
+      /myfirstpool/myfirstDS/version.txt
</console>
<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-2
M      /myfirstpool/myfirstDS/
R      /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M      /myfirstpool/myfirstDS/goodbye.txt
+      /myfirstpool/myfirstDS/version.txt
</console>
<console>
###i##  zfs diff myfirstpool/myfirstDS@snapshot-3
</console>


Having nothing reported for the last '''zfs diff''' is normal, as nothing has changed in the dataset since the snapshot was taken.


=== The time travelling machine part 2: rolling back with multiple snapshots ===
Examining the differences between the various snapshots of a dataset and the dataset itself would be quite useless if we were not able to roll the dataset back to one of its previous states. Now that we have mangled ''myfirstpool/myfirstDS'' a bit, it is time to restore it to the state it was in when the first snapshot was taken:


<console>
###i## zfs rollback myfirstpool/myfirstDS@snapshot-1
cannot rollback to 'myfirstpool/myfirstDS@snapshot-1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
myfirstpool/myfirstDS@snapshot-3
myfirstpool/myfirstDS@snapshot-2
</console>


Err... Well, ZFS just tells us that several more recent snapshots exist and it refuses to proceed without dropping them. Unfortunately for us there is no way to circumvent that: once you jump backward you have no way to move forward again. We could demonstrate the rollback to ''myfirstpool/myfirstDS@snapshot-3'', then ''myfirstpool/myfirstDS@snapshot-2'', then ''myfirstpool/myfirstDS@snapshot-1'', but it would be of very little interest as previous sections of this tutorial did that already. So, second attempt:


<console>
###i## zfs rollback -r myfirstpool/myfirstDS@snapshot-1
###i## zfs list -t all                                                         
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
</console>


''myfirstpool/myfirstDS'' effectively returned to the desired state (notice the size of ''myfirstpool/myfirstDS@snapshot-1'') and the snapshots ''snapshot-2'' and ''snapshot-3'' vanished. Just to convince you:
<console>
###i## zfs diff myfirstpool/myfirstDS@snapshot-1
###i##
</console>


No differences at all!


=== Snapshots and clones ===


A clone and a snapshot are two very close things in ZFS:


* A clone appears as a mounted dataset (i.e. you can read and write data in it) while a snapshot stays apart and is always read-only
* A clone is always spawned from a snapshot


So it is absolutely true to say that a clone is indeed just a writable snapshot. The copy-on-write nature of ZFS plays its role even there: the data blocks held by the snapshot are only duplicated upon modification. So cloning a 20 GiB snapshot does not lead to an additional 20 GiB of data being eaten from the pool.


How to make a clone? Simple, once again with the '''zfs''' command used like this:


<console>
###i## zfs clone myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS_clone1
###i## zfs list -t all
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    1K      -  2.96M  -
myfirstpool/myfirstDS_clone1        1K  6.00G  2.96M  /myfirstpool/myfirstDS_clone1
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
</console>


Noticed the value of ''MOUNTPOINT'' for ''myfirstpool/myfirstDS_clone1''? Now we have a dataset that is mounted! Let's check with the '''mount''' command:


<console>
###i## mount | grep clone
myfirstpool/myfirstDS_clone1 on /myfirstpool/myfirstDS_clone1 type zfs (rw,xattr)
</console>


In theory we can change or write additional data in the clone, as it is mounted writable (rw). Let it be!


<console>
###i## ls /myfirstpool/myfirstDS_clone1
hello.txt  radeon
</console>
<console>
###i## cp /proc/config.gz /myfirstpool/myfirstDS_clone1
###i## echo 'This is a clone!' >> /myfirstpool/myfirstDS_clone1/hello.txt
</console>
<console>
###i## ls /myfirstpool/myfirstDS_clone1
config.gz  hello.txt  radeon
###i## cat /myfirstpool/myfirstDS_clone1/hello.txt                     
Hello, world
This is a clone!
</console>


Unfortunately it is not possible to ask for the difference between a clone and a snapshot: '''zfs diff''' expects to see either a snapshot name or two snapshot names. Once spawned, a clone starts its own existence, and the snapshot that served as a seed for it remains attached to its original dataset.
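
If you ever wonder which snapshot a given clone was spawned from, the ''origin'' property keeps track of it. A small sketch (the output should look along these lines):

<console>
###i## zfs get origin myfirstpool/myfirstDS_clone1
NAME                          PROPERTY  VALUE                             SOURCE
myfirstpool/myfirstDS_clone1  origin    myfirstpool/myfirstDS@snapshot-1  -
</console>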
 
Because clones are nothing more than a ZFS dataset they can be destroyed just like any ZFS dataset:


<console>
###i## zfs destroy myfirstpool/myfirstDS_clone1
###i## zfs list -t all                                                       
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
</console>


=== Streaming ZFS datasets ===
A ZFS snapshot can not only be cloned or explored but also streamed to a local file or even over the network, thus allowing you to back up or simply make an exact bit-for-bit copy of a ZFS dataset between two machines, for example. Snapshots being differential (i.e. incremental) by nature, very little network overhead is induced when consecutive snapshots are streamed over the network. A nifty move from the designers was to use ''stdin'' and ''stdout'' as transmission/reception channels, thus allowing a great flexibility in processing the ZFS stream. You can envisage, for instance, compressing your stream, then encrypting it, then encoding it in base64, then signing it and so on. It sounds a bit overkill but it is possible, and in the general case you can use any tool that swallows data from ''stdin'' and spits it through ''stdout'' in your plumbing.


First things first, just to illustrate some basic concepts, here is how to stream a ZFS dataset snapshot to a local file:


<console>
###i## zfs send myfirstpool/myfirstDS@snapshot-1 > /tmp/myfirstpool-myfirstDS@snapshot-snap1
</console>


Now let's stream it back:


<console>
###i## cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive myfirstpool/myfirstDS@testrecv
cannot receive new filesystem stream: destination 'myfirstpool/myfirstDS' exists
must specify -F to overwrite it
</console>


Ouch... ZFS refuses to go any step further because some data would be overwritten. We do not have any critical data on the dataset, so we could destroy it and try again or use a different name; nevertheless, just for the sake of the demonstration, let's create another zpool prior to restoring the dataset there:


<console>
###i## dd if=/dev/zero of=/tmp/zfs-test-disk04.img bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 6.35547 s, 338 MB/s
###i## losetup -f           
/dev/loop4
###i## losetup /dev/loop4 /tmp/zfs-test-disk04.img
###i## zpool create testpool /dev/loop4
###i## zpool list
NAME          SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G  1.81G  6.12G    22%  1.00x  ONLINE  -
testpool    1.98G  89.5K  1.98G    0%  1.00x  ONLINE  -
</console>


Take two:


<console>
###i## cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive testpool/myfirstDS@testrecv
###i## zfs list -t all
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                      1.81G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS            2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
testpool                          3.08M  1.95G    31K  /testpool
testpool/myfirstDS                2.96M  1.95G  2.96M  /testpool/myfirstDS
testpool/myfirstDS@testrecv          0      -  2.96M  -
</console>


Very interesting things happened there! First, the data previously stored in the file ''/tmp/myfirstpool-myfirstDS@snapshot-snap1'' has been copied as a snapshot into the destination zpool (''testpool'' here), under exactly the name given on the command line. Second, a live ZFS dataset (''testpool/myfirstDS'') has been created for you by ZFS from that snapshot, where data can be read and written! Think two seconds about the error message we got just above: the reason ZFS protested becomes clear now.


An alternative would have been to use the original zpool but this time with a different name for the dataset:


<console>
###i## cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive myfirstpool/myfirstDS_copy@testrecv
###i## zfs list -t all                                                                               
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                          1.82G  6.00G  850M  /myfirstpool
myfirstpool/myfirstDS                2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1        1K      - 2.96M  -
myfirstpool/myfirstDS_copy          2.96M  6.00G  2.96M  /myfirstpool/myfirstDS_copy
myfirstpool/myfirstDS_copy@testrecv      0      -  2.96M  -
myfirstpool/mysecondDS              1003M  6.00G  1003M  /myfirstpool/mysecondDS
</console>


Now something a bit more interesting: instead of using a local file, we will stream the dataset to a Solaris 11 machine (OpenIndiana can be used also) over the network, using the GNU flavour of '''netcat''' (''net-analyzer/gnu-netcat'') on TCP port 7000. In that case the Solaris host is an x86 machine, but a SPARC machine would have given the exact same result since ZFS, contrary to UFS, is platform agnostic.


On the Solaris machine:
<console>
###i## nc -l -p 7000 | zfs receive nas/zfs-stream-test@s1
</console>


On the Linux machine:
<console>
###i##  zfs send myfirstpool/myfirstDS@snapshot-1 | netcat -c 192.168.1.13 7000
</console>


{{Fancywarning|The '''nc''' command coming with the ''net-analyzer/netcat'' package '''does not automatically close the network connection when its input stream is closed''' (i.e. when the '''zfs send''' command terminates its job), thus its Solaris counterpart also waits "forever" at the other end of the "pipe". It is not possible to override this behaviour, hence the reason we use the GNU variant (package ''net-analyzer/gnu-netcat'').}}


After the dataset has been received on the Solaris machine, the ''nas'' zpool now contains the sent snapshot and its corresponding clone, the latter being automatically created:<console>
###i## zfs list -t all
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
(...)
nas/zfs-stream-test                          3.02M  6.17T  3.02M  /nas/zfs-stream-test
nas/zfs-stream-test@s1                          0      -  3.02M  -
</console>


A quick look in the ''/nas/zfs-stream-test'' directory on the same Solaris machine gives:


<console>
###i## ls -lR /nas/zfs-stream-test
/nas/zfs-stream-test/:
total 12
-rw-r--r--  1 root    root          13 Mar  3 18:59 hello.txt
drwxr-xr-x  2 root    root        143 Mar  3 18:59 radeon
 
/nas/zfs-stream-test/radeon:
total 6144
-rw-r--r--  1 root    root        8704 Mar  3 18:59 ARUBA_me.bin
-rw-r--r--  1 root    root        8704 Mar  3 18:59 ARUBA_pfp.bin
-rw-r--r--  1 root    root        6144 Mar  3 18:59 ARUBA_rlc.bin
-rw-r--r--  1 root    root      24096 Mar  3 18:59 BARTS_mc.bin
-rw-r--r--  1 root    root        5504 Mar  3 18:59 BARTS_me.bin
-rw-r--r--  1 root    root        4480 Mar  3 18:59 BARTS_pfp.bin
(...)
</console>


The dataset is exactly the same as on the Linux machine!
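
As evoked earlier, nothing prevents you from inserting any stdin/stdout filter in that plumbing. For instance, a compressed backup to a local file could be done like this (a minimal sketch using '''gzip'''; the file name and the ''testpool/myfirstDS_gz'' destination are arbitrary):

<console>
###i## zfs send myfirstpool/myfirstDS@snapshot-1 | gzip > /tmp/myfirstDS-snapshot-1.zfs.gz
###i## zcat /tmp/myfirstDS-snapshot-1.zfs.gz | zfs receive testpool/myfirstDS_gz@snapshot-1
</console>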


{{Fancynote|We took only a simple case here: ZFS is able to handle snapshots in a very flexible way. You can ask, for example, to combine several consecutive snapshots and send them as a single snapshot, or you can choose to proceed in incremental steps. A '''man zfs''' will tell you the art of streaming your snapshots.}}
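
For example, assuming a dataset holding two consecutive snapshots ''@snap1'' and ''@snap2'' (hypothetical names), only the blocks that changed between the two would travel over the wire when the ''-i'' option is used (the receiving side keeps the same '''nc -l -p 7000 | zfs receive''' plumbing as above):

<console>
###i## zfs send -i mypool/mydataset@snap1 mypool/mydataset@snap2 | netcat -c 192.168.1.13 7000
</console>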


== Govern a dataset by attributes ==
In the ZFS world, many aspects are managed simply by setting or clearing a property attached to a ZFS dataset through the now well-known '''zfs''' command. You can, for example (a short sketch follows the list below):
* put a size limit on a dataset
* control if new files are encrypted and/or compressed
* define a quota
* control checksum usage => '''never turn that property off unless you have very good reasons, which you are unlikely to ever have''' (no checksums = no silent data corruption detection)
* share a dataset by NFS/CIFS (Samba)
* control data deduplication
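As a minimal sketch of this property-driven management (the 5G limit is an arbitrary example value, and the output shown is indicative), setting and checking a quota looks like this:

<console>
###i## zfs set quota=5G myfirstpool/myfirstDS
###i## zfs get quota myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE  SOURCE
myfirstpool/myfirstDS  quota     5G     local
</console>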


{{fancynote|For an overview of other kernel options for Funtoo Linux, see [[Funtoo Linux Kernels]]. There may be modules that the Debian kernel doesn't include, a situation where [http://www.funtoo.org/wiki/Funtoo_Linux_Kernels#Using_Debian-Sources_with_Genkernel genkernel] would be useful. Also be sure to see [[:Category:Hardware Compatibility|hardware compatibility]] information.}}
Not all of a dataset's properties are settable: some of them are set and managed by the operating system in the background for you and thus cannot be modified. Like any other action concerning datasets, properties are set and unset via the '''zfs''' command. Let's start by checking the value of '''all''' supported attributes for the dataset ''myfirstpool/myfirstDS'':


=== Installing a Bootloader ===
<console>
###i## zfs get all myfirstpool/myfirstDS
NAME                  PROPERTY              VALUE                  SOURCE
myfirstpool/myfirstDS  type                  filesystem              -
myfirstpool/myfirstDS  creation              Sun Mar  2 15:26 2014  -
myfirstpool/myfirstDS  used                  2.96M                  -
myfirstpool/myfirstDS  available            6.00G                  -
myfirstpool/myfirstDS  referenced            2.96M                  -
myfirstpool/myfirstDS  compressratio        1.00x                  -
myfirstpool/myfirstDS  mounted              yes                    -
myfirstpool/myfirstDS  quota                none                    default
myfirstpool/myfirstDS  reservation          none                    default
myfirstpool/myfirstDS  recordsize            128K                    default
myfirstpool/myfirstDS  mountpoint            /myfirstpool/myfirstDS  default
myfirstpool/myfirstDS  sharenfs              off                    default
myfirstpool/myfirstDS  checksum              on                      default
myfirstpool/myfirstDS  compression          off                    default
myfirstpool/myfirstDS  atime                on                      default
myfirstpool/myfirstDS  devices              on                      default
myfirstpool/myfirstDS  exec                  on                      default
myfirstpool/myfirstDS  setuid                on                      default
myfirstpool/myfirstDS  readonly              off                    default
myfirstpool/myfirstDS  zoned                off                    default
myfirstpool/myfirstDS  snapdir              hidden                  default
myfirstpool/myfirstDS  aclinherit            restricted              default
myfirstpool/myfirstDS  canmount              on                      default
myfirstpool/myfirstDS  xattr                on                      default
myfirstpool/myfirstDS  copies                1                      default
myfirstpool/myfirstDS  version              5                      -
myfirstpool/myfirstDS  utf8only              off                    -
myfirstpool/myfirstDS  normalization        none                    -
myfirstpool/myfirstDS  casesensitivity      sensitive              -
myfirstpool/myfirstDS  vscan                off                    default
myfirstpool/myfirstDS  nbmand                off                    default
myfirstpool/myfirstDS  sharesmb              off                    default
myfirstpool/myfirstDS  refquota              none                    default
myfirstpool/myfirstDS  refreservation        none                    default
myfirstpool/myfirstDS  primarycache          all                    default
myfirstpool/myfirstDS  secondarycache        all                    default
myfirstpool/myfirstDS  usedbysnapshots      1K                      -
myfirstpool/myfirstDS  usedbydataset        2.96M                  -
myfirstpool/myfirstDS  usedbychildren        0                      -
myfirstpool/myfirstDS  usedbyrefreservation  0                      -
myfirstpool/myfirstDS  logbias              latency                default
myfirstpool/myfirstDS  dedup                off                    default
myfirstpool/myfirstDS  mlslabel              none                    default
myfirstpool/myfirstDS  sync                  standard                default
myfirstpool/myfirstDS  refcompressratio      1.00x                  -
myfirstpool/myfirstDS  written              1K                      -
myfirstpool/myfirstDS  snapdev              hidden                  default
</console>


These install instructions show you how to use GRUB to boot using BIOS (old-school) or UEFI (new-school). As of boot-update-1.7.2, now in Portage, the steps are very similar.
{{Fancynote|The manual page of the '''zfs''' command gives a list and description of every attribute supported by a dataset.}}


First, emerge <code>boot-update</code>. This will also cause <code>grub-2</code> and {{c|efibootmgr}} to be merged, since they are dependencies:
Maybe something piqued your curiosity: what does ''SOURCE'' mean? ''SOURCE'' describes how the property has been determined for the dataset and can have several values:
* '''local''': the property has been explicitly set for this dataset
* '''default''': a default value has been assigned by the operating system because the property has not been explicitly set by the system administrator
* '''dash (-)''': immutable property (e.g. dataset creation time, whether the dataset is currently mounted or not...)
 
Of course, you can also query a single property by its name instead of asking for ''all'' properties.
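The ''SOURCE'' column can also be used as a filter. As a small sketch, the '''-s''' switch of '''zfs get''' restricts the listing to properties coming from a given source, for example only those explicitly set on the dataset (the output is omitted here and may well be empty at this stage):

<console>
###i## zfs get -s local all myfirstpool/myfirstDS
</console>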
 
=== Compressing data ===


<console>
(chroot) # ##i##emerge boot-update
</console>

<console>
###i## zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE    SOURCE
myfirstpool/myfirstDS  compression  off      default
</console>


Then, edit <code>/etc/boot.conf</code> using {{c|nano}} and specify "<code>Funtoo Linux genkernel</code>" as the <code>default</code> setting at the top of the file, replacing <code>"Funtoo Linux"</code>.
Let's activate compression on the dataset (notice the change in the ''SOURCE'' column). This is achieved through an attribute simply named ''compression'', which can be changed by running the '''zfs''' command with the '''set''' sub-command followed by the attribute's name (''compression'' here) and its value (''on'' here), like this:


<code>/etc/boot.conf</code> should now look like this:

{{file|name=/etc/boot.conf|body=
boot {
	generate grub
	default "Funtoo Linux genkernel"
	timeout 3
}

"Funtoo Linux" {
	kernel bzImage[-v]
}

"Funtoo Linux genkernel" {
	kernel kernel[-v]
	initrd initramfs[-v]
	params += real_root=auto
}

"Funtoo Linux better-initramfs" {
	kernel vmlinuz[-v]
	initrd /initramfs.cpio.gz
}
}}

If you are booting a custom or non-default kernel, please read <code>man boot.conf</code> for information on the various options available to you.

<console>
###i## zfs set compression=on myfirstpool/myfirstDS
###i## zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE    SOURCE
myfirstpool/myfirstDS  compression  on       local
</console>

The attribute's new value becomes effective immediately; there is no need to unmount and remount anything. ''compression'' set to ''on'' will only affect new data, not what already exists on the dataset. For your information, the lzjb compression algorithm is used when ''compression'' is set to ''on''; you can override this and use another compression algorithm by stating your choice explicitly. For example, if you want to activate LZ4 compression on the dataset:

<console>
###i## zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE    SOURCE
myfirstpool/myfirstDS  compression  off      default
</console>
<console>
###i## zfs set compression=lz4 myfirstpool/myfirstDS
###i## zfs get compression myfirstpool/myfirstDS
NAME                  PROPERTY    VALUE    SOURCE
myfirstpool/myfirstDS  compression  lz4        local
</console>
Assuming ''myfirstpool/myfirstDS'' is empty with no snapshots:
<console>
###i## cp -a /usr/src/linux-3.13.5-gentoo /myfirstpool/myfirstDS
###i## zfs get all myfirstpool/myfirstDS
NAME                  PROPERTY              VALUE                  SOURCE
myfirstpool/myfirstDS  type                  filesystem              -
myfirstpool/myfirstDS  creation              Sun Mar  2 15:26 2014  -
myfirstpool/myfirstDS  used                  584M                    -
myfirstpool/myfirstDS  available            5.43G                  -
myfirstpool/myfirstDS  referenced            584M                    -
myfirstpool/myfirstDS  compressratio        1.96x                  -      <<<< Compression ratio
myfirstpool/myfirstDS  mounted              yes                    -
myfirstpool/myfirstDS  quota                none                    default
myfirstpool/myfirstDS  reservation          none                    default
myfirstpool/myfirstDS  recordsize            128K                    default
myfirstpool/myfirstDS  mountpoint            /myfirstpool/myfirstDS  default
myfirstpool/myfirstDS  sharenfs              off                    default
myfirstpool/myfirstDS  checksum              on                      default
myfirstpool/myfirstDS  compression          on                      local  <<<< LZJB compression active
myfirstpool/myfirstDS  atime                on                      default
myfirstpool/myfirstDS  devices              on                      default
myfirstpool/myfirstDS  exec                  on                      default
myfirstpool/myfirstDS  setuid                on                      default
myfirstpool/myfirstDS  readonly              off                    default
myfirstpool/myfirstDS  zoned                off                    default
myfirstpool/myfirstDS  snapdir              hidden                  default
myfirstpool/myfirstDS  aclinherit            restricted              default
myfirstpool/myfirstDS  canmount              on                      default
myfirstpool/myfirstDS  xattr                on                      default
myfirstpool/myfirstDS  copies                1                      default
myfirstpool/myfirstDS  version              5                      -
myfirstpool/myfirstDS  utf8only              off                    -
myfirstpool/myfirstDS  normalization        none                    -
myfirstpool/myfirstDS  casesensitivity      sensitive              -
myfirstpool/myfirstDS  vscan                off                    default
myfirstpool/myfirstDS  nbmand                off                    default
myfirstpool/myfirstDS  sharesmb              off                    default
myfirstpool/myfirstDS  refquota              none                    default
myfirstpool/myfirstDS  refreservation        none                    default
myfirstpool/myfirstDS  primarycache          all                    default
myfirstpool/myfirstDS  secondarycache        all                    default
myfirstpool/myfirstDS  usedbysnapshots      0                      -
myfirstpool/myfirstDS  usedbydataset        584M                    -
myfirstpool/myfirstDS  usedbychildren        0                      -
myfirstpool/myfirstDS  usedbyrefreservation  0                      -
myfirstpool/myfirstDS  logbias              latency                default
myfirstpool/myfirstDS  dedup                off                    default
myfirstpool/myfirstDS  mlslabel              none                    default
myfirstpool/myfirstDS  sync                  standard                default
myfirstpool/myfirstDS  refcompressratio      1.96x                  -
myfirstpool/myfirstDS  written              584M                    -
myfirstpool/myfirstDS  snapdev              hidden                  default
</console>


"Funtoo Linux better-initramfs" {
Notice the value for ''compressionratio'': it no longer shows 1.00x but a shiny 1.96 here (1.96:1 ratio). We have a high compression ratio here because we copied a lot of source code files but if we put a lot of compressed data (images in jpeg or png format for example) the ratio would have decreased a lot.
kernel vmlinuz[-v]
initrd /initramfs.cpio.gz
}
}}
If you are booting a custom or non-default kernel, please read <code>man boot.conf</code> for information on the various options available to you.
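If you only want to track how well compression is doing without dumping every property, here is a quick sketch using the property names seen in the listing above (output indicative):

<console>
###i## zfs get compressratio,refcompressratio myfirstpool/myfirstDS
NAME                   PROPERTY          VALUE  SOURCE
myfirstpool/myfirstDS  compressratio     1.96x  -
myfirstpool/myfirstDS  refcompressratio  1.96x  -
</console>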


==== Old School (BIOS) MBR ====  
=== Changing the mountpoint ===


When using "old school" BIOS booting, run the following command to install GRUB to your MBR, and generate the {{c|/boot/grub/grub.cfg}} configuration file that GRUB will use for booting:
Let's change the mountpoint of ''myfirstpool/myfirstDS'' to something like ''/mnt/floppy'' instead of ''/myfirstpool/myfirstDS'', purely for demonstration purposes. Changing a dataset's mountpoint is done via its ''mountpoint'' attribute:


<console>
(chroot) # ##i##grub-install --target=i386-pc --no-floppy /dev/sda
(chroot) # ##i##boot-update
</console>

<console>
###i## zfs get mountpoint myfirstpool/myfirstDS
NAME                   PROPERTY    VALUE                   SOURCE
myfirstpool/myfirstDS  mountpoint  /myfirstpool/myfirstDS  default
</console>
<console>
###i## zfs set mountpoint=/mnt/floppy myfirstpool/myfirstDS
###i## zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
myfirstpool            2.38G  5.43G   850M  /myfirstpool
myfirstpool/myfirstDS    584M  5.43G   584M  /mnt/floppy
myfirstpool/mysecondDS  1003M  5.43G  1003M  /myfirstpool/mysecondDS
</console>
<console>
###i## mount | grep floppy
myfirstpool/myfirstDS on /mnt/floppy type zfs (rw,xattr)
</console>


==== New School (UEFI) Boot Entry ====
Notice that the dataset has been automatically unmounted and remounted at the new location for you, and once again the change is effective immediately. If the indicated mountpoint is not empty, ZFS is smart enough to warn you and not mount over it.
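To go back to the original location, a minimal sketch is to clear the locally set value with '''zfs inherit''' so the property falls back to its default (note how ''SOURCE'' reverts to ''default''):

<console>
###i## zfs inherit mountpoint myfirstpool/myfirstDS
###i## zfs get mountpoint myfirstpool/myfirstDS
NAME                   PROPERTY    VALUE                   SOURCE
myfirstpool/myfirstDS  mountpoint  /myfirstpool/myfirstDS  default
</console>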


If you're using "new school" UEFI booting, run one of the following sets of commands, depending on whether you are installing a 64-bit or 32-bit system. This will add GRUB as a UEFI boot entry.
=== Sharing a dataset through NFS ===


For x86-64bit systems:
Now that you are a bit more familiar with ZFS properties, you won't be too surprised to learn that sharing a dataset can also be done by setting one of its properties. You can, of course, go the "traditional" way and edit Samba's or NFS's configuration files by hand, but why hassle with manual editing when ZFS can do that for you? ZFS on Linux has support for both systems.


Next, let's share the ''myfirstpool/myfirstDS'' dataset by NFS to any host within the network 192.168.1.0/24 (read-write access). An important detail here: '''the zfs command will use NFS v4 by default''', so any options related to NFS v4 can be passed on the command line; refer to your NFS server documentation for further information on what is supported and how to use it. To share the dataset by NFS, you must change a property named ''sharenfs'':
<console>
(chroot) # ##i##grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id="Funtoo Linux [GRUB]" --recheck /dev/sda
(chroot) # ##i##boot-update
</console>

<console>
###i## zfs set sharenfs='rw=@192.168.1.0/24' myfirstpool/myfirstDS
</console>


For x86-32bit systems:
What happened? Simple:


<console>
(chroot) # ##i##grub-install --target=i386-efi --efi-directory=/boot --bootloader-id="Funtoo Linux [GRUB]" --recheck /dev/sda
(chroot) # ##i##boot-update
</console>

<console>
###i## zfs get sharenfs myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE               SOURCE
myfirstpool/myfirstDS  sharenfs  rw=@192.168.1.0/24  local
</console>
<console>
###i## cat /etc/dfs/sharetab
/myfirstpool/myfirstDS  -       nfs    rw=@192.168.1.0/24
</console>


==== First Boot, and in the future... ====
{{Fancyimportant|The syntax and behaviour are similar to what is found under Solaris 11: '''zfs share''' reads and updates entries coming from the file ''/etc/dfs/sharetab'' '''(not ''/etc/exports'')'''. This is a Solaris touch: under Solaris 11 the '''zfs''' and '''share''' commands now act on ''/etc/dfs/sharetab'', ''/etc/dfs/dfstab'' being no longer supported.}}


OK -- you are ready to boot!
A check with the '''showmount''' command confirms:


You only need to run <code>grub-install</code> when you first install Funtoo Linux, but you need to re-run <code>boot-update</code> every time you modify your <code>/etc/boot.conf</code> file or add new kernels to your system. This will regenerate {{c|/boot/grub/grub.cfg}} so that you will have new kernels available in your GRUB boot menu, the next time you reboot.
<console>
###i## showmount -e
Export list for .... :
/myfirstpool/myfirstDS 192.168.1.0/24
</console>


=== Configuring your network ===
At this point it should be possible to mount the dataset from another host on the network (here a Solaris 11 machine) and write some data in it:


It's important to ensure that you will be able to connect to your local-area network after you reboot into Funtoo Linux. There are three approaches you can use for configuring your network: NetworkManager, dhcpcd, and the [[Funtoo Linux Networking]] scripts. Here's how to choose which one to use based on the type of network you want to set up.
<console>
###i## mkdir -p /mnt/myfirstDS
###i## mount 192.168.1.19:/myfirstpool/myfirstDS /mnt/myfirstDS
###i##  mount | grep myfirst
/mnt/myfirstDS on 192.168.1.19:/myfirstpool/myfirstDS remote/read/write/setuid/devices/rstchown/xattr/dev=89c0002 on Sun Mar  9 14:28:55 2014
</console>
<console>
###i## cp /kernel/amd64/genunix /mnt/myfirstDS
</console>


==== Wi-Fi ====
''Et voila!'' No sign of protest, so the file has been copied. If we check what the ZFS dataset looks like on the Linux host where it resides, the copied file (a Solaris kernel image here) is present:


<console>
###i## ls -l /myfirstpool/myfirstDS/genunix
-rwxr-xr-x 1 root root 5769456 Mar  9 14:32 /myfirstpool/myfirstDS/genunix
</console>

For laptop/mobile systems where you will be using Wi-Fi, roaming, and connecting to various networks, NetworkManager is strongly recommended.
Since Wi-Fi cards require firmware to operate, it is also recommended that you emerge the linux-firmware ebuild:

{{console|body=(chroot) # ##i##emerge linux-firmware networkmanager
(chroot) # ##i##rc-update add NetworkManager default
}}

The above command will ensure that NetworkManager starts after you boot into Funtoo Linux. Once you've completed these installation steps and have booted into Funtoo Linux, you can use the {{c|addwifi}} command to connect to a Wi-Fi access point:

{{console|body=# ##i##addwifi -S wpa -K 'wifipassword' mywifinetwork}}

For more information about NetworkManager, see the [[Package:NetworkManager|NetworkManager package page]].

$100 question: how do you "unshare" the dataset? Simple: just set ''sharenfs'' to ''off''! Be aware that the NFS server will cease to share the dataset regardless of whether it is still in use by client machines. Any NFS client still having the dataset mounted at that point will encounter RPC errors whenever an I/O operation is attempted on the share (Solaris NFS client here):

<console>
###i## ls /mnt/myfirstDS
NFS compound failed for server 192.168.1.19: error 7 (RPC: Authentication error)
</console>
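As a minimal sketch of the unsharing step just described, run on the Linux host that owns the dataset (output indicative):

<console>
###i## zfs set sharenfs=off myfirstpool/myfirstDS
###i## zfs get sharenfs myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE  SOURCE
myfirstpool/myfirstDS  sharenfs  off    local
</console>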


{{Note|wpa_supplicant is also a good choice for wireless network connections. See the {{package|net-wireless/wpa_supplicant}} package for steps involved in setting up wpa_supplicant.}}
=== Sharing a dataset through Samba/SMB ===


==== Desktop (Wired DHCP) ====
Let's push the limit a bit and use Samba instead of NFS. ZFS relies on [http://www.samba.org Samba] (''net-fs/samba'' on Gentoo/Funtoo) to get the job done, as it does not implement an SMB server of its own. So Samba must be emerged first, making sure that:
* it has built-in ACL support (''acl'' USE flag)
* client tools are built (''client'' USE flag), as ZoL invokes the '''net''' command behind the scenes (i.e. ''net usershare ...'')
* usershares are functional


For a home desktop or workstation with wired Ethernet that will use DHCP, the simplest and most effective option to enable network connectivity is to simply add {{c|dhcpcd}} to the default runlevel:

{{console|body=
(chroot) # ##i##rc-update add dhcpcd default}}

When you reboot, {{c|dhcpcd}} will run in the background and manage all network interfaces and use DHCP to acquire network addresses from a DHCP server.

If your upstream DHCP server is dnsmasq, it can be configured to assign addresses via MAC address to make servers on DHCP feasible.

Quoting the '''zfs''' command's manual page, your Samba server must also be configured like this:

* Samba will need to listen to 'localhost' (127.0.0.1) for the zfs utilities to communicate with samba. This is the default behaviour for most Linux distributions.
* Samba must be able to authenticate a user. This can be done in a number of ways, depending on if using the system password file, LDAP or the Samba specific smbpasswd file. How to do this is outside the scope of this manual. Please refer to the smb.conf(5) manpage for more information.
* See the USERSHARE section of the smb.conf(5) man page for all configuration options in case you need to modify any options to the share afterwards. Do note that any changes done with the 'net' command will be undone if the share is ever unshared (such as at a reboot etc). In the future, ZoL will be able to set specific options directly using sharesmb=<option>.

What you have to know at this point is that, once emerged on your Funtoo box, Samba has no configuration file and thus will refuse to start. You can use the provided example file ''/etc/samba/smb.conf.example'' as a starting point for ''/etc/samba/smb.conf'', just copy it:


==== Server (Static IP) ====
<console>
###i## cd /etc/samba
###i## cp smb.conf.example smb.conf
</console>


For servers, the [[Funtoo Linux Networking]] scripts are recommended. They are optimized for static configurations and things like virtual ethernet bridging for virtualization setups. See [[Funtoo Linux Networking]] for information on how to use Funtoo Linux's template-based network configuration system.
Now create the directory ''/var/lib/samba/usershares'' (it will host the definitions of all usershares). For the purposes of this tutorial, leaving the default permissions (0755) and owner (root:root) untouched is acceptable unless you use ZFS delegation.
<console>
###i## mkdir /var/lib/samba/usershares
</console>


==== Hostname ====
Several important things to know unless you have hours to waste with your friend Google:
By default Funtoo uses "localhost" as hostname. Although the system will work perfectly fine using this name, some ebuilds refuse to install when detecting localhost as hostname. It also may create confusion if several systems use the same hostname. Therefore, it is advised to change it to a more meaningful name. The hostname itself is arbitrary, meaning you can choose almost any combination of characters, as long as it makes sense to the system administrator. To change the hostname, edit
* When you set the ''sharesmb'' property to ''on'', the '''zfs''' command will invoke Samba's ''net'' command behind the scenes to create a usershare (comment and ACL values are both specified). E.g. ''zfs set sharesmb=on myfirstpool/myfirstDS'' => ''net usershare add myfirstpool_myfirstDS /myfirstpool/myfirstDS "Comment:/myfirstpool/myfirstDS" "Everyone:F" guest_ok=n''
* Under which user will the ''net usershare'' command be invoked? Unless ZFS delegation is used, ''root'' will be the owner of the usershare it creates, whose definition is stored in a text file (named after the usershare) located in the directory ''/var/lib/samba/usershares''. Per Samba requirements, there are three very important details about the directory ''/var/lib/samba/usershares'':
** Its owner '''must be ''root'' ''', the group is of secondary importance and left to your discretion
** Its permissions '''must be 1775''' (so owner = rwx, group = rwx, others = r-x with the sticky bit set).
** If the directory is not set up as above, Samba will simply ignore any usershares you define. So if you get errors like BAD_NETWORK_NAME when connecting to a usershare created by ZFS, double check the owner and permissions set on ''/var/lib/samba/usershares'' (or whatever directory you use on your Funtoo box to hold usershare definitions).
* Unless explicitly overridden in ''/etc/samba/smb.conf'':
** ''usershare max shares'' defaults to zero, so '''no usershare can be created'''. If you forget to set a value greater than zero for ''usershare max shares'', any '''zfs set sharesmb=on''' command will complain with the message ''cannot share (...) smb add share failed'' (and any ''net usershare add'' command will show the error message ''net usershare: usershares are currently disabled'').
** ''usershare path = /var/lib/samba/usershares''
** ''usershare owner only'' is set to ''true'' by default, so Samba will refuse the share to any remote user not opening a session as ''root'' on the share


{{console|body=
(chroot) # ##i##nano /etc/conf.d/hostname
}}

Look for the line starting with hostname and change the entry between the quotes. Save the file; on the next boot Funtoo will use the new hostname.

So basically, a super-minimalistic configuration for Samba would be:
<pre>
[global]
  workgroup = MYGROUP
  server string = Samba Server
  security = user
  log file = /var/log/samba/log.%m
  max log size = 50
  # Permits the usershares to be accessed by users other than 'root' from a remote client machine
  usershare owner only = False
  # WARNING: default value for usershare max shares is 0 so No usershares possible...
  usershare max shares = 10


</pre>

{{warning|Do not use special characters in the hostname, as the shell may interpret these, leading to unpredictable results. Use the Latin alphabet: a-z, A-Z, 0-9}}
{{tip|Use short hostnames (up to 8 or 10 characters) to prevent the terminal screen from being filled with the hostname, leaving little space for the command itself. This becomes particularly relevant when typing long command strings in various programming languages like Bash, Python, SQL and Perl.}}
{{Fancywarning|This configuration is obviously for the sake of demonstration purposes within the scope of this tutorial, '''do not use it for the real world!'''}}
 
At this point, reload or restart Samba if you have altered ''/etc/samba/smb.conf''. Now that usershares are possible, let's share a ZFS dataset over Samba:


=== Finishing Steps ===
==== Set your root password ====
It's imperative that you set your root password before rebooting so that you can log in.
<console>
(chroot) # ##i##passwd
</console>

<console>
###i## zfs set sharesmb=on myfirstpool/myfirstDS
###i## zfs get sharesmb myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE  SOURCE
myfirstpool/myfirstDS  sharesmb  on     local
</console>


===Restart your system ===  
The command must return without any error message; '''if you get something like ''"cannot share myfirstpool/myfirstDS smb add share failed"'' then usershares are not functional on your machine (see the notes just above)'''. Now a Samba usershare named after the zpool and dataset names should exist:
 
<console>
###i## net usershare list
myfirstpool_myfirstDS
</console>
<console>
###i## net usershare info myfirstpool_myfirstDS
[myfirstpool_myfirstDS]
path=/myfirstpool/myfirstDS
comment=Comment: /myfirstpool/myfirstDS
usershare_acl=Everyone:F,
guest_ok=n
</console>
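Conversely, a minimal sketch for un-sharing the dataset over SMB is to flip the same property back to ''off''; the corresponding usershare should then disappear from the '''net usershare list''' output:

<console>
###i## zfs set sharesmb=off myfirstpool/myfirstDS
###i## net usershare list
</console>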


Now is the time to leave chroot, to unmount Funtoo Linux partitions and files and to restart your computer. When you restart, the GRUB boot loader will start, load the Linux kernel and initramfs, and your system will begin booting.
So far so good! Now let's try this on the machine itself:


Leave the chroot, change directory to /mnt, unmount your Funtoo partitions, and reboot.
<console>
(chroot) # ##i##exit
# ##i##cd /mnt
# ##i##umount -lR funtoo
# ##i##reboot
</console>


{{fancynote|System Rescue CD will gracefully unmount your new Funtoo filesystems as part of its normal shutdown sequence.}}
= Data redundancy with ZFS =
 
Nothing is perfect, and storage media (even in datacenter-class equipment) are prone to failure on a regular basis. Having data redundancy is mandatory to help prevent single points of failure (SPoF). Over the past decades, RAID technologies have been powerful, however their power is precisely their weakness: because they operate at the block level, they do not care about what is stored in the data blocks and have no way to interact with the filesystems stored on them to ensure data integrity is properly handled.
 
== Some statistics ==
 
It is no secret that a general trend in the IT industry is the exponential growth of data quantities. Just think about the amount of data Youtube, Google or Facebook generate every day; taking the case of the first, [http://www.website-monitoring.com/blog/2010/05/17/youtube-facts-and-figures-history-statistics some statistics] give:
* 24 hours of video was generated every ''minute'' in March 2010 (May 2009 - 20h / October 2008 - 15h / May 2008 - 13h)
* More than 2 ''billion'' views a day
* More video is produced on Youtube every 60 days than the 3 major US broadcasting networks did in the last 60 years
 
Facebook is also impressive (Facebook's own stats):
 
* over 900 million objects that people interact with (pages, groups, events and community pages)
* The average user creates 90 pieces of content each month (750 million active users)
* More than 2.5 million websites have integrated with Facebook
 
What is true for Facebook and Youtube is also true in many other cases (think for one minute about the amount of data stored in iTunes), especially with the growing popularity of cloud computing infrastructures. Despite the progress of the technology, a "bottleneck" still exists: storage reliability has stayed nearly the same over the years. If only one organization in the world generated huge quantities of data, it would be [http://public.web.cern.ch CERN] (''Conseil Européen pour la Recherche Nucléaire'', now officially known as the ''European Organization for Nuclear Research''), as their experiments can generate spikes of many terabytes of data within a few seconds. A study done in 2007, quoted by a [http://www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191 ZDNet article], reveals that:
 
* Even ECC memory cannot always be helpful: 3 double-bit errors (uncorrectable) occurred in 3 months on 1300 nodes. Bad news: it should be '''zero'''.
* RAID systems cannot protect in all cases: monitoring 492 RAID controllers for 4 weeks showed an average error rate of 1 per ~10^14 bits, giving roughly 300 errors for every 2.4 petabytes
* Magnetic storage is still not reliable, even on high-end datacenter-class drives: 500 errors were found over 100 nodes while writing a 2 GB file to 3000+ nodes every 2 hours, then reading it again and again for 5 weeks.
 
Overall this means: 22 corrupted files (1 in every 1500 files) for a grand total of 33700 files holding 8.7TB of data. And this study is 5 years old...
 
== Source of silent data corruption ==
 
http://www.zdnet.com/blog/storage/50-ways-to-lose-your-data/168
 
Not an exhaustive list but we can quote:
 
* Cheap controllers or buggy drivers that do not report errors/pre-failure conditions to the operating system;
* "bit-leaking": a hard drive consists of many concentric magnetic tracks. When the hard drive's magnetic head writes bits on the magnetic surface it generates a very weak magnetic field, which is nevertheless sufficient to "leak" onto the next track and change some bits. Drives can generally compensate for those situations because they also record some error correction data on the magnetic surface
* magnetic surface defects (weak sectors)
* hard drive firmware bugs
* cosmic rays hitting your RAM chips or hard drive cache memory/electronics
 
== Building a mirrored pool ==
 
 
== ZFS RAID-Z ==
 
=== ZFS/RAID-Z vs RAID-5 ===
 
RAID-5 is very commonly used nowadays because of its simplicity, efficiency and fault-tolerance. Although the technology has proven itself over decades, it has a major drawback known as "the RAID-5 write hole". If you are familiar with RAID-5 you already know that it consists of spreading stripes across all of the disks within the array and interleaving them with a special stripe called the parity. Several schemes for spreading stripes/parity between disks exist, each one with its own pros and cons, however the "standard" one (also known as ''left-asynchronous'') is:
 
<pre>
Disk_0  | Disk_1  | Disk_2  | Disk_3
[D0_S0] | [D0_S1] | [D0_S2] | [D0_P]
[D1_S0] | [D1_S1] | [D1_P]  | [D1_S2]
[D2_S0] | [D2_P]  | [D2_S1] | [D2_S2]
[D3_P]  | [D3_S0] | [D3_S1] | [D3_S2]
</pre>
 
The parity is simply computed by XORing the stripes of the same "row", thus giving the general equation:
* [Dn_S0] XOR [Dn_S1] XOR ... XOR [Dn_Sm] XOR [Dn_P] = 0
This equation can be rewritten in several ways:
* [Dn_S0] XOR [Dn_S1] XOR ... XOR [Dn_Sm] = [Dn_P]
* [Dn_S1] XOR [Dn_S2] XOR ... XOR [Dn_Sm] XOR [Dn_P] = [Dn_S0]
* [Dn_S0] XOR [Dn_S2] XOR ... XOR [Dn_Sm] XOR [Dn_P] = [Dn_S1]
* ...and so on!
 
Because the equations are combinations of exclusive-ors, it is possible to easily compute a missing term. Let's say we have 3 stripes plus one parity, composed of 4 bits each, but one of them is missing due to a disk failure:
 
* D0_S0 = 1011
* D0_S1 = 0010
* D0_S2 = <missing>
* D0_P  = 0110
 
However we know that:
* D0_S0 XOR D0_S1 XOR D0_S2 XOR D0_P = 0000, which can be rewritten as:
* D0_S2 = D0_S0 XOR D0_S1 XOR D0_P
 
Applying boolean algebra it gives:''' D0_S2 = 1011 XOR 0010 XOR 0110 = 1111'''.
Proof: '''1011 XOR 0010 XOR 1111 = 0110''' this is the same as '''D0_P'''
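A quick, purely illustrative way to double-check this XOR arithmetic from a shell, using bash arithmetic expansion and '''bc''' for the binary display:

<console>
###i## echo "obase=2; $(( 2#1011 ^ 2#0010 ^ 2#0110 ))" | bc
1111
</console>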


You should now see your system reboot, the GRUB boot loader appear for a few seconds, and then see the Linux kernel and initramfs loading. After this, you should see Funtoo Linux itself start to boot, and you should be greeted with a <code>login:</code> prompt. Funtoo Linux has been successfully installed!
'''So what's the deal?'''
Okay, now the funny part: forget the above hypothesis and imagine we have this:


=== Profiles ===  
* D0_S0 = 1011
* D0_S1 = 0010
* D0_S2 = 1101
* D0_P  = 0110


Once you have rebooted into Funtoo Linux, you can further customize your system to your needs by using [[Funtoo Profiles]]. A quick introduction to profiles is included below -- consult the [[Funtoo Profiles]] page for more detailed information. There are five basic profile types: arch, build, subarch, flavors and mix-ins:
Applying boolean algebra magic gives 1011 XOR 0010 XOR 1101 => 0100. Problem: this is different from D0_P (0110). Can you tell which one (or which ONES) of the four terms lies? If you find a mathematically acceptable solution, go found your own company, because you will have just solved a big computer science problem. If humans can't solve the question, imagine how hard it is for the poor little RAID-5 controller to determine which stripe is right and which one lies, and the resulting "datageddon" (i.e. massive data corruption on the RAID-5 array) when the RAID-5 controller detects the error and starts to rebuild the array.


This is not science fiction, this is pure reality, and the weakness lies in RAID-5's very simplicity. Here is how it can happen: an urban legend about RAID-5 arrays is that they update stripes in an atomic transaction (all of the stripes+parity are written, or none of them). Too bad, this is just not true: the data is written on the fly, and if for one reason or another the machine hosting the RAID-5 array has a power outage or crash, the RAID-5 controller will simply have no idea what it was doing, which stripes are up to date and which ones are not. Of course, RAID controllers in servers do have a replaceable on-board battery, and most of the time the server they reside in is connected to an auxiliary source like a battery-based UPS or a diesel/gas electricity generator. However, Murphy's law or unpredictable hazards can, sometimes, strike...
{{TableStart}}
{{2ColHead|Sub-Profile Type|Description}}
{{2Col|{{c|arch}}|Typically {{c|x86-32bit}} or {{c|x86-64bit}}, this defines the processor type and support of your system. This is defined when your stage was built and should not be changed.}}
{{2Col|{{c|build}}|Defines whether your system is a {{c|current}}, {{c|stable}} or {{c|experimental}} build. {{c|current}} systems will have newer packages unmasked than {{c|stable}} systems. This is defined when your stage is built and is typically not changed.}}
{{2Col|{{c|subarch}}|Defines CPU optimizations for your system. The subarch is set at the time the stage3 is built, but can be changed later to better settings if necessary. Be sure to pick a setting that is compatible with your CPU.}}
{{2Col|{{c|flavor}}|Defines the general type of system, such as {{c|server}} or {{c|desktop}}, and will set default USE flags appropriate for your needs.}}
{{2Col|{{c|mix-ins}}|Defines various optional settings that you may be interested in enabling.}}
{{TableEnd}}


One arch, build and flavor must be set for each Funtoo Linux system, while mix-ins are optional and you can enable more than one if desired. Often, flavors and mix-ins inherit settings from other sub-profiles. Use {{c|epro show}} to view your current profile settings, in addition to any inheritance information:
{{console|body=
(chroot) # ##i## epro show

=== ##g##Enabled Profiles##!g##: ===

        arch: ##c## x86-64bit
      build: ##c## current
    subarch: ##c## intel64-haswell
      flavor: ##c## desktop
    mix-ins: ##c## gnome

=== ##g##All inherited flavors from desktop flavor##!g##: ===

                    ##c##workstation##!c## (from desktop flavor)
                            ##c##core##!c## (from workstation flavor)
                        ##c##minimal##!c## (from core flavor)

=== ##g##All inherited mix-ins from desktop flavor##!g##: ===

                              ##c##X##!c## (from workstation flavor)
                          ##c##audio##!c## (from workstation flavor)
                            ##c##dvd##!c## (from workstation flavor)
                          ##c##media##!c## (from workstation flavor)
      ##c##mediadevice-audio-consumer##!c## (from media mix-in)
                ##c##mediadevice-base##!c## (from mediadevice-audio-consumer mix-in)
      ##c##mediadevice-video-consumer##!c## (from media mix-in)
                ##c##mediadevice-base##!c## (from mediadevice-video-consumer mix-in)
        ##c##mediaformat-audio-common##!c## (from media mix-in)
          ##c##mediaformat-gfx-common##!c## (from media mix-in)
        ##c##mediaformat-video-common##!c## (from media mix-in)
                  ##c##console-extras##!c## (from workstation flavor)
                          ##c##print##!c## (from desktop flavor)
}}

Another funny scenario: imagine a machine with a RAID-5 array (on a UPS this time) but with non-ECC memory. The RAID-5 controller splits the data buffer into stripes, computes the parity stripe and starts to write them to the different disks of the array. But... for some odd reason, a single bit in one of the stripes flips (cosmic rays, RFI...) after the parity calculation. Too bad, too sad: one of the written stripes contains corrupted data and it is silently written to the array. Datageddon in sight!

Not to freak you out: storage units have sophisticated error correction capabilities (a magnetic or optical recording surface is not perfect and read/write errors do occur) masking most of the cases. However, some established statistics estimate that, even with error correction mechanisms, one bit out of every 10^16 bits transferred is incorrect. 10^16 is really huge, but unfortunately, in this beginning of the XXIst century, with datacenters brewing massive amounts of data on several hundreds if not thousands of servers, this number starts to give headaches: '''a big datacenter can face silent data corruption every 15 minutes''' (Wikipedia). No typo here: a potential disaster may silently appear 4 times an hour, every single day of the year. Detection techniques exist, but traditional RAID-5 arrays in themselves can be a problem. Ironic for such a popular and widely used solution :)

If RAID-5 was an acceptable trade-off in past decades, it has simply had its day. RAID-5 is dead? '''*Hooray!*'''

= More advanced topics =

== Z-Volumes (ZVOLs) ==

== ZFS Intent Log (ZIL) ==

== Permission delegation ==

ZFS brings a feature known as delegated administration. Delegated administration enables ordinary users to handle administrative tasks on a dataset without being administrators. '''It is however not a sudo replacement, as it covers only ZFS-related tasks''' such as sharing/unsharing, disk quota management and so on. Permission delegation shines in flexibility because such delegation can be handled by inheritance through nested datasets. Permission delegation is handled via '''zfs''' through its '''allow''' and '''unallow''' sub-commands.
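As a minimal sketch (the user name ''alice'' and the chosen permissions are arbitrary examples, and the displayed output is indicative), granting a user the right to snapshot and mount a dataset, then reviewing the delegation, could look like this:

<console>
###i## zfs allow alice snapshot,mount myfirstpool/myfirstDS
###i## zfs allow myfirstpool/myfirstDS
---- Permissions on myfirstpool/myfirstDS ----------------------
Local+Descendent permissions:
        user alice mount,snapshot
</console>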
Here are some basic examples of {{c|epro}} usage:
{{TableStart}}
{{2ColHead|Description|Command}}
{{2Col|View available profiles. Enabled profiles will be highlighted in cyan. Directly enabled profiles will be in bold and have a {{c|*}} appended.|{{console|body=(chroot) # ##i##epro list}}}}
{{2Col|Change the system flavor.|{{console|body=(chroot) # ##i##epro flavor desktop}}}}
{{2Col|Add a mix-in.|{{console|body=(chroot) # ##i##epro mix-in +gnome}}}}
{{TableEnd}}


===Next Steps===  
= Final words and lessons learned =


If you are brand new to Funtoo Linux and Gentoo Linux, please check out [[Funtoo Linux First Steps]], which will help get you acquainted with your new system. We also have a category for our [[:Category:Official Documentation|official documentation]], which includes all docs that we officially maintain for installation and operation of Funtoo Linux.
ZFS on Linux, while still in development, showed strong capabilities and supported many of the features found in the Solaris/OpenIndiana implementation. It also seems to be very stable, as no crashes or kernel oopses happened while writing this tutorial. Funtoo does not officially support installation over ZFS datasets, however you can always read the [[ZFS Install Guide]] to have a Funtoo box relying on ZFS!


We also have a number of pages dedicated to setting up your system. See [[:Category:First Steps|First Steps]] for a list of these pages.
= Footnotes & references =
Source: [http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/index.html solaris-zfs-administration-guide]
[[Category:Labs]]
[[Category:Articles]]
[[Category:Filesystems]]


If your system did not boot correctly, see [[Installation Troubleshooting]] for steps you can take to resolve the problem.
<references/>
[[Category:HOWTO]]
[[Category:Install]]
[[Category:Official Documentation]]

ZFS features and limitations

In short, ZFS has the following features (not exhaustive):

  • Storage pool dividable in one or more logical storage entities.
  • Plenty of space:
    • 256 zettabytes per storage pool (2^64 storages pools max in a system).
    • 16 exabytes max for a single file
    • 2^48 entries max per directory
  • Virtual block-devices support support over a ZFS pool (zvol) - (extremely cool when jointly used over a RAID-Z volume)
  • Read-only Snapshot support (it is possible to get a read-write copy of them, those are named clones)
  • Encryption support (supported only at ZFS version 30 and upper, ZFS version 31 is shipped with Oracle Solaris 11 so that version is mandatory if you plan to encrypt your ZFS datasets/pools)
  • Built-in RAID-5-like-over-steroid capabilities known as RAID-Z and RAID-6-like-over-steroid capabilities known as RAID-Z2. RAID-Z3 (triple parity) also exists.
  • Copy-on-Write transactional filesystem
  • Meta-attributes support (properties) allowing you to you easily drive the show like "That directory is encrypted", "that directory is limited to 5GiB", "That directory is exported via NFS" and so on. Depending on what you define, ZFS do the job for you!
  • Dynamic striping to optimize data throughput
  • Variable block length
  • Data deduplication
  • Automatic pool re-silvering
  • Transparent data compression
  • Transparent encryption (Solaris 11 and later only)

Most notable limitations are:

  • Lack of a feature ZFS developers know as "Block Pointer rewrite functionality" (planned to be developed); without it ZFS currently cannot do:
    • Pool defragmentation (the COW techniques used in ZFS mitigate the problem)
    • Pool resizing
    • Data compression (re-applying)
    • Adding an additional device to a RAID-Z/Z2/Z3 pool to increase its size (however, it is possible to replace, in sequence, each one of the disks composing a RAID-Z/Z2/Z3)
  • NOT A CLUSTERED FILESYSTEM like Lustre, GFS or OCFS2
  • No data healing if used on a single device (corruption can still be detected); the workaround is to force data duplication on the drive (a sketch follows this list)
  • No support for TRIM (SSD devices)
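Regarding the single-device case mentioned in the list above, a minimal sketch of the data-duplication workaround is the copies property (pool and dataset names are placeholders); each block is then stored twice, at the cost of disk space:

root # zfs set copies=2 mypool/mydataset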

ZFS on well known operating systems

Linux

Although the source code of ZFS is open, its license (Sun CDDL) is incompatible with the license governing the Linux kernel (GNU GPL v2), thus preventing its direct integration. However, a couple of ports exist, though they suffer from maturity and feature gaps. As of writing (February 2014) two known implementations exist:

  • ZFS-fuse: a totally userland implementation relying on FUSE. This implementation can now be considered defunct (as of February 2014). The original site of ZFS FUSE seems to have disappeared; nevertheless the source code is still available on http://freecode.com/projects/zfs-fuse. ZFS FUSE stalled at version 0.7.0 in 2011 and never really evolved since then.
  • ZFS on Linux: a kernel-mode implementation of ZFS which supports a lot of ZFS features. The implementation is not as complete as it is under Solaris and its siblings like OpenIndiana (e.g. SMB integration is still missing, no encryption support...) but a lot of functionality is there. This is the implementation used for this article. As ZFS on Linux is an out-of-tree Linux kernel implementation, patches must be waited for after each Linux kernel release. ZfsOnLinux currently supports zpools version 28 and, since its version 0.6.2, is considered ready for production.

Solaris/OpenIndiana

  • Oracle Solaris: remains the de facto reference platform for ZFS implementation: ZFS on this platform is now considered mature and usable on production systems. Solaris 11 uses ZFS even for its "system" pool (aka rpool). A great advantage of this: it is now quite easy to revert the effects of a patch, on the condition that a snapshot was taken just before applying it. In the good old times of Solaris 10 and before, reverting a patch was possible but could be tricky and complex when possible at all. ZFS is far from new in Solaris: it takes its roots in 2005 and was then integrated in Solaris 10 6/06, introduced in June 2006.
  • OpenIndiana: is based on the Illumos kernel (a derivative of the now defunct OpenSolaris) which aims to provide absolute binary compatibility with Sun/Oracle Solaris. Worth mentioning: the Solaris kernel and the Illumos kernel both shared the same code base; however, they now follow different paths since Oracle announced the discontinuation of OpenSolaris (August 13th 2010). Like Oracle Solaris, OpenIndiana uses ZFS for its system pool. The Illumos kernel's ZFS support lags a bit behind Oracle: it supports zpool version 28, whereas Oracle Solaris 11 has zpool version 31 support, data encryption being supported at zpool version 30.

*BSD

  • FreeBSD: ZFS has been present in FreeBSD since FreeBSD 7 (zpool version 6) and FreeBSD can boot on a ZFS volume (zfsboot). ZFS support was vastly enhanced in FreeBSD 8.x (8.2 supports zpool version 15, 8.3 supports version 28), FreeBSD 9 and FreeBSD 10 (both support zpool version 28). ZFS in FreeBSD is now considered fully functional and mature. FreeBSD derivatives such as the popular FreeNAS take benefit of ZFS and integrate it into their tools. The latter has, for example, support for zvols through its Web management interface (FreeNAS >= 8.0.1).
  • NetBSD: porting of ZFS was started as a GSoC project in 2007 and ZFS has been present in the NetBSD mainline since 2009 (zpool version 13).
  • OpenBSD: no ZFS support yet, and none planned until Oracle changes some policies, according to the project FAQ.

ZFS alternatives

  • WAFL seems to have severe limitations [1] (document is not dated); an interesting article also lies here
  • BTRFS is advancing every week, but it still lacks features like the capability of emulating a virtual block device over a storage pool (zvol), and built-in support for RAID-5/6 is not complete yet (cf. Btrfs mailing list). At the date of writing, it is still experimental, whereas ZFS is used on big production servers.
  • VxFS has also been targeted by comparisons like this one (a bit controversial). VxFS has been known in the industry since 1993 and is known for its legendary flexibility. Symantec acquired VxFS and proposes a basic version of it (no clustering, for example) under the name Veritas Storage Foundation Basic.
  • An interesting discussion about modern filesystems can be found on OSNews.com

ZFS vs BTRFS at a glance

Some key features in no particular order of importance between ZFS and BTRFS:

  • Transactional filesystem: ZFS YES, BTRFS YES.
  • Journaling: ZFS NO, BTRFS YES. Not a design flaw: ZFS is robust by design (see page 7 of "ZFS The last word on filesystems").
  • Dividable pool of data storage: ZFS YES, BTRFS YES.
  • Read-only snapshot support: ZFS YES, BTRFS YES.
  • Writable snapshot support: ZFS YES, BTRFS YES.
  • Sending/receiving a snapshot over the network: ZFS YES, BTRFS YES.
  • Rollback capabilities: ZFS YES, BTRFS YES. While ZFS knows where and how to roll back the data (on-line), BTRFS requires a bit more work from the system administrator (off-line).
  • Virtual block-device emulation: ZFS YES, BTRFS NO.
  • Data deduplication: ZFS YES, BTRFS YES. Built-in in ZFS, third-party tool (bedup) in BTRFS.
  • Data block reoptimization: ZFS NO, BTRFS YES. ZFS is missing a "block pointer rewrite functionality" (true of all known implementations so far); not a major performance problem, however. BTRFS can do on-line data defragmentation.
  • Built-in data redundancy support: ZFS YES, BTRFS YES. ZFS has a sort of RAID-5/6 capability (but better: RAID-Z{1,2,3}); BTRFS only fully supports data mirroring at this point, and some work remains to be done on parity handling.
  • Management by attributes: ZFS YES, BTRFS NO. Nearly everything touching ZFS management is related to attribute manipulation (quotas, sharing over NFS, encryption, compression...); BTRFS also retains the concept but it is less aggressively used.
  • Production quality code: ZFS NO, BTRFS NO. ZFS support in Linux is not considered production quality (yet), although it is very robust. Several operating systems like Solaris/OpenIndiana have a production-quality implementation; Solaris/OpenIndiana is now installed on ZFS datasets by default.
  • Integrated within the Linux kernel tree: ZFS NO, BTRFS YES. ZFS is released under the CDDL license...

ZFS resource naming restrictions

Before going further, you must be aware of restrictions concerning the names you can use on a ZFS filesystem. The general rule is: you can use all of the alphanumeric characters, plus the following special characters:

  • Underscore (_)
  • Hyphen (-)
  • Colon (:)
  • Period (.)

The name used to designate a ZFS pool has no particular restriction except:

  • it can't use one of the following reserved words:
    • mirror
    • raidz (raidz2, raidz3 and so on)
    • spare
    • cache
    • log
  • names must begin with an alphanumeric character (same for ZFS datasets).

Some ZFS concepts

Once again with no particular order of importance:

  • zpool: a group of one or many physical storage media (hard drive partition, file...). A zpool has to be divided into at least one ZFS dataset or at least one zvol to hold any data. Several zpools can coexist in a system on the condition that they each hold a unique name. Also note that zpools can never be mounted; the only things that can be are the ZFS datasets they hold. Counterparts: volume group (VG) in LVM, BTRFS volumes.
  • dataset: a logical subdivision of a zpool, mounted in your host's VFS, where your files and directories reside. Several ZFS datasets can coexist in a single system on the condition that they each own a unique name within their zpool. Counterparts: logical volumes (LV) in LVM formatted with a filesystem like ext3, BTRFS subvolumes.
  • snapshot: a read-only photo of a ZFS dataset's state as taken at a precise moment in time. ZFS has no way to cooperate on its own with applications that read and write data on ZFS datasets; if those still hold unflushed data at the moment the snapshot is taken, only what has been flushed will be included in the snapshot. Worth mentioning: snapshots do not take disk space aside from some metadata at the exact time they are created; their size will grow as more and more data blocks (i.e. files) are deleted or changed on their corresponding live ZFS dataset. Counterparts: no direct equivalent in LVM, BTRFS read-only snapshots.
  • clone: a writable copy of a snapshot. Counterparts: LVM snapshots, BTRFS snapshots.
  • zvol: an emulated block device whose data is held behind the scenes in the zpool the zvol was created in. No known equivalent, even in BTRFS.
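To make the zvol concept concrete, here is a minimal sketch (names are placeholders) of creating a 1 GB zvol and locating the block device that ZFS on Linux exposes for it:

root # zfs create -V 1G myfirstpool/myfirstzvol
root # ls -l /dev/zvol/myfirstpool/myfirstzvol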

Your first contact with ZFS

Requirements

  • ZFS userland tools installed (package sys-fs/zfs)
  • ZFS kernel modules built and installed (package sys-fs/zfs-kmod); there is a known issue with the kernel 3.13 series, see this thread on Funtoo's forum
  • A disk size of 64 MB as a bare minimum (128 MB is the minimum size of a pool). Multiple disks will be simulated through the use of several raw images accessed via the Linux loopback devices.
  • At least 512 MB of RAM

Preparing

Once you have emerged sys-fs/zfs and sys-fs/zfs-kmod, you have two options to start using ZFS at this point:

  • either you start /etc/init.d/zfs (it will load all of the zfs kernel modules for you, plus a couple of other things)
  • or you load the zfs kernel modules by hand

So:

root # rc-service zfs start

Or:

root # modprobe zfs
root # lsmod | grep zfs
zfs                   874072  0 
zunicode              328120  1 zfs
zavl                   12997  1 zfs
zcommon                35739  1 zfs
znvpair                48570  2 zfs,zcommon
spl                    58011  5 zfs,zavl,zunicode,zcommon,znvpair

Your first ZFS pool

To start with, four raw disks (2 GB each) are created:

root # for i in 0 1 2 3; do dd if=/dev/zero of=/tmp/zfs-test-disk0${i}.img bs=2G count=1; done
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 40.3722 s, 53.2 MB/s
...

Then let's see what loopback devices are in use and which is the first free:

root # losetup -a
root # losetup -f
/dev/loop0

In the above example nothing is in use and the first available loopback device is /dev/loop0. Now associate each of the disk images with a loopback device (/tmp/zfs-test-disk00.img -> /dev/loop0, /tmp/zfs-test-disk01.img -> /dev/loop1 and so on):

root # for i in 0 1 2 3; do losetup /dev/loop${i} /tmp/zfs-test-disk0${i}.img; done
root # losetup -a
/dev/loop0: [000c]:781455 (/tmp/zfs-test-disk00.img)
/dev/loop1: [000c]:806903 (/tmp/zfs-test-disk01.img)
/dev/loop2: [000c]:807274 (/tmp/zfs-test-disk02.img)
/dev/loop3: [000c]:781298 (/tmp/zfs-test-disk03.img)
   Note

ZFS literature often names zpools "tank"; this is not a requirement, you can use whatever name you choose (as we did here...)

Every story in ZFS starts with the very first ZFS-related command you will be in touch with: zpool. As you might have guessed, zpool manages all ZFS aspects in connection with the physical devices underlying your ZFS storage spaces, and the very first task is to use this command to make what is called a pool (if you have used LVM before, volume groups can be seen as a counterpart). Basically, what you will do here is tell ZFS to take a collection of physical storage items, which can take several forms like a hard drive partition, a USB key partition or even a file, and consider all of them as a single pool of storage (we will subdivide it in the following paragraphs). No black magic here: ZFS writes some metadata on them behind the scenes so it can track which physical device belongs to which pool of storage.

root # zpool create myfirstpool /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

And... nothing! Nada! The command silently returned, but it did do something; the next section will explain what.

Your first ZFS dataset

root # zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G   130K  7.94G     0%  1.00x  ONLINE  -

What does this mean? Several things: first, your zpool is here and has a size of roughly 8 GB minus some space eaten by metadata. Second, it is actually usable because the HEALTH column says ONLINE. The other columns are not meaningful for us at the moment, just ignore them. If you want more details you can use the zpool command like this:

root # zpool status
  pool: myfirstpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myfirstpool  ONLINE       0     0     0
          loop0     ONLINE       0     0     0
          loop1     ONLINE       0     0     0
          loop2     ONLINE       0     0     0
          loop3     ONLINE       0     0     0

Information is quite intuitive: your pool is seen as being usable (state is similar to HEALTH) and is composed of several devices, each one listed as being in a healthy state... at least for now, because they will be deliberately damaged for demonstration purposes in a later section. For your information, the READ, WRITE and CKSUM columns list the number of operation failures on each of the devices, respectively (a small sketch follows the list):

  • READ for read failures. Having a non-zero value is not a good sign... the device is flaky and will soon fail.
  • WRITE for write failures. Having a non-zero value is not a good sign... the device is flaky and will soon fail.
  • CKSUM for mismatches between the checksum of the data at the time it was written and the checksum recomputed when the data is read again (yes, ZFS uses checksums in an aggressive manner). Having a non-zero value is not a good sign... corruption happened; ZFS will do its best to recover the data on its own, but this is definitely not the sign of a healthy system.
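
As a side note, should these counters ever show transient errors that you have investigated and consider harmless (a loose cable, for instance), they can be reset to zero. This is only a sketch reusing the pool name of this tutorial; zpool clear also accepts an optional device argument to reset a single device instead of the whole pool:

root # zpool clear myfirstpool
root # zpool clear myfirstpool loop1   # only reset the counters of a single device of the pool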

Cool! So far so good: you have a brand new 8 GB of usable storage space on your system. Has it been mounted somewhere?

root # mount | grep myfirstpool
myfirstpool on /myfirstpool type zfs (rw,xattr)

Remember the table in the section above? A zpool in itself can never be mounted, never ever. It is just a container where ZFS datasets are created and then mounted. So what happened here? Obscure black magic? No, of course not! Indeed, a ZFS dataset named after the zpool should have been created automatically for us and then mounted. Is this true? We will check shortly. For the moment, be introduced to the second command you will deal with when using ZFS: zfs. While the zpool command is used for anything related to zpools, the zfs command is used for anything related to ZFS datasets (a ZFS dataset always resides in a zpool, no exception to that).

   Note

zfs and zpool are the only two commands you will need to remember when dealing with ZFS.

So how can we check what ZFS datasets are currently known to the system? As you might have already guessed, like this:

root # zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool   114K  7.81G    30K  /myfirstpool

Ta-da! The mystery is busted! The zfs command tells us that not only has a ZFS dataset named myfirstpool been created, but it has also been mounted in the system's VFS for us. If you check with the df command, you should also see something like this:

root # df -h
Filesystem      Size  Used Avail Use% Mounted on
(...)
myfirstpool     7.9G     0  7.9G   0% /myfirstpool

The $100 question: "what to do with this brand new ZFS dataset mounted at /myfirstpool?". Copy some files onto it, of course! We used a Linux kernel source tree, but you can of course use whatever you want:

root # cp -a /usr/src/linux-3.13.5-gentoo /myfirstpool
root # ln -s /myfirstpool/linux-3.13.5-gentoo /myfirstpool/linux
root # ls -lR /myfirstpool
/myfirstpool:
total 3
lrwxrwxrwx  1 root root 32 Mar  2 14:02 linux -> /myfirstpool/linux-3.13.5-gentoo
drwxr-xr-x 25 root root 50 Feb 27 20:35 linux-3.13.5-gentoo

/myfirstpool/linux-3.13.5-gentoo:
total 31689
-rw-r--r--   1 root root    18693 Jan 19 21:40 COPYING
-rw-r--r--   1 root root    95579 Jan 19 21:40 CREDITS
drwxr-xr-x 104 root root      250 Feb 26 07:39 Documentation
-rw-r--r--   1 root root     2536 Jan 19 21:40 Kbuild
-rw-r--r--   1 root root      277 Feb 26 07:39 Kconfig
-rw-r--r--   1 root root   268770 Jan 19 21:40 MAINTAINERS
(...)

A ZFS dataset behaves like any other filesystem: you can create regular files, symbolic links, pipes, special devices nodes, etc. Nothing mystic here.

Now we have some data in the ZFS dataset let's see what various commands report:

root # df -h
Filesystem      Size  Used Avail Use% Mounted on
(...)
myfirstpool     7.9G  850M  7.0G  11% /myfirstpool
root # zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool   850M  6.98G   850M  /myfirstpool
root # zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G   850M  7.11G    10%  1.00x  ONLINE  -
   Note

Notice the various sizes reported by the zpool and zfs commands. In this case they are the same, however they can differ; this is true especially with zpools configured with RAID-Z.

Unmounting/remounting a ZFS dataset

   Important

Only ZFS datasets can be mounted inside your host's VFS, no exception to that! Zpools themselves can never be mounted, never, never, never... please pay attention to the terminology and keep things clear by not mixing up the terms. We will introduce ZFS snapshots and ZFS clones, but those are ZFS datasets at their basis, so they can also be mounted and unmounted.


If a ZFS dataset behaves just like any other filesystem, can we unmount it?

root # umount /myfirstpool
root # mount | grep myfirstpool

No more /myfirstpool in sight! So yes, it is possible to unmount a ZFS dataset just like you would do with any other filesystem. Is the ZFS dataset still present on the system even though it is unmounted? Let's check:

root # zfs list 
NAME          USED  AVAIL  REFER  MOUNTPOINT
myfirstpool   850M  6.98G   850M  /myfirstpool

Fortunately and obviously it is, otherwise ZFS would not be very useful. Your next concern would certainly be: "How can we remount it then?" Simple! Like this:

root # zfs mount myfirstpool
root # mount | grep myfirstpool
myfirstpool on /myfirstpool type zfs (rw,xattr)

The ZFS dataset is back! :-)

Your first contact with ZFS management by attributes or the end of /etc/fstab

At this point you might be curious about how the zfs command knows what it has to mount and where it has to mount it. You might be familiar with the following syntax of the mount command that, behind the scenes, scans the file /etc/fstab and mounts the specified entry:

root # mount /boot

Does /etc/fstab contain something related to our ZFS dataset?

root # cat /etc/fstab | grep myfirstpool
root #

Doh!... Obviously nothing there. Another mystery? Surely not! The answer lies in an extremely powerful feature of ZFS: attributes. Simply speaking, an attribute is a named property of a ZFS dataset that holds a value. Attributes govern various aspects of how datasets are managed, like: "Does the data have to be compressed?", "Does the data have to be encrypted?", "Does the data have to be exposed to the rest of the world via NFS or SMB/Samba?" and of course... "Where does the dataset have to be mounted?". The answer to that last question is given by the following command:

root # zfs get mountpoint myfirstpool
NAME         PROPERTY    VALUE         SOURCE
myfirstpool  mountpoint  /myfirstpool  default

Bingo! When you remounted the dataset just a few paragraphs ago, ZFS automatically inspected the mountpoint attribute and saw that this dataset has to be mounted on the directory /myfirstpool.
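
As a small, hedged illustration of the power of this attribute (the path /mnt/movedDS below is arbitrary and not used anywhere else in this tutorial), relocating a dataset is just a matter of changing that very attribute; ZFS unmounts and remounts it at the new location for you:

root # zfs set mountpoint=/mnt/movedDS myfirstpool     # /mnt/movedDS is just an example path
root # zfs set mountpoint=/myfirstpool myfirstpool     # put things back the way this tutorial expects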

A step forward with ZFS datasets

So far you were given a quick tour of what ZFS can do for you, and it is very important at this point to distinguish a zpool from a ZFS dataset and to call a dataset what it is (a dataset) and not what it is not (a zpool). It is a bit confusing, and it was an editorial choice to give the first dataset the same name as its zpool, just to make you familiar with the one and the other.

Creating datasets

Obviously it is possible to have more than one ZFS dataset within a single zpool. Quiz: what command would you use to subdivide a zpool into datasets, zfs or zpool? Stop reading for two seconds and try to figure out this little question. Honestly.

The answer is... zfs! Although you want to operate on the zpool to logically subdivide it into several datasets, you manage datasets in the end, thus you will use the zfs command. It is not always easy at the beginning; do not worry too much, you will soon get into the habit of knowing when to use one or the other. Creating a dataset in a zpool is easy: just give the zfs command the name of the pool you want to divide and the name of the dataset you want to create in it. So let's create three datasets named myfirstDS, mysecondDS and mythirdDS in myfirstpool (observe how we use the zpool's and the datasets' names):

root # zfs create myfirstpool/myfirstDS
root # zfs create myfirstpool/mysecondDS
root # zfs create myfirstpool/mythirdDS

What happened? Let's check:

root # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              850M  6.98G   850M  /myfirstpool
myfirstpool/myfirstDS     30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS     30K  6.98G    30K  /myfirstpool/mythirdDS

Obviously we have what we asked for. Moreover, if we inspect the contents of /myfirstpool, we notice three new directories bearing the same names as the datasets just created:

root # ls -l /myfirstpool 
total 8
lrwxrwxrwx  1 root root 32 Mar  2 14:02 linux -> /myfirstpool/linux-3.13.5-gentoo
drwxr-xr-x 25 root root 50 Feb 27 20:35 linux-3.13.5-gentoo
drwxr-xr-x  2 root root  2 Mar  2 15:26 myfirstDS
drwxr-xr-x  2 root root  2 Mar  2 15:26 mysecondDS
drwxr-xr-x  2 root root  2 Mar  2 15:26 mythirdDS

No surprise here! As you might have guessed, those three new directories serve as mountpoints:

root # mount | grep myfirstpool
myfirstpool on /myfirstpool type zfs (rw,xattr)
myfirstpool/myfirstDS on /myfirstpool/myfirstDS type zfs (rw,xattr)
myfirstpool/mysecondDS on /myfirstpool/mysecondDS type zfs (rw,xattr)
myfirstpool/mythirdDS on /myfirstpool/mythirdDS type zfs (rw,xattr)

As we did before, we can copy some files in the newly created datasets just like they were regular directories:

root # cp -a /usr/portage /myfirstpool/mythirdDS
root # ls -l /myfirstpool/mythirdDS/*
total 697
drwxr-xr-x   48 root root   49 Aug 18  2013 app-accessibility
drwxr-xr-x  238 root root  239 Jan 10 06:22 app-admin
drwxr-xr-x    4 root root    5 Dec 28 08:54 app-antivirus
drwxr-xr-x  100 root root  101 Feb 26 07:19 app-arch
drwxr-xr-x   42 root root   43 Nov 26 21:24 app-backup
drwxr-xr-x   34 root root   35 Aug 18  2013 app-benchmarks
drwxr-xr-x   66 root root   67 Oct 16 06:39 app-cdr(...)

Nothing really exciting here, we have files in mythirdDS. A bit more interesting output:

root # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool             1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS     30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.00G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS   1002M  6.00G  1002M  /myfirstpool/mythirdDS
root # df -h
Filesystem              Size  Used Avail Use% Mounted on
(...)
myfirstpool             6.9G  850M  6.1G  13% /myfirstpool
myfirstpool/myfirstDS   6.1G     0  6.1G   0% /myfirstpool/myfirstDS
myfirstpool/mysecondDS  6.1G     0  6.1G   0% /myfirstpool/mysecondDS
myfirstpool/mythirdDS   7.0G 1002M  6.1G  15% /myfirstpool/mythirdDS

Notice the size given in the 'AVAIL' column? At the very beginning of this tutorial we had slightly less than 8 GB of available space; it is now roughly 6 GB. The datasets are just a subdivision of the zpool: they compete with each other for the available storage within the zpool, no miracle here. Up to what limit? The pool itself, as we never imposed a quota on the datasets. Fortunately, df and zfs list give coherent results.
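
A related attribute, not demonstrated in this tutorial but visible in the full attribute listing shown later, is reservation: it is the mirror image of a quota and guarantees a minimum amount of pool space to a dataset, space the other datasets can then no longer claim. A minimal sketch (the 1G value is arbitrary):

root # zfs set reservation=1G myfirstpool/myfirstDS    # guarantee at least 1 GB to this dataset
root # zfs set reservation=none myfirstpool/myfirstDS  # remove the guarantee again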

Second contact with attributes: quota management

Remember how painful quota management is under Linux? Now you can say goodbye to setquota, edquota and other quotacheck commands; ZFS handles this in a snap of the fingers! Guess with what? A ZFS dataset attribute, of course! ;-) Just to make you drool, here is how a 2 GB limit can be set on myfirstpool/mythirdDS:

root # zfs set quota=2G myfirstpool/mythirdDS

Et voila! The zfs command is a bit silent, however if we check we can see that myfirstpool/mythirdDS is now capped at 2 GB (forget about 'REFER' for the moment): around 1 GB of data has been copied into this dataset, thus leaving a big 1 GB of available space.

root # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool             1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS     30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.00G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS   1002M  1.02G  1002M  /myfirstpool/mythirdDS

Using the df command:

root # df -h                                 
Filesystem              Size  Used Avail Use% Mounted on
(...)
myfirstpool             6.9G  850M  6.1G  13% /myfirstpool
myfirstpool/myfirstDS   6.1G     0  6.1G   0% /myfirstpool/myfirstDS
myfirstpool/mysecondDS  6.1G     0  6.1G   0% /myfirstpool/mysecondDS
myfirstpool/mythirdDS   2.0G 1002M  1.1G  49% /myfirstpool/mythirdDS

Of course you can use this technique for the home directories of your users under /home, this also having the advantage of being much less forgiving than a soft/hard user quota: when the limit is reached, it is reached, period, and no more data can be written to the dataset. The user must do some cleanup and cannot procrastinate anymore :-)
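
As a sketch of that idea (the dataset and user names below are hypothetical and not part of this tutorial), you could give each user a dedicated dataset with its own cap right at creation time, since zfs create accepts -o property=value:

root # zfs create myfirstpool/home                      # hypothetical parent dataset for home directories
root # zfs create -o quota=2G myfirstpool/home/alice    # alice cannot use more than 2 GB
root # zfs create -o quota=5G myfirstpool/home/bob      # bob gets a 5 GB cap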

To remove the quota:

root # zfs set quota=none myfirstpool/mythirdDS

none is simply the original value of the quota attribute (we did not demonstrate it; you can check by doing zfs get quota myfirstpool/mysecondDS, for example).
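
For the record, such a check should report something along these lines (values as expected with the defaults used in this tutorial):

root # zfs get quota myfirstpool/mysecondDS
NAME                    PROPERTY  VALUE  SOURCE
myfirstpool/mysecondDS  quota     none   default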

Destroying datasets

   Important

There is no way to resurrect a destroyed ZFS dataset and the data it contained! Once you destroy a dataset, the corresponding metadata is cleared and gone forever, so be careful when using zfs destroy, notably with the -r option...
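
Depending on the version of the ZFS userland installed, zfs destroy may also accept -n (dry run) combined with -v, which prints what would be destroyed without actually touching anything; a prudent sketch before a recursive destruction (dataset name reused from this tutorial) could look like this:

root # zfs destroy -rnv myfirstpool/mythirdDS   # -n: no-op dry run, -v: verbose, nothing is actually destroyed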


We have three datasets, but the third is pretty useless and contains a lot of garbage. Is it possible to remove it with a simple rm -rf? Let's try:

root # rm -rf /myfirstpool/mythirdDS
rm: cannot remove `/myfirstpool/mythirdDS': Device or resource busy

This is perfectly normal: remember that datasets are indeed something mounted in your VFS. ZFS might be ZFS and do a lot for you, but it cannot change the nature of a mounted filesystem under Linux/Unix. The "ZFS way" to remove a dataset is to use the zfs command like this, provided that no process holds open files on it (once again, ZFS can do miracles for you but not that kind of miracle, as it has to unmount the dataset before deleting it):

root # zfs destroy myfirstpool/mythirdDS
root # zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              444M  7.38G   444M  /myfirstpool
myfirstpool/myfirstDS     21K  7.38G    21K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    21K  7.38G    21K  /myfirstpool/mysecondDS

Et voila! No more myfirstpool/mythirdDS dataset. :-)

A slightly more subtle case would be to try to destroy a ZFS dataset while another ZFS dataset is nested in it. Before doing that nasty experiment, myfirstpool/mythirdDS must be created again, this time with a nested dataset (myfirstpool/mythirdDS/nestedDS1):

root # zfs create myfirstpool/mythirdDS
root # zfs create myfirstpool/mythirdDS/nestedDS1
root # zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       851M  6.98G   850M  /myfirstpool
myfirstpool/myfirstDS              30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS             30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS             124K  6.98G    34K  /myfirstpool/mythirdDS
myfirstpool/mythirdDS/nestedDS1    30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS1

Now let's try to destroy myfirstpool/mythirdDS again:

root # zfs destroy myfirstpool/mythirdDS
cannot destroy 'myfirstpool/mythirdDS': filesystem has children
use '-r' to destroy the following datasets:
myfirstpool/mythirdDS/nestedDS1

The zfs command detected the situation and refused to proceed with the deletion without your consent to a recursive destruction (-r parameter). Before going any step further, let's create some more nested datasets plus a couple of directories inside myfirstpool/mythirdDS:

root # zfs create myfirstpool/mythirdDS/nestedDS1
root # zfs create myfirstpool/mythirdDS/nestedDS2
root # zfs create myfirstpool/mythirdDS/nestedDS3
root # zfs create myfirstpool/mythirdDS/nestedDS3/nestednestedDS
root # mkdir /myfirstpool/mythirdDS/dir1
root # mkdir /myfirstpool/mythirdDS/dir2
root # mkdir /myfirstpool/mythirdDS/dir3
root # zfs list
NAME                                             USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                                      851M  6.98G   850M  /myfirstpool
myfirstpool/myfirstDS                             30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS                            30K  6.98G    30K  /myfirstpool/mysecondDS
myfirstpool/mythirdDS                            157K  6.98G    37K  /myfirstpool/mythirdDS
myfirstpool/mythirdDS/nestedDS1                   30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS1
myfirstpool/mythirdDS/nestedDS2                   30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS2
myfirstpool/mythirdDS/nestedDS3                   60K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS3
myfirstpool/mythirdDS/nestedDS3/nestednestedDS    30K  6.98G    30K  /myfirstpool/mythirdDS/nestedDS3/nestednestedDS

Now what happens if myfirstpool/mythirdDS is destroyed again with '-r'?

root # zfs destroy -r myfirstpool/mythirdDS
root # zfs list                            
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool              851M  6.98G   850M  /myfirstpool
myfirstpool/myfirstDS     30K  6.98G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS    30K  6.98G    30K  /myfirstpool/mysecondDS

myfirstpool/mythirdDS and everything it contained is now gone!

Snapshotting and rolling back datasets

This is, by far, one of the coolest features of ZFS. You can:

  1. take a photo of a dataset (this photo is called a snapshot)
  2. do whatever you want with the data contained in the dataset
  3. restore (roll back) the dataset to the exact same state it was in before you made your changes, just as if nothing had ever happened in the middle.

Single snapshot

   Important

Only ZFS datasets can be snapshotted and rolled back, not the zpool.


To start with, let's copy some files in mysecondDS:

root # cp -a /usr/portage /myfirstpool/mysecondDS
root # ls -l /myfirstpool/mysecondDS/portage
total 672
drwxr-xr-x   48 root root   49 Aug 18  2013 app-accessibility
drwxr-xr-x  238 root root  239 Jan 10 06:22 app-admin
drwxr-xr-x    4 root root    5 Dec 28 08:54 app-antivirus
drwxr-xr-x  100 root root  101 Feb 26 07:19 app-arch
drwxr-xr-x   42 root root   43 Nov 26 21:24 app-backup
drwxr-xr-x   34 root root   35 Aug 18  2013 app-benchmarks
(...)
drwxr-xr-x   62 root root   63 Feb 20 06:47 x11-wm
drwxr-xr-x   16 root root   17 Aug 18  2013 xfce-base
drwxr-xr-x   64 root root   65 Dec 14 19:09 xfce-extra

Now, let's take a snapshot of mysecondDS. What command would be used, zpool or zfs? In that case it is zfs, because we manipulate a ZFS dataset (this time you probably got it right!):

root # zfs snapshot myfirstpool/mysecondDS@Charlie
   Note

The syntax is always pool/dataset@snapshot; the snapshot's name is left at your discretion, however you must use an at sign (@) to separate the snapshot's name from the rest of the path.
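
Not used in this tutorial, but good to know: zfs snapshot also accepts -r, which snapshots a dataset and all of its descendants in one atomic operation, all sharing the same snapshot name. A minimal sketch (the snapshot name below is arbitrary):

root # zfs snapshot -r myfirstpool@before-changes   # snapshots myfirstpool and every dataset below it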

Let's check what /myfirstpool/mysecondDS contains after taking the snapshot:

root # ls -la /myfirstpool/mysecondDS     
total 9
drwxr-xr-x   3 root root   3 Mar  2 18:22 .
drwxr-xr-x   5 root root   6 Mar  2 17:58 ..
drwx------ 170 root root 171 Mar  2 18:36 portage

Nothing really new: the portage directory is here, nothing more a priori. If you have used BTRFS before reading this tutorial, you probably expected to see a @Charlie lying in /myfirstpool/mysecondDS? So where the heck is Charlie? In ZFS, a dataset snapshot is not visible from within the VFS tree (if you are not convinced, you can search for it with the find command, but it will never find it). Let's check with the zfs command:

root # zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                     1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1001M  6.00G  1001M  /myfirstpool/mysecondDS

Wow... No sign of the snapshot. What you must know is that zfs list shows only datasets by default and omits snapshots. If the command is invoked with the parameter -t set to all, it will list everything:

root # zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                     1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1001M  6.00G  1001M  /myfirstpool/mysecondDS
myfirstpool/mysecondDS@Charlie      0      -  1001M  -

So yes, @Charlie is here! Also notice here the power of copy-on-write filesystems: @Charlie takes only a couple of kilobytes (some ZFS metadata), just like any ZFS snapshot at the time it is taken. The reason snapshots occupy very little space is that their data and metadata blocks are the same as the live dataset's and no physical copy of them is made. As time goes on and more and more changes happen in the original dataset (myfirstpool/mysecondDS here), ZFS will allocate new data and metadata blocks to accommodate the changes but will leave the blocks used by the snapshot untouched, and the snapshot will tend to eat more and more pool space. It seems odd at first glance because a snapshot is a frozen-in-time copy of a ZFS dataset, but this is the way ZFS manages them. So caveat emptor: remove any unused snapshot to avoid filling your zpool...
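
A handy way to keep an eye on that is to ask zfs list for snapshots only, restricted to the columns that matter for space accounting; a small sketch:

root # zfs list -t snapshot -o name,used,referenced   # how much pool space each snapshot pins down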

Now that we have found Charlie, let's make some changes in mysecondDS:

root # rm -rf /myfirstpool/mysecondDS/portage/[a-h]*
root # echo "Hello, world" >  /myfirstpool/mysecondDS/hello.txt
root # cp /lib/firmware/radeon/* /myfirstpool/mysecondDS
root # ls -l  /myfirstpool/mysecondDS
/myfirstpool/mysecondDS:
total 3043
-rw-r--r--  1 root root   8704 Mar  2 19:29 ARUBA_me.bin
-rw-r--r--  1 root root   8704 Mar  2 19:29 ARUBA_pfp.bin
-rw-r--r--  1 root root   6144 Mar  2 19:29 ARUBA_rlc.bin
-rw-r--r--  1 root root  24096 Mar  2 19:29 BARTS_mc.bin
-rw-r--r--  1 root root   5504 Mar  2 19:29 BARTS_me.bin
(...)
-rw-r--r--  1 root root  60388 Mar  2 19:29 VERDE_smc.bin
-rw-r--r--  1 root root     13 Mar  2 19:28 hello.txt
drwx------ 94 root root     95 Mar  2 19:28 portage

/myfirstpool/mysecondDS/portage:
total 324
drwxr-xr-x  16 root root   17 Oct 26 07:30 java-virtuals
drwxr-xr-x 303 root root  304 Jan 21 06:53 kde-base
drwxr-xr-x 117 root root  118 Feb 21 06:24 kde-misc
drwxr-xr-x   2 root root  756 Feb 23 08:44 licenses
drwxr-xr-x  20 root root   21 Jan  7 06:56 lxde-base
(...)

Now let's check again what the zfs command gives:

root # zfs list -t all                      
NAME                             USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                     1.82G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             30K  6.00G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS          1005M  6.00G   903M  /myfirstpool/mysecondDS
myfirstpool/mysecondDS@Charlie   102M      -  1001M  -

Noticed the size increase of myfirstpool/mysecondDS@Charlie? This is mainly due to the files we deleted or changed in the live dataset: ZFS had to retain the original data blocks on behalf of the snapshot. Now it is time to roll this ZFS dataset back to its original state (if some processes had open files in the dataset to be rolled back, you would have to terminate them first):

root # zfs rollback myfirstpool/mysecondDS@Charlie
root # ls -l /myfirstpool/mysecondDS
total 6
drwxr-xr-x 164 root root 169 Aug 18 18:25 portage

Again, ZFS handled everything for you and you now have the contents of mysecondDS exactly as they were at the time the snapshot Charlie was taken. Not more complicated than that. Not illustrated here, but if you look at the output given by zfs list -t all at this point, you will notice that the Charlie snapshot only eats very little space. This is normal: the modified blocks have been dropped, so myfirstpool/mysecondDS and its myfirstpool/mysecondDS@Charlie snapshot are the same modulo some metadata (hence the few kilobytes of space taken).

The .zfs pseudo-directory, or the secret passage to your snapshots

Any directory where a ZFS dataset is mounted (having snapshots or not) secretly contains a pseudo-directory named .zfs (dot-zfs), and you will not see it even with the -a option given to an ls command unless you name it explicitly. This contradicts the Unix and Unix-like systems' philosophy of not hiding anything from the system administrator; it is not a bug of the ZFS on Linux implementation, and the Solaris implementation of ZFS exposes the exact same behaviour. So what is inside this little magic box?

root # cd /myfirstpool/mysecondDS
root # ls -la | grep .zfs        
root # ls -lad .zfs              
dr-xr-xr-x 1 root root 0 Mar  2 15:26 .zfs
root # cd .zfs
root # pwd
/myfirstpool/mysecondDS/.zfs
root # ls -la
total 4
dr-xr-xr-x 1 root root   0 Mar  2 15:26 .
drwxr-xr-x 3 root root 145 Mar  2 19:29 ..
dr-xr-xr-x 2 root root   2 Mar  2 19:47 shares
dr-xr-xr-x 2 root root   2 Mar  2 18:46 snapshot

We will focus on the snapshot directory, and since we have not dropped the Charlie snapshot (yet), let's see what lies there:

root # cd snapshot
root # ls -l
total 0
dr-xr-xr-x 1 root root 0 Mar  2 20:16 Charlie

Yes, we found Charlie here (also!). The snapshot is seen as a regular directory, but pay attention to its permissions:

  • owning user (root) has read+execute
  • owning group (root) has read+execute
  • rest of the world has read+execute

Did you notice? Not a single write permission on this directory; the only actions any user can take are to enter the directory and list its contents. This is not a bug but the nature of ZFS snapshots: they are read-only by design. The next question is naturally: can we change something in it? For that, we have to enter the Charlie directory:

root # cd Charlie
root # ls -la
total 7
drwxr-xr-x   3 root root   3 Mar  2 18:22 .
dr-xr-xr-x   3 root root   3 Mar  2 18:46 ..
drwx------ 170 root root 171 Mar  2 18:36 portage

No surprise here: at the time we took the snapshot, myfirstpool/mysecondDS held a copy of the Portage tree stored in a directory named portage. At first glance this one seems to be writable for the root user, so let's try to create a file in it:

root # cd portage
root # touch test
touch: cannot touch ‘test’: Read-only file system

Things are a bit tricky here: indeed, nothing has been mounted (check with the mount command!), we are walking through a pseudo-directory exposed by ZFS that holds the Charlie snapshot. Pseudo-directory because, in fact, .zfs has no physical existence, even in the ZFS metadata as it exists in the zpool; it is just a convenient way provided by the ZFS kernel modules to walk inside the various snapshots' contents. You can see but you cannot touch :-)
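
Whether .zfs is hidden from directory listings is itself governed by a dataset attribute named snapdir (you will see it, set to hidden by default, in the full attribute listing later in this tutorial). A sketch of making it visible and hiding it again:

root # zfs set snapdir=visible myfirstpool/mysecondDS   # .zfs now shows up in ls -a output
root # zfs set snapdir=hidden myfirstpool/mysecondDS    # back to the default behaviour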

Backtracking changes between a dataset and its snapshot

Is it possible to know what the difference is between a live dataset and its snapshot? The answer to this question is yes, and the zfs command will help us in this task. Now that we have rolled the myfirstpool/mysecondDS ZFS dataset back to its original state, we have to botch it again:

root # cp -a /lib/firmware/radeon/C* /myfirstpool/mysecondDS

Now let's inspect the difference between the live ZFS dataset myfirstpool/mysecondDS and its snapshot Charlie. This is done via zfs diff by giving only the snapshot's name (you can also inspect the difference between two snapshots with that command, with a slight change in parameters):

root # zfs diff myfirstpool/mysecondDS@Charlie
M       /myfirstpool/mysecondDS/
+       /myfirstpool/mysecondDS/CAICOS_mc.bin
+       /myfirstpool/mysecondDS/CAICOS_me.bin
+       /myfirstpool/mysecondDS/CAICOS_pfp.bin
+       /myfirstpool/mysecondDS/CAICOS_smc.bin
+       /myfirstpool/mysecondDS/CAYMAN_mc.bin
+       /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)

So what do we have here? Two things: first, it shows we have changed something in /myfirstpool/mysecondDS (notice the 'M' for Modified); second, it shows the addition of several files (CAICOS_mc.bin, CAICOS_me.bin, CAICOS_pfp.bin...) by putting a plus sign ('+') on their left.

If we botch myfirstpool/mysecondDS a bit more by removing the file /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest:

root # rm /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
root # zfs diff myfirstpool/mysecondDS@Charlie
M       /myfirstpool/mysecondDS/
M       /myfirstpool/mysecondDS/portage/sys-libs/glibc
-       /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
+       /myfirstpool/mysecondDS/CAICOS_mc.bin
+       /myfirstpool/mysecondDS/CAICOS_me.bin
+       /myfirstpool/mysecondDS/CAICOS_pfp.bin
+       /myfirstpool/mysecondDS/CAICOS_smc.bin
+       /myfirstpool/mysecondDS/CAYMAN_mc.bin
+       /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)

Obviously deleted content is marked by a minus sign ('-').

Now a real butchery:

root # rm -rf /myfirstpool/mysecondDS/portage/sys-devel/gcc 
root # zfs diff myfirstpool/mysecondDS@Charlie
M       /myfirstpool/mysecondDS/
M       /myfirstpool/mysecondDS/portage/sys-devel
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk-no_gcc_la
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/c89
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/c99
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.6.4-fix-libgcc-s-path-with-vsrl.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-spec-env.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-spec-env-r1.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.8.2-fix-cache-detection.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/fix_libtool_files.sh
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-configure-texinfo.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/gcc-4.8.1-bogus-error-with-int.patch
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.3.3-r2.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/metadata.xml
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.4-r2.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.4.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r1.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r2.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.6.2-r1.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r3.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.2.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.1-r4.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/Manifest
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.7.3-r1.ebuild
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/gcc-4.8.2-r1.ebuild
M       /myfirstpool/mysecondDS/portage/sys-libs/glibc
-       /myfirstpool/mysecondDS/portage/sys-libs/glibc/Manifest
+       /myfirstpool/mysecondDS/CAICOS_mc.bin
+       /myfirstpool/mysecondDS/CAICOS_me.bin
+       /myfirstpool/mysecondDS/CAICOS_pfp.bin
+       /myfirstpool/mysecondDS/CAICOS_smc.bin
+       /myfirstpool/mysecondDS/CAYMAN_mc.bin
+       /myfirstpool/mysecondDS/CAYMAN_me.bin
(...)

No need to explain that digital mayhem! What happens if, in addition, we change the contents of the file /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest?

root # zfs diff myfirstpool/mysecondDS@Charlie
M       /myfirstpool/mysecondDS/
M       /myfirstpool/mysecondDS/portage/sys-devel
M       /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk
-       /myfirstpool/mysecondDS/portage/sys-devel/gcc/files/awk/fixlafiles.awk-no_gcc_la
(...)

ZFS shows that the file /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest has changed. So ZFS can help track file deletions, creations and modifications. What it does not show is the difference between a file's contents as they exist in the live dataset and in the dataset's snapshot. Not a big issue! You can explore the snapshot's contents via the .zfs pseudo-directory and use a command like /usr/bin/diff to examine the difference with the file as it exists on the corresponding live dataset.

root # diff -u /myfirstpool/mysecondDS/.zfs/snapshot/Charlie/portage/sys-devel/autoconf/Manifest /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest
--- /myfirstpool/mysecondDS/.zfs/snapshot/Charlie/portage/sys-devel/autoconf/Manifest   2013-08-18 08:52:01.742411902 -0400
+++ /myfirstpool/mysecondDS/portage/sys-devel/autoconf/Manifest 2014-03-02 21:36:50.582258990 -0500
@@ -4,7 +4,4 @@
 DIST autoconf-2.62.tar.gz 1518427 SHA256 83aa747e6443def0ebd1882509c53f5a2133f50...
 DIST autoconf-2.63.tar.gz 1562665 SHA256 b05a6cee81657dd2db86194a6232b895b8b2606a...
 DIST autoconf-2.64.tar.bz2 1313833 SHA256 872f4cadf12e7e7c8a2414e047fdff26b517c7...
-DIST autoconf-2.65.tar.bz2 1332522 SHA256 db11944057f3faf229ff5d6ce3fcd819f56545...
-DIST autoconf-2.67.tar.bz2 1369605 SHA256 00ded92074999d26a7137d15bd1d51b8a8ae23...
-DIST autoconf-2.68.tar.bz2 1381988 SHA256 c491fb273fd6d4ca925e26ceed3d177920233c...
 DIST autoconf-2.69.tar.xz 1214744 SHA256 64ebcec9f8ac5b2487125a86a7760d2591ac9e1d3...
(...)

Dropping a snapshot

A snapshot is no more than a dataset frozen in time and thus can be destroyed in the exact same way seen in the paragraphs before. Now that we do not need the Charlie snapshot anymore, we can remove it. Simple:

root # zfs destroy myfirstpool/mysecondDS@Charlie
root # zfs list -t all
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool             1.71G  6.10G   850M  /myfirstpool
myfirstpool/myfirstDS     30K  6.10G    30K  /myfirstpool/myfirstDS
myfirstpool/mysecondDS   903M  6.10G   903M  /myfirstpool/mysecondDS

And Charlie is gone forever ;-)

The time travelling machine part 1: examining differences between snapshots

So far we have only used a single snapshot, just to keep things simple. However, a dataset can hold several snapshots, and you can do everything seen so far with them, like rolling back, destroying them or examining the differences, not only between a snapshot and its corresponding live dataset but also between two snapshots. For this part we will consider the myfirstpool/myfirstDS dataset, which should be empty at this point.

root # ls -la /myfirstpool/myfirstDS
total 3
drwxr-xr-x 2 root root 2 Mar 2 21:14 .
drwxr-xr-x 5 root root 6 Mar 2 17:58 ..

Now let's generate some contents, take a snapshot (snapshot-1), add more content, take a snapshot again (snapshot-2), do some modifications again and take a third snapshot (snapshot-3):

root # echo "Hello, world" > /myfirstpool/myfirstDS/hello.txt
root # cp -R /lib/firmware/radeon /myfirstpool/myfirstDS
root # ls -l /myfirstpool/myfirstDS
total 5
-rw-r--r-- 1 root root 13 Mar 3 06:41 hello.txt
drwxr-xr-x 2 root root 143 Mar 3 06:42 radeon
root # zfs snapshot myfirstpool/myfirstDS@snapshot-1
root # echo "Goodbye, world" > /myfirstpool/myfirstDS/goodbye.txt
root # echo "Are you there?" >> /myfirstpool/myfirstDS/hello.txt
root # cp /proc/config.gz /myfirstpool/myfirstDS
root # rm /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
root # zfs snapshot myfirstpool/myfirstDS@snapshot-2
root # echo "Still there?" >> /myfirstpool/myfirstDS/goodbye.txt
root # mv /myfirstpool/myfirstDS/hello.txt /myfirstpool/myfirstDS/hello_new.txt 
root # cat /proc/version > /myfirstpool/myfirstDS/version.txt
root # zfs snapshot myfirstpool/myfirstDS@snapshot-3
root # zfs list -t all
NAME                               USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             3.04M  6.00G  2.97M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1    47K      -  2.96M  -
myfirstpool/myfirstDS@snapshot-2    30K      -  2.97M  -
myfirstpool/myfirstDS@snapshot-3      0      -  2.97M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS

You saw how to use zfs diff to compare a snapshot with its corresponding "live" dataset in the above paragraphs. Doing the same exercise with two snapshots is not that much different, as you just have to explicitly tell the command which snapshots are to be compared, and the command will output the result in the exact same manner. So what are the differences between snapshots myfirstpool/myfirstDS@snapshot-1 and myfirstpool/myfirstDS@snapshot-2? Let's make the zfs command work for us:

root # zfs diff myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS@snapshot-2
M       /myfirstpool/myfirstDS/
M       /myfirstpool/myfirstDS/hello.txt
M       /myfirstpool/myfirstDS/radeon
-       /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+       /myfirstpool/myfirstDS/goodbye.txt
+       /myfirstpool/myfirstDS/config.gz

Before digging farther, let's think about what we did between the time we created the first snapshot and the second snapshot:

  • We modified the file /myfirstpool/myfirstDS/hello.txt, hence the 'M' shown on the left of the second line (and since this is a change under /myfirstpool/myfirstDS, an 'M' is also shown on the left of the first line)
  • We deleted the file /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin, hence the minus sign ('-') shown on the left of the fourth line (and the 'M' shown on the left of the third line)
  • We added two files, /myfirstpool/myfirstDS/goodbye.txt and /myfirstpool/myfirstDS/config.gz, hence the plus sign ('+') shown on the left of the fifth and sixth lines (again a change happening in /myfirstpool/myfirstDS, hence another reason to show an 'M' on the left of the first line)

Now same exercise this time with snapshots myfirstpool/myfirstDS@snapshot-2 and myfirstpool/myfirstDS@snapshot-3:

root # zfs diff myfirstpool/myfirstDS@snapshot-2 myfirstpool/myfirstDS@snapshot-3
M       /myfirstpool/myfirstDS/
R       /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M       /myfirstpool/myfirstDS/goodbye.txt
+       /myfirstpool/myfirstDS/version.txt

Try to interpret what you see, except for the second line where an "R" (standing for "Rename") is shown. ZFS is smart enough to also show both the old and the new names!

Why not push the limit and try a few fancy things? First things first: what happens if we tell ZFS to compare two snapshots but in reverse order?

root # zfs diff myfirstpool/myfirstDS@snapshot-3 myfirstpool/myfirstDS@snapshot-2
Unable to obtain diffs: 
   Not an earlier snapshot from the same fs

Would ZFS be a bit happier if we asked for the difference between two snapshots with a gap in between (so snapshot 1 with snapshot 3)?

root # zfs diff myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS@snapshot-3
M       /myfirstpool/myfirstDS/
R       /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M       /myfirstpool/myfirstDS/radeon
-       /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+       /myfirstpool/myfirstDS/goodbye.txt
+       /myfirstpool/myfirstDS/config.gz
+       /myfirstpool/myfirstDS/version.txt

Amazing! Here again, take a couple of minutes to think about all the operations you did on the dataset between the time you took the first snapshot and the time you took the last one: this summary is the exact reflection of all your previous operations.

Just to put a conclusion on this subject, let's see the differences between the myfirstpool/myfirstDS dataset and its various snapshots:

root # zfs diff myfirstpool/myfirstDS@snapshot-1                                 
M       /myfirstpool/myfirstDS/
R       /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M       /myfirstpool/myfirstDS/radeon
-       /myfirstpool/myfirstDS/radeon/CAYMAN_me.bin
+       /myfirstpool/myfirstDS/goodbye.txt
+       /myfirstpool/myfirstDS/config.gz
+       /myfirstpool/myfirstDS/version.txt
root # zfs diff myfirstpool/myfirstDS@snapshot-2
M       /myfirstpool/myfirstDS/
R       /myfirstpool/myfirstDS/hello.txt -> /myfirstpool/myfirstDS/hello_new.txt
M       /myfirstpool/myfirstDS/goodbye.txt
+       /myfirstpool/myfirstDS/version.txt
root #  zfs diff myfirstpool/myfirstDS@snapshot-3

Having nothing reported for the last zfs diff is normal, as nothing has changed in the dataset since that snapshot was taken.

The time travelling machine part 2: rolling back with multiple snapshots

Examining the differences between the various snapshots of a dataset, or between a snapshot and the dataset itself, would be quite useless if we were not able to roll the dataset back to one of its previous states. Now that we have mangled myfirstpool/myfirstDS a bit, it is time to restore it to the state it was in when the first snapshot was taken:

root # zfs rollback myfirstpool/myfirstDS@snapshot-1
cannot rollback to 'myfirstpool/myfirstDS@snapshot-1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
myfirstpool/myfirstDS@snapshot-3
myfirstpool/myfirstDS@snapshot-2

Err... Well, ZFS just tells us that several more recent snapshots exist and it refuses to proceed without dropping them. Unfortunately for us there is no way to circumvent that: once you jump backward, you have no way to move forward again. We could demonstrate a rollback to myfirstpool/myfirstDS@snapshot-3, then myfirstpool/myfirstDS@snapshot-2, then myfirstpool/myfirstDS@snapshot-1, but it would be of very little interest since previous sections of this tutorial did that already. So, second attempt:

root # zfs rollback -r myfirstpool/myfirstDS@snapshot-1
root # zfs list -t all                                                           
NAME                               USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1     1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS

myfirstpool/myfirstDS effectively returned to the desired state (notice the size of myfirstpool/myfirstDS@snapshot-1), and the snapshots snapshot-2 and snapshot-3 vanished. Just to convince you:

root # zfs diff myfirstpool/myfirstDS@snapshot-1
root #

No differences at all!

Snapshots and clones

A clone and a snapshot are two very close things in ZFS:

  • A clone appears as a mounted dataset (i.e. you can read and write data in it), while a snapshot stays apart and is always read-only
  • A clone is always spawned from a snapshot

So it is absolutely true to say that a clone is indeed just a writable snapshot. The copy-on-write feature of ZFS plays its role even there: the data blocks held by the snapshot are only duplicated upon modification. So cloning a 20 GB snapshot does not lead to an additional 20 GB of data being eaten from the pool.

How to make a clone? Simple, once again with the zfs command used like this:

root # zfs clone myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS_clone1
root # zfs list -t all
NAME                               USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1     1K      -  2.96M  -
myfirstpool/myfirstDS_clone1         1K  6.00G  2.96M  /myfirstpool/myfirstDS_clone1
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS

Noticed the value of MOUNTPOINT for myfirstpool/myfirstDS_clone1? Now we have a dataset that is mounted! Let's check with the mount command:

root # mount | grep clone
myfirstpool/myfirstDS_clone1 on /myfirstpool/myfirstDS_clone1 type zfs (rw,xattr)

In theory we can change or write additional data in the clone since it is mounted read-write (rw). Let it be!

root # ls /myfirstpool/myfirstDS_clone1
hello.txt  radeon
root # cp /proc/config.gz /myfirstpool/myfirstDS_clone1
root # echo 'This is a clone!' >> /myfirstpool/myfirstDS_clone1/hello.txt
root # ls /myfirstpool/myfirstDS_clone1
config.gz  hello.txt  radeon
root # cat /myfirstpool/myfirstDS_clone1/hello.txt                       
Hello, world
This is a clone!

Unfortunately, it is not possible to ask for the difference between a clone and a snapshot: zfs diff expects to see either one snapshot name or two snapshot names. Once spawned, a clone starts its own existence, while the snapshot that served as a seed for it remains attached to its original dataset.
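
If that dependency ever becomes a problem (for instance, you want to destroy the original dataset but keep the clone), ZFS provides zfs promote, which swaps the roles of the clone and its origin so that the seeding snapshot becomes attached to the clone instead. We will not do it here since the clone is destroyed just below; this is only a sketch:

root # zfs promote myfirstpool/myfirstDS_clone1   # the clone no longer depends on myfirstpool/myfirstDS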

Because clones are nothing more than a ZFS dataset they can be destroyed just like any ZFS dataset:

root # zfs destroy myfirstpool/myfirstDS_clone1
root # zfs list -t all                                                        
NAME                               USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1     1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS

Streaming ZFS datasets

A ZFS snapshot can not only be cloned or explored but also streamed to a local file or even over the network, thus allowing you to back up a ZFS dataset or simply make an exact bit-for-bit copy of it between two machines, for example. Snapshots being differential (i.e. incremental) by nature, very little network overhead is induced when consecutive snapshots are streamed over the network. A nifty move from the designers was to use stdin and stdout as transmission/reception channels, thus allowing great flexibility in processing the ZFS stream. You can envisage, for instance, compressing your stream, then encrypting it, then encoding it in base64, then signing it and so on. It sounds a bit overkill, but it is possible, and in the general case you can use any tool that swallows data from stdin and spits it out through stdout in your plumbing.
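
To make that last point concrete, here is a sketch of such plumbing using gzip (the file path and the destination dataset name below are arbitrary examples, not commands run later in this tutorial):

root # zfs send myfirstpool/myfirstDS@snapshot-1 | gzip > /tmp/myfirstDS-snap1.zfs.gz       # compressed stream written to disk
root # gunzip -c /tmp/myfirstDS-snap1.zfs.gz | zfs receive myfirstpool/myfirstDS_gzcopy     # hypothetical destination dataset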

First things first: just to illustrate some basic concepts, here is how to stream a ZFS dataset snapshot to a local file:

root # zfs send myfirstpool/myfirstDS@snapshot-1 > /tmp/myfirstpool-myfirstDS@snapshot-snap1

Now let's stream it back:

root # cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive myfirstpool/myfirstDS@testrecv
cannot receive new filesystem stream: destination 'myfirstpool/myfirstDS' exists
must specify -F to overwrite it

Ouch... ZFS refuses to go any step further because some data would be overwritten. We do not have any critical data on the dataset, so we could destroy it and try again, or use a different name. Nevertheless, just for the sake of the demonstration, let's create another zpool prior to restoring the dataset there:

root # dd if=/dev/zero of=/tmp/zfs-test-disk04.img bs=2G count=1 
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 6.35547 s, 338 MB/s
root # losetup -f            
/dev/loop4
root # losetup /dev/loop4 /tmp/zfs-test-disk04.img
root # zpool create testpool /dev/loop4
root # zpool list 
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
myfirstpool  7.94G  1.81G  6.12G    22%  1.00x  ONLINE  -
testpool     1.98G  89.5K  1.98G     0%  1.00x  ONLINE  -

Take two:

root # cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive testpool/myfirstDS@testrecv
root # zfs list -t all
NAME                               USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                       1.81G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS             2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1     1K      -  2.96M  -
myfirstpool/mysecondDS            1003M  6.00G  1003M  /myfirstpool/mysecondDS
testpool                          3.08M  1.95G    31K  /testpool
testpool/myfirstDS                2.96M  1.95G  2.96M  /testpool/myfirstDS
testpool/myfirstDS@testrecv           0      -  2.96M  -

Very interesting things happened there! First, the data previously stored in the file /tmp/myfirstpool-myfirstDS@snapshot-snap1 has been copied as a snapshot into the destination zpool (testpool here), under exactly the name given on the command line. Second, a corresponding live ZFS dataset (testpool/myfirstDS) has been created for you by ZFS, where data can be read and written! Think two seconds about the error message we got just above: the reason ZFS protested becomes clear now.

An alternative would have been to use the original zpool but this time with a different name for the dataset:

root # cat /tmp/myfirstpool-myfirstDS@snapshot-snap1 | zfs receive myfirstpool/myfirstDS_copy@testrecv
root # zfs list -t all                                                                                
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
myfirstpool                          1.82G  6.00G   850M  /myfirstpool
myfirstpool/myfirstDS                2.96M  6.00G  2.96M  /myfirstpool/myfirstDS
myfirstpool/myfirstDS@snapshot-1        1K      -  2.96M  -
myfirstpool/myfirstDS_copy           2.96M  6.00G  2.96M  /myfirstpool/myfirstDS_copy
myfirstpool/myfirstDS_copy@testrecv      0      -  2.96M  -
myfirstpool/mysecondDS               1003M  6.00G  1003M  /myfirstpool/mysecondDS 

Now something a bit more interesting: instead of using a local file, we will stream the dataset to a Solaris 11 machine (OpenIndiana can be used as well) over the network, using the GNU flavour of netcat (net-analyzer/gnu-netcat) on TCP port 7000. In this case the Solaris host is an x86 machine, but a SPARC machine would have given the exact same result since ZFS, contrary to UFS, is platform agnostic.

On the Solaris machine:

root # nc -l -p 7000 | zfs receive nas/zfs-stream-test@s1

On the Linux machine:

root #  zfs send myfirstpool/myfirstDS@snapshot-1 | netcat -c 192.168.1.13 7000
   Warning

The nc command coming with the net-analyzer/netcat package does not automatically close the network connection when its input stream is closed (i.e. when the zfs send command terminates its job), thus its Solaris counterpart also waits "forever" at the other end of the "pipe". It is not possible to override this behaviour, hence the reason we use the GNU variant (package net-analyzer/gnu-netcat).

After the dataset has been received on the Solaris machine, the nas zpool now contains the sent snapshot and its corresponding live dataset, the latter being automatically created:

root # zfs list -t snapshot
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
(...)
nas/zfs-stream-test                          3.02M  6.17T  3.02M  /nas/zfs-stream-test
nas/zfs-stream-test@s1                           0      -  3.02M  -

A quick look in the /nas/zfs-stream-test directory on the same Solaris machine gives:

root # ls -lR /nas/zfs-stream-test
/nas/zfs-stream-test/:
total 12
-rw-r--r--   1 root     root          13 Mar  3 18:59 hello.txt
drwxr-xr-x   2 root     root         143 Mar  3 18:59 radeon

/nas/zfs-stream-test/radeon:
total 6144
-rw-r--r--   1 root     root        8704 Mar  3 18:59 ARUBA_me.bin
-rw-r--r--   1 root     root        8704 Mar  3 18:59 ARUBA_pfp.bin
-rw-r--r--   1 root     root        6144 Mar  3 18:59 ARUBA_rlc.bin
-rw-r--r--   1 root     root       24096 Mar  3 18:59 BARTS_mc.bin
-rw-r--r--   1 root     root        5504 Mar  3 18:59 BARTS_me.bin
-rw-r--r--   1 root     root        4480 Mar  3 18:59 BARTS_pfp.bin
(...)

The dataset is exactly the same as on the Linux machine!

   Note

We took only a simple case here: ZFS is able to handle snapshots in a very flexible way. You can ask, for example, to combine several consecutive snapshots and send them as a single snapshot, or you can choose to proceed in incremental steps. man zfs will tell you all about the art of streaming your snapshots.
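
For the record, the incremental form mentioned in the note relies on the -i option of zfs send; here is a minimal sketch of its general shape (snapshot names reused from earlier for illustration only, and the receiving side must already hold the first snapshot):

root # zfs send -i myfirstpool/myfirstDS@snapshot-1 myfirstpool/myfirstDS@snapshot-2 > /tmp/incr-snap1-to-snap2.zfs   # only the blocks changed between the two snapshots are streamed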

Govern a dataset by attributes

In the ZFS world, many aspects are now managed by simply setting/clearing a property attached to a ZFS dataset through the now so well-known command zfs. You can, for example:

  • put a size limit on a dataset
  • control if new files are encrypted and/or compressed
  • define a quota
  • control checksum usage => never turn that property off unless you have very good reasons, which you are very unlikely to ever have (no checksums = no silent data corruption detection)
  • share a dataset by NFS/CIFS (Samba)
  • control data deduplication

Not all of a dataset's properties are settable: some of them are set and managed by the operating system in the background for you and thus cannot be modified. Like any other action concerning datasets, properties are set and unset via the zfs command. Let's start by checking the value of all supported attributes for the dataset myfirstpool/myfirstDS:

root # zfs get all myfirstpool/myfirstDS
NAME                   PROPERTY              VALUE                   SOURCE
myfirstpool/myfirstDS  type                  filesystem              -
myfirstpool/myfirstDS  creation              Sun Mar  2 15:26 2014   -
myfirstpool/myfirstDS  used                  2.96M                   -
myfirstpool/myfirstDS  available             6.00G                   -
myfirstpool/myfirstDS  referenced            2.96M                   -
myfirstpool/myfirstDS  compressratio         1.00x                   -
myfirstpool/myfirstDS  mounted               yes                     -
myfirstpool/myfirstDS  quota                 none                    default
myfirstpool/myfirstDS  reservation           none                    default
myfirstpool/myfirstDS  recordsize            128K                    default
myfirstpool/myfirstDS  mountpoint            /myfirstpool/myfirstDS  default
myfirstpool/myfirstDS  sharenfs              off                     default
myfirstpool/myfirstDS  checksum              on                      default
myfirstpool/myfirstDS  compression           off                     default
myfirstpool/myfirstDS  atime                 on                      default
myfirstpool/myfirstDS  devices               on                      default
myfirstpool/myfirstDS  exec                  on                      default
myfirstpool/myfirstDS  setuid                on                      default
myfirstpool/myfirstDS  readonly              off                     default
myfirstpool/myfirstDS  zoned                 off                     default
myfirstpool/myfirstDS  snapdir               hidden                  default
myfirstpool/myfirstDS  aclinherit            restricted              default
myfirstpool/myfirstDS  canmount              on                      default
myfirstpool/myfirstDS  xattr                 on                      default
myfirstpool/myfirstDS  copies                1                       default
myfirstpool/myfirstDS  version               5                       -
myfirstpool/myfirstDS  utf8only              off                     -
myfirstpool/myfirstDS  normalization         none                    -
myfirstpool/myfirstDS  casesensitivity       sensitive               -
myfirstpool/myfirstDS  vscan                 off                     default
myfirstpool/myfirstDS  nbmand                off                     default
myfirstpool/myfirstDS  sharesmb              off                     default
myfirstpool/myfirstDS  refquota              none                    default
myfirstpool/myfirstDS  refreservation        none                    default
myfirstpool/myfirstDS  primarycache          all                     default
myfirstpool/myfirstDS  secondarycache        all                     default
myfirstpool/myfirstDS  usedbysnapshots       1K                      -
myfirstpool/myfirstDS  usedbydataset         2.96M                   -
myfirstpool/myfirstDS  usedbychildren        0                       -
myfirstpool/myfirstDS  usedbyrefreservation  0                       -
myfirstpool/myfirstDS  logbias               latency                 default
myfirstpool/myfirstDS  dedup                 off                     default
myfirstpool/myfirstDS  mlslabel              none                    default
myfirstpool/myfirstDS  sync                  standard                default
myfirstpool/myfirstDS  refcompressratio      1.00x                   -
myfirstpool/myfirstDS  written               1K                      -
myfirstpool/myfirstDS  snapdev               hidden                  default
   Note

The manual page of the zfs command gives a list and description of every attribute supported by a dataset.

Maybe something piqued your curiosity: "what does SOURCE mean?". SOURCE describes how the property has been determined for the dataset and can have several values:

  • local: the property has been explicitly set for this dataset
  • default: a default value has been assigned by the operating system because the property was not explicitly set by the system administrator
  • dash (-): immutable property (e.g. dataset creation time, whether the dataset is currently mounted or not...)

Of course, you can get the value of a single property if you know its name instead of asking for all of them.

Compressing data

root # zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE     SOURCE
myfirstpool/myfirstDS  compression  off       default

Let's activate compression on the dataset (notice the change in the SOURCE column). That is achieved through an attribute simply named compression, which can be changed by running the zfs command with the set sub-command followed by the attribute's name (compression here) and its value (on here), like this:

root # zfs set compression=on myfirstpool/myfirstDS
root # zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE     SOURCE
myfirstpool/myfirstDS  compression  on        local

The attribute's new value becomes effective immediately: no need to unmount and remount anything. Setting compression to on only affects new data, not what already exists on the dataset. For your information, the lzjb compression algorithm is used when compression is set to on; you can override this and use another compression algorithm by explicitly stating your choice. For example, if you want to activate LZ4 compression on the dataset:

root # zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE     SOURCE
myfirstpool/myfirstDS  compression  off       default
root # zfs set compression=lz4 myfirstpool/myfirstDS
root # zfs get compression myfirstpool/myfirstDS
NAME                   PROPERTY     VALUE     SOURCE
myfirstpool/myfirstDS  compression  lz4        local

Assuming myfirstpool/myfirstDS is empty with no snapshots:

root # cp -a /usr/src/linux-3.13.5-gentoo /myfirstpool/myfirstDS
root # zfs get all myfirstpool/myfirstDS
NAME                   PROPERTY              VALUE                   SOURCE
myfirstpool/myfirstDS  type                  filesystem              -
myfirstpool/myfirstDS  creation              Sun Mar  2 15:26 2014   -
myfirstpool/myfirstDS  used                  584M                    -
myfirstpool/myfirstDS  available             5.43G                   -
myfirstpool/myfirstDS  referenced            584M                    -
myfirstpool/myfirstDS  compressratio         1.96x                   -      <<<< Compression ratio
myfirstpool/myfirstDS  mounted               yes                     -
myfirstpool/myfirstDS  quota                 none                    default
myfirstpool/myfirstDS  reservation           none                    default
myfirstpool/myfirstDS  recordsize            128K                    default
myfirstpool/myfirstDS  mountpoint            /myfirstpool/myfirstDS  default
myfirstpool/myfirstDS  sharenfs              off                     default
myfirstpool/myfirstDS  checksum              on                      default
myfirstpool/myfirstDS  compression           on                      local   <<<< LZJB compression active
myfirstpool/myfirstDS  atime                 on                      default
myfirstpool/myfirstDS  devices               on                      default
myfirstpool/myfirstDS  exec                  on                      default
myfirstpool/myfirstDS  setuid                on                      default
myfirstpool/myfirstDS  readonly              off                     default
myfirstpool/myfirstDS  zoned                 off                     default
myfirstpool/myfirstDS  snapdir               hidden                  default
myfirstpool/myfirstDS  aclinherit            restricted              default
myfirstpool/myfirstDS  canmount              on                      default
myfirstpool/myfirstDS  xattr                 on                      default
myfirstpool/myfirstDS  copies                1                       default
myfirstpool/myfirstDS  version               5                       -
myfirstpool/myfirstDS  utf8only              off                     -
myfirstpool/myfirstDS  normalization         none                    -
myfirstpool/myfirstDS  casesensitivity       sensitive               -
myfirstpool/myfirstDS  vscan                 off                     default
myfirstpool/myfirstDS  nbmand                off                     default
myfirstpool/myfirstDS  sharesmb              off                     default
myfirstpool/myfirstDS  refquota              none                    default
myfirstpool/myfirstDS  refreservation        none                    default
myfirstpool/myfirstDS  primarycache          all                     default
myfirstpool/myfirstDS  secondarycache        all                     default
myfirstpool/myfirstDS  usedbysnapshots       0                       -
myfirstpool/myfirstDS  usedbydataset         584M                    -
myfirstpool/myfirstDS  usedbychildren        0                       -
myfirstpool/myfirstDS  usedbyrefreservation  0                       -
myfirstpool/myfirstDS  logbias               latency                 default
myfirstpool/myfirstDS  dedup                 off                     default
myfirstpool/myfirstDS  mlslabel              none                    default
myfirstpool/myfirstDS  sync                  standard                default
myfirstpool/myfirstDS  refcompressratio      1.96x                   -
myfirstpool/myfirstDS  written               584M                    -
myfirstpool/myfirstDS  snapdev               hidden                  default

Notice the value of compressratio: it no longer shows 1.00x but a shiny 1.96x here (a 1.96:1 ratio). We get a high compression ratio here because we copied a lot of source code files; if we had put in a lot of already compressed data (images in jpeg or png format for example) the ratio would have been much lower.

Changing the mountpoint

Let's change the mount point of myfirstpool/myfirstDS to something like /mnt/floppy instead of /myfirstpool/myfirstDS for the sake of demonstration purposes. Changing a dataset mountpoint is done via its mountpoint attribute:

root # zfs get mountpoint myfirstpool/myfirstDS
NAME                   PROPERTY    VALUE                   SOURCE
myfirstpool/myfirstDS  mountpoint  /myfirstpool/myfirstDS  default
root # zfs set mountpoint=/mnt/floppy myfirstpool/myfirstDS
root # zfs list  
NAME                     USED  AVAIL  REFER  MOUNTPOINT
myfirstpool             2.38G  5.43G   850M  /myfirstpool
myfirstpool/myfirstDS    584M  5.43G   584M  /mnt/floppy
myfirstpool/mysecondDS  1003M  5.43G  1003M  /myfirstpool/mysecondDS
root # mount | grep floppy
myfirstpool/myfirstDS on /mnt/floppy type zfs (rw,xattr)

Notice that the dataset has been automatically unmounted and remounted at the new location for you and, once again, the change is effective immediately. If the indicated mountpoint is not empty, ZFS is smart enough to warn you and not mount the dataset over it.
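
To revert to the default mountpoint later, a property can be reset to its inherited/default value with the inherit sub-command; a minimal sketch using the dataset from this tutorial, after which zfs get should report the default mountpoint again:

root # zfs inherit mountpoint myfirstpool/myfirstDS
root # zfs get mountpoint myfirstpool/myfirstDS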

Sharing a dataset through NFS

Now that you are a bit more familiar with ZFS properties, you won't be that surprised to learn that sharing a dataset can be done by setting one of its properties. You can, of course, go the "traditional" way and edit Samba's or NFS-related configuration files by hand, but why hassle with manual editing when ZFS can do that for you? ZFS on Linux has support for both systems.

Next, let's share the myfirstpool/myfirstDS dataset by NFS to any host within the network 192.168.1.0/24 (read-write access). An important detail here: the zfs command will use NFS v4 by default, so any options related to NFS v4 can be passed on the command line; refer to your NFS server documentation for further information on what is supported and how to use those options. To share the dataset by NFS, you must change a property named sharenfs:

root # zfs set sharenfs='rw=@192.168.1.0/24' myfirstpool/myfirstDS

What happened? Simple:

root # zfs get sharenfs myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE               SOURCE
myfirstpool/myfirstDS  sharenfs  rw=@192.168.1.0/24  local
root # cat /etc/dfs/sharetab 
/myfirstpool/myfirstDS  -       nfs     rw=@192.168.1.0/24
   Important

The syntax and behaviour are similar to what is found under Solaris 11: zfs share reads and updates entries coming from the file /etc/dfs/sharetab (not /etc/exports). This is a Solaris touch: under Solaris 11 the zfs and share commands now act on /etc/dfs/sharetab, /etc/dfs/dfstab being no longer supported.

A check with the showmount command confirms the dataset is exported:

root # showmount -e  
Export list for .... :
/myfirstpool/myfirstDS 192.168.1.0/24

At this point it should be possible to mount the dataset from another host on the network (here a Solaris 11 machine) and write some data in it:

root # mkdir -p /mnt/myfirstDS
root # mount 192.168.1.19:/myfirstpool/myfirstDS /mnt/myfirstDS
root #  mount | grep myfirst
/mnt/myfirstDS on 192.168.1.19:/myfirstpool/myfirstDS remote/read/write/setuid/devices/rstchown/xattr/dev=89c0002 on Sun Mar  9 14:28:55 2014
root # cp /kernel/amd64/genunix /mnt/myfirstDS

Et voila! No sign of protest, so the file has been copied. If we check what the ZFS dataset looks like on the Linux host where it resides, the copied file (a Solaris kernel image here) is present:

root # ls -l /myfirstpool/myfirstDS/genunix 
-rwxr-xr-x 1 root root 5769456 Mar  9 14:32 /myfirstpool/myfirstDS/genunix

$100 question: how do you "unshare" the dataset? Simple: just set sharenfs to off! Be aware that the NFS server will cease to share the dataset even if it is still in use by client machines. Any NFS client that still has the dataset mounted at that point will encounter RPC errors whenever an I/O operation is attempted on the share (Solaris NFS client here):

root # ls /mnt/myfirstDS
NFS compound failed for server 192.168.1.19: error 7 (RPC: Authentication error)
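
For reference, a minimal unshare sketch using the dataset from this tutorial (zfs inherit sharenfs would instead revert the property to its inherited value):

root # zfs set sharenfs=off myfirstpool/myfirstDS

After this, showmount -e should no longer list /myfirstpool/myfirstDS.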

Sharing a dataset through Samba/SMB

Let's push the limit a bit and use Samba instead of NFS. ZFS relies on Samba (net-fs/samba on Gentoo/Funtoo) to get the job done as it does not implement an SMB server on its own. So Samba must be emerged first, making sure that:

  • it has built-in ACL support (acl use flag)
  • client tools are built (client use flag) as ZoL invokes the net command behind the scene (i.e. net usershare ... )
  • usershare must be functional

Quoting the zfs command's manual page, your Samba server must also be configured like this:

  • Samba will need to listen to 'localhost' (127.0.0.1) for the zfs utilities to communicate with samba. This is the default behaviour for most Linux distributions.
  • Samba must be able to authenticate a user. This can be done in a number of ways, depending on if using the system password file, LDAP or the Samba specific smbpasswd file. How to do this is outside the scope of this manual. Please refer to the smb.conf(5) manpage for more information.
  • See the USERSHARE section of the smb.conf(5) man page for all configuration options in case you need to modify any options of the share afterwards. Do note that any changes done with the 'net' command will be undone if the share is ever unshared (such as at a reboot etc). In the future, ZoL will be able to set specific options directly using sharesmb=<option>.

What you have to know at this point is that, once emerged on your Funtoo box, Samba has no configuration file and thus will refuse to start. You can use the provided example file /etc/samba/smb.conf.example as a starting point for /etc/samba/smb.conf, just copy it:

root # cd /etc/samba
root # cp smb.conf.example smb.conf

Now create the directory /var/lib/samba/usershares (it will host the definitions of all usershares). For the context of this tutorial, and unless you use ZFS delegation, leaving the owner (root:root) untouched is acceptable; see the permission requirements listed a bit further below.

root # mkdir /var/lib/samba/usershares

Several important things to know unless you have hours to waste with your friend Google:

  • When you set the sharesmb property to on, the zfs command will invoke Samba's net command behind the scenes to create a usershare (comment and ACL values are both specified). E.g. zfs set sharesmb=on myfirstpool/myfirstDS => net usershare add myfirstpool_myfirstDS /myfirstpool/myfirstDS "Comment:/myfirstpool/myfirstDS" "Everyone:F" guest_ok=n
  • Under which user will the net usershare command be invoked? Unless ZFS delegation is used, root will be the owner of the usershare, whose definition is stored in a textual file (named after the usershare's name) located in the directory /var/lib/samba/usershares. Per Samba requirements there are three very important details about the directory /var/lib/samba/usershares:
    • Its owner must be root; the group is of secondary importance and left to your discretion
    • Its permissions must be 1775 (so owner = rwx, group = rwx, others = r-x with the sticky bit set).
    • If the directory is not set up as above, Samba will simply ignore any usershares you define. So if you get errors like BAD_NETWORK_NAME when connecting to a usershare created by ZFS, double check the owner and permissions set on /var/lib/samba/usershares (or whatever directory your Funtoo box uses to hold usershare definitions).
  • Unless explicitly overridden in /etc/samba/smb.conf:
    • usershare max shares defaults to zero, so no usershare can be created. If you forget to set a value greater than zero for usershare max shares, any zfs set sharesmb=on command will complain with the message cannot share (...) smb add share failed (and any net usershare add command will show the error message net usershare: usershares are currently disabled).
    • usershare path = /var/lib/samba/usershares
    • usershare owner only is set to true by default, so Samba will refuse the share to any remote user not opening a session as root on the share

So basically a super-minimalistic configuration for Samba would be:

[global]
   workgroup = MYGROUP
   server string = Samba Server
   security = user
   log file = /var/log/samba/log.%m
   max log size = 50
   # Permit the usershares to be accessed from a remote client machine by users other than 'root'
   usershare owner only = False
   # WARNING: the default value for usershare max shares is 0, so no usershares would be possible
   usershare max shares = 10

   Warning

This configuration is obviously for demonstration purposes within the scope of this tutorial, do not use it in the real world!

At this point, reload or restart Samba if you have altered /etc/samba/smb.conf. Now that usershares are possible, let's share a ZFS dataset over Samba:

root # zfs set sharesmb=on myfirstpool/myfirstDS 
root # zfs get sharesmb myfirstpool/myfirstDS
NAME                   PROPERTY  VALUE     SOURCE
myfirstpool/myfirstDS  sharesmb  on        local

The command must return without any error message; if you get something like "cannot share myfirstpool/myfirstDS smb add share failed" then usershares are not functional on your machine (see the notes just above). A Samba usershare named after the zpool and the dataset names should now exist:

root # net usershare list
myfirstpool_myfirstDS
root # net usershare info myfirstpool_myfirstDS
[myfirstpool_myfirstDS]
path=/myfirstpool/myfirstDS
comment=Comment: /myfirstpool/myfirstDS
usershare_acl=Everyone:F,
guest_ok=n

So far so good! Let's now try this on the machine itself:
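
One possible way to do so is smbclient, part of the Samba client tools emerged earlier; a sketch, assuming a Samba user root whose password has already been set with smbpasswd:

root # smbclient //localhost/myfirstpool_myfirstDS -U root

After entering the password you should get an smb: \> prompt, from which ls lists the contents of the shared dataset.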

Data redundancy with ZFS

Nothing is perfect and storage media (even in datacenter-class equipment) are prone to failures and fail on a regular basis. Having data redundancy is mandatory to help prevent single points of failure (SPoF). Over the past decades, RAID technologies were powerful, however their power is precisely their weakness: as they operate at the block level, they do not care about what is stored in the data blocks and have no way to interact with the filesystems stored on them to ensure data integrity is properly handled.

Some statistics

It is no secret that a general trend in the IT industry is the exponential growth of data quantities. Just think about the amount of data Youtube, Google or Facebook generate every day. Taking the case of the first, some statistics give:

  • 24 hours of video uploaded every minute as of March 2010 (May 2009: 20h / October 2008: 15h / May 2008: 13h)
  • More than 2 billion views a day
  • More video is uploaded to Youtube every 60 days than the 3 major US broadcasting networks produced in the last 60 years

Facebook is also impressive (Facebook's own stats):

  • over 900 million objects that people interact with (pages, groups, events and community pages)
  • Average user creates 90 pieces of content each month (750 million active users)
  • More than 2.5 million websites have integrated with Facebook

What is true for Facebook and Youtube is also true in many other cases (think for one minute about the amount of data stored in iTunes), especially with the growing popularity of cloud computing infrastructures. Despite the progress of technology, a "bottleneck" still exists: storage reliability has remained nearly the same over the years. If only one organization in the world generates huge quantities of data it would be CERN (Conseil Européen pour la Recherche Nucléaire, now officially known as the European Organization for Nuclear Research), as their experiments can generate spikes of many terabytes of data within a few seconds. A study done in 2007, quoted by a ZDNet article, reveals that:

  • Even ECC memory cannot always be helpful: 3 double-bit errors (uncorrectable) occurred in 3 months on 1300 nodes. Bad news: it should be zero.
  • RAID systems cannot protect in all cases: monitoring 492 RAID controllers for 4 weeks showed an average error rate of 1 per ~10^14 bits, giving roughly 300 errors for every 2.4 petabytes
  • Magnetic storage is still not reliable even on high-end datacenter class drives: 500 errors found over 100 nodes while writing a 2 GB file to 3000+ nodes every 2 hours then reading it again and again for 5 weeks.

Overall this means: 22 corrupted files (1 in every 1500 files) for a grand total of 33700 files holding 8.7TB of data. And this study is 5 years old....

Sources of silent data corruption

http://www.zdnet.com/blog/storage/50-ways-to-lose-your-data/168

Not an exhaustive list but we can quote:

  • Cheap controllers or buggy drivers that do not report errors/pre-failure conditions to the operating system;
  • "bit-leaking": a hard drive consists of many concentric magnetic tracks. When the hard drive magnetic head writes bits on the magnetic surface it generates a very weak magnetic field, which is nevertheless sufficient to "leak" onto the next track and flip some bits. Drives can generally compensate for those situations because they also record some error correction data on the magnetic surface
  • magnetic surface defects (weak sectors)
  • Hard drive firmware bugs
  • Cosmic rays hitting your RAM chips or hard drive cache memory/electronics

Building a mirrored pool
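
A two-way mirror is created by listing the member devices after the mirror keyword; a minimal sketch with two hypothetical spare disks /dev/sdb and /dev/sdc (zpool status then shows the mirror vdev and the state of its members):

root # zpool create mymirrorpool mirror /dev/sdb /dev/sdc
root # zpool status mymirrorpool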

ZFS RAID-Z

ZFS/RAID-Z vs RAID-5

RAID-5 is very commonly used nowadays because of its simplicity, efficiency and fault-tolerance. Although the technology has proven itself over decades, it has a major drawback known as "the RAID-5 write hole". If you are familiar with RAID-5 you already know that it consists of spreading the stripes across all of the disks within the array and interleaving them with a special stripe called the parity. Several schemes of spreading stripes/parity between disks exist in the wild, each one with its own pros and cons; however the "standard" one (also known as left-asynchronous) is:

Disk_0  | Disk_1  | Disk_2  | Disk_3
[D0_S0] | [D0_S1] | [D0_S2] | [D0_P]
[D1_S0] | [D1_S1] | [D1_P]  | [D1_S2]
[D2_S0] | [D2_P]  | [D2_S1] | [D2_S2]
[D3_P]  | [D3_S0] | [D3_S1] | [D3_S2]

The parity is simply computed by XORing the stripes of the same "row", thus giving the general equation:

  • [Dn_S0] XOR [Dn_S1] XOR ... XOR [Dn_Sm] XOR [Dn_P] = 0

This equation can be rewritten in several ways:

  • [Dn_S0] XOR [Dn_S1] XOR ... XOR [Dn_Sm] = [Dn_P]
  • [Dn_S1] XOR [Dn_S2] XOR ... XOR [Dn_Sm] XOR [Dn_P] = [Dn_S0]
  • [Dn_S0] XOR [Dn_S2] XOR ... XOR [Dn_Sm] XOR [Dn_P] = [Dn_S1]
  • ...and so on!

Because the equations are combinations of exclusive-or, it is possible to easily compute a term if it is missing. Let's say we have 3 stripes plus one parity composed of 4 bits each, but one of them is missing due to a disk failure:

  • D0_S0 = 1011
  • D0_S1 = 0010
  • D0_S2 = <missing>
  • D0_P = 0110

However we know that:

  • D0_S0 XOR D0_S1 XOR D0_S2 XOR D0_P = 0000, also rewritten as:
  • D0_S2 = D0_S0 XOR D0_S1 XOR D0_P

Applying boolean algebra gives: D0_S2 = 1011 XOR 0010 XOR 0110 = 1111. Proof: 1011 XOR 0010 XOR 1111 = 0110, which is indeed the same as D0_P.

'So what's the deal?' Okay, now the funny part: forget the above hypothesis and imagine we have this:

  • D0_S0 = 1011
  • D0_S1 = 0010
  • D0_S2 = 1101
  • D0_P = 0110

Applying boolean algebra magic gives 1011 XOR 0010 XOR 1101 => 0100. Problem: this is different from D0_P (0110). Can you tell which one (or which ONES) of the four terms lies? If you find a mathematically acceptable solution, found your own company because you have just solved a big computer science problem. If humans can't solve the question, imagine how hard it is for the poor little RAID-5 controller to determine which stripe is right and which one lies, and the resulting "datageddon" (i.e. massive data corruption on the RAID-5 array) when the RAID-5 controller detects the error and starts to rebuild the array.

This is not science fiction, this is pure reality, and the weakness lies in RAID-5's very simplicity. Here is how it can happen: an urban legend about RAID-5 arrays is that they update stripes in an atomic transaction (either all of the stripes+parity are written or none of them). Too bad, this is just not true: the data is written on the fly, and if for one reason or another the machine hosting the RAID-5 array suffers a power outage or a crash, the RAID-5 controller will simply have no idea about what it was doing and which stripes are up to date and which ones are not. Of course, RAID controllers in servers do have a replaceable on-board battery and most of the time the server they reside in is connected to an auxiliary power source like a battery-based UPS or a diesel/gas electricity generator. However, Murphy's law or unpredictable hazards can sometimes strike....

Another funny scenario: imagine a machine with a RAID-5 array (on a UPS this time) but with non-ECC memory. The RAID-5 controller splits the data buffer into stripes, computes the parity stripe and starts to write them on the different disks of the array. But...but...but... For some odd reason, only one bit in one of the stripes flips (cosmic rays, RFI...) after the parity calculation. Too bad, too sad: one of the written stripes contains corrupted data and it is silently written to the array. Datageddon in sight!

Not to freak you out: storage units have sophisticated error correction capabilities (a magnetic surface or an optical recording surface is not perfect and reading/writing errors occur) masking most of the cases. However, some established statistics estimate that even with error correction mechanisms, one bit in every 10^16 bits transferred is incorrect. 10^16 is really huge, but unfortunately, in this beginning of the XXIst century, with datacenters brewing massive amounts of data on several hundreds if not thousands of servers, this number starts to give headaches: a big datacenter can face silent data corruption every 15 minutes (Wikipedia). No typo here, a potential disaster may silently appear 4 times an hour, every single day of the year. Detection techniques exist, but traditional RAID-5 arrays in themselves can be a problem. Ironic for such a popular and widely used solution :)

If RAID-5 was an acceptable trade-off in the past decades, it has simply had its time. RAID-5 is dead? *Hooray!*
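
For comparison, creating a RAID-Z vdev with ZFS is a one-liner; a minimal sketch with three hypothetical disks (raidz2 and raidz3 work the same way, with two and three parity disks respectively):

root # zpool create myraidzpool raidz /dev/sdb /dev/sdc /dev/sdd
root # zpool status myraidzpool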

More advanced topics

Z-Volumes (ZVOLs)
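
A zvol is created with zfs create -V and then shows up as a block device that can be partitioned or formatted with any filesystem; a minimal sketch (the 2G size and the ext4 choice are arbitrary):

root # zfs create -V 2G myfirstpool/myfirstZVOL
root # ls -l /dev/zvol/myfirstpool/myfirstZVOL
root # mkfs.ext4 /dev/zvol/myfirstpool/myfirstZVOL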

ZFS Intent Log (ZIL)
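
The ZIL normally lives inside the pool itself; it can be moved to a dedicated (ideally fast) device by adding a log vdev. A sketch, assuming a hypothetical SSD at /dev/sde (zpool remove can detach it again later):

root # zpool add myfirstpool log /dev/sde
root # zpool status myfirstpool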

Permission delegation

ZFS brings a feature known as delegated administration. Delegated administration enables ordinary users to handle administrative tasks on a dataset without being administrators. It is, however, not a sudo replacement as it covers only ZFS related tasks such as sharing/unsharing, disk quota management and so on. Permission delegation shines in flexibility because such delegation can be handled by inheritance through nested datasets. Permission delegation is handled via zfs through its allow and unallow sub-commands.
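
A minimal sketch, assuming a hypothetical non-root user alice; running zfs allow with only a dataset name displays the permissions currently delegated on it:

root # zfs allow alice create,mount,snapshot myfirstpool/myfirstDS
root # zfs allow myfirstpool/myfirstDS
root # zfs unallow alice myfirstpool/myfirstDS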

Final words and lessons learned

ZFS on Linux, while still in development, showed strong capabilities and supports many of the features found in the Solaris/OpenIndiana implementation. It also seems to be very stable, as no crashes or kernel oopses happened while writing this tutorial. Funtoo does not officially support installations over ZFS datasets; however, you can always read the ZFS Install Guide to have a Funtoo box relying on ZFS!

Footnotes & references

Source: solaris-zfs-administration-guide