<div>== Introduction ==<br />
<br />
This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the [[Funtoo_Linux_Installation|Regular Funtoo Installation]]. Follow the normal installation and only use this guide for steps 2, 3, and 8.<br />
<br />
=== Introduction to ZFS ===<br />
<br />
Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:<br />
<br />
* On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.<br />
<br />
* ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.<br />
<br />
* ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.<br />
<br />
* ZFS has the ZFS Intent Log and SLOG devices, which accelerate small synchronous write performance.<br />
<br />
* ZFS handles internal fragmentation gracefully, such that you can fill it to 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.<br />
<br />
* ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.<br />
<br />
* ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.<br />
<br />
* ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.<br />
<br />
* ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.<br />
<br />
* ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.<br />
<br />
The only area where btrfs is ahead of ZFS is in the area of small file<br />
efficiency. btrfs supports a feature called block suballocation, which<br />
enables it to store small files far more efficiently than ZFS. It is<br />
possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol<br />
to obtain similar benefits (with arguably better data integrity) when<br />
dealing with many small files (e.g. the portage tree).<br />
<br />
=== Disclaimers ===<br />
<br />
{{fancywarning|This guide is a work in progress. Expect some quirks.}}<br />
{{fancyimportant|'''Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms'''!}}<br />
<br />
== Video Tutorial ==<br />
<br />
As a companion to the install instructions below, a ZFS install video tutorial is now available on YouTube:<br />
<br />
{{#widget:YouTube|id=kxEdSXwU0ZI|width=640|height=360}}<br />
<br />
== Downloading the ISO (With ZFS) ==<br />
To install Funtoo on ZFS, you will need an environment that provides the ZFS tools, so we will download a customized version of System Rescue CD with ZFS already included.<br />
<br />
<pre><br />
Name: sysresccd-3.7.1_zfs_0.6.2.iso (492 MB)<br />
Release Date: 2013-08-27<br />
md5sum e6cbebfafb3c32c97be4acd1bb099743<br />
</pre><br />
<br />
<br />
'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br /><br />
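Before writing the ISO to a USB stick, it is worth checking the download against the md5sum listed above. A minimal sketch, assuming the ISO was saved to /root as in the mount command below:

```shell
# Compare the downloaded ISO's md5sum against the published value above.
expected="e6cbebfafb3c32c97be4acd1bb099743"
iso="/root/sysresccd-3.7.1_zfs_0.6.2.iso"

actual=$(md5sum "$iso" 2>/dev/null | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - re-download the ISO" >&2
fi
```

If the checksums do not match, do not boot the image; download it again.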
<br />
== Creating a bootable USB from ISO ==<br />
After you download the iso, you can do the following steps to create a bootable USB:<br />
<br />
<console><br />
Make a temporary directory<br />
# ##i##mkdir /tmp/loop<br />
<br />
Mount the iso<br />
# ##i##mount -o ro,loop /root/sysresccd-3.7.1_zfs_0.6.2.iso /tmp/loop<br />
<br />
Run the usb installer<br />
# ##i##/tmp/loop/usb_inst.sh<br />
</console><br />
<br />
That should be all you need to do to get your flash drive working.<br />
<br />
== Creating partitions ==<br />
There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.<br />
<br />
We will show you how to partition the disk '''manually'''. Manual partitioning lets you create your own layout, gives you a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and lets you boot from RAID10, RAIDZ (RAID5-style) pools, and other layouts, because the bootloader only needs to read the separate /boot partition.<br />
<br />
==== gdisk (GPT Style) ====<br />
<br />
'''A Fresh Start''':<br />
<br />
First let's make sure that the disk is completely wiped of any previous disk labels and partitions.<br />
We will also assume that <tt>/dev/sda</tt> is the target drive.<br /><br />
<br />
<console><br />
# ##i##gdisk /dev/sda<br />
<br />
Command: ##i##x ↵<br />
Expert command: ##i##z ↵<br />
About to wipe out GPT on /dev/sda. Proceed?: ##i##y ↵<br />
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.<br />
Blank out MBR?: ##i##y ↵<br />
</console><br />
<br />
{{fancywarning|This is a destructive operation. Make sure you really don't want anything on this disk.}}<br />
<br />
Now that we have a clean drive, we will create the new layout.<br />
<br />
'''Create Partition 1''' (boot):<br />
<console><br />
Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##+250M ↵<br />
Hex Code: ##i##↵<br />
</console><br />
<br />
'''Create Partition 2''' (BIOS Boot Partition):<br />
<console>Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##+32M ↵<br />
Hex Code: ##i##EF02 ↵<br />
</console><br />
<br />
'''Create Partition 3''' (ZFS):<br />
<console>Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##↵<br />
Hex Code: ##i##bf00 ↵<br />
<br />
Command: ##i##p ↵<br />
<br />
Number Start (sector) End (sector) Size Code Name<br />
1 2048 514047 250.0 MiB 8300 Linux filesystem<br />
2 514048 579583 32.0 MiB EF02 BIOS boot partition<br />
3 579584 1953525134 931.2 GiB BF00 Solaris root<br />
<br />
Command: ##i##w ↵<br />
</console><br />
<br />
<br />
=== Format your boot volume ===<br />
Format your separate /boot partition:<br />
<console># ##i##mkfs.ext2 /dev/sda1</console><br />
<br />
=== Encryption (Optional) ===<br />
If you want encryption, then create your encrypted vault(s) now by doing the following:<br />
<br />
<console><br />
# ##i##cryptsetup luksFormat /dev/sda3<br />
# ##i##cryptsetup luksOpen /dev/sda3 vault_1<br />
</console><br />
<br />
=== Create the zpool ===<br />
We will first create the pool. The pool will be named <tt>rpool</tt> and the disk will be aligned to 4096-byte sectors (using '''ashift=12'''):<br />
<console># ##i##zpool create -f -o ashift=12 -o cachefile= -O compression=on -m none -R /mnt/funtoo rpool /dev/sda3</console><br />
<br />
{{fancyimportant|If you are using encrypted root, change '''/dev/sda3 to /dev/mapper/vault_1'''.}}<br />
<br />
{{fancynote|'''ashift<nowiki>=</nowiki>12''' should be used if you have a newer, Advanced Format disk that has a sector size of 4096 bytes. If you have an older disk with 512-byte sectors, use '''ashift<nowiki>=</nowiki>9''', or omit the option to let ZFS auto-detect the sector size.}}<br />
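If you are not sure which value applies to your disk, you can ask the kernel for the drive's physical sector size and derive ashift from it (ashift is log2 of the sector size). A minimal sketch, assuming <tt>/dev/sda</tt> is the target drive as above; it falls back to 512 if the sysfs node is absent:

```shell
# Derive a zpool ashift value from a sector size: ashift = log2(size).
ashift_for() {
    n=$1 a=0
    while [ "$n" -gt 1 ]; do n=$((n / 2)); a=$((a + 1)); done
    echo "$a"
}

# Query the physical sector size the kernel reports for the drive.
sector=$(cat /sys/block/sda/queue/physical_block_size 2>/dev/null || echo 512)
echo "sector size $sector bytes -> use ashift=$(ashift_for "$sector")"
```

A 512-byte disk yields ashift=9 and a 4096-byte Advanced Format disk yields ashift=12, matching the note above.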
<br />
{{fancynote|If you have a previous pool that you would like to import, you can do a: '''zpool import -f -R /mnt/funtoo <pool_name>'''}}<br />
<br />
=== Create the zfs datasets ===<br />
We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for swap. We will also show you how to create some optional datasets: /home, /usr/portage, and /usr/src.<br />
<br />
<console><br />
Create some empty containers for organization purposes, and make the dataset that will hold /<br />
# ##i##zfs create rpool/ROOT<br />
# ##i##zfs create -o mountpoint=/ rpool/ROOT/funtoo<br />
<br />
Optional, but recommended datasets: /home<br />
# ##i##zfs create -o mountpoint=/home rpool/HOME<br />
<br />
Optional, portage tree, distfiles, and binary packages:<br />
# ##i##zfs create rpool/FUNTOO<br />
# ##i##zfs create -o mountpoint=/usr/portage -o compression=off rpool/FUNTOO/portage<br />
# ##i##zfs create -o mountpoint=/usr/portage/distfiles rpool/FUNTOO/portage/distfiles<br />
# ##i##zfs create -o mountpoint=/usr/portage/packages rpool/FUNTOO/portage/packages<br />
<br />
Optional datasets: /usr/src<br />
# ##i##zfs create -o mountpoint=/usr/src rpool/FUNTOO/src<br />
</console><br />
<br />
=== Create your swap zvol ===<br />
'''Make your swap zvol 1G larger than your RAM, so an 8G machine would have 9G of swap (though this is rather large; on machines with that much memory, 2G is usually plenty if you are not seeing memory pressure).'''<br />
<console><br />
# ##i##zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G rpool/swap<br />
</console><br />
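The rule of thumb above can be computed from /proc/meminfo instead of guessed. A minimal sketch that only prints a suggested -V value; it does not create anything:

```shell
# Suggest a swap zvol size of RAM + 1G, per the rule of thumb above.
# MemTotal in /proc/meminfo is reported in KiB; round up to whole GiB.
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gib=$(( (mem_kib + 1048575) / 1048576 ))
swap_gib=$(( mem_gib + 1 ))
echo "RAM is about ${mem_gib}G; suggested swap zvol size: -V ${swap_gib}G"
```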
<br />
=== Format your swap zvol ===<br />
<console><br />
# ##i##mkswap -f /dev/zvol/rpool/swap<br />
# ##i##swapon /dev/zvol/rpool/swap<br />
</console><br />
<br />
<br />
=== Last minute checks and touches ===<br />
Check to make sure everything appears fine. Your output may differ depending on the choices you made above:<br />
<console><br />
# ##i##zpool status<br />
pool: rpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
rpool ONLINE 0 0 0<br />
	  sda3    ONLINE       0     0     0<br />
<br />
errors: No known data errors<br />
<br />
# ##i##zfs list<br />
NAME                USED  AVAIL  REFER  MOUNTPOINT<br />
rpool              3.10G  15.5G   136K  none<br />
rpool/HOME          136K  15.5G   136K  /mnt/funtoo/home<br />
rpool/ROOT          308K  15.5G   136K  none<br />
rpool/ROOT/funtoo   172K  15.5G   172K  /mnt/funtoo<br />
rpool/swap         3.09G  18.6G    76K  -<br />
</console><br />
<br />
Now we will continue to install funtoo.<br />
<br />
== Installing Funtoo ==<br />
[[Funtoo_Linux_Installation|Download and extract the Funtoo stage3 and continue installation as normal.]]<br />
<br />
Then once you've extracted the stage3, chroot into your new funtoo environment:<br />
<console><br />
Go into the directory that you will chroot into<br />
# ##i##cd /mnt/funtoo<br />
<br />
Mount your boot drive<br />
# ##i##mount /dev/sda1 /mnt/funtoo/boot<br />
<br />
Bind the kernel related directories<br />
# ##i##mount -t proc none /mnt/funtoo/proc<br />
# ##i##mount --rbind /dev /mnt/funtoo/dev<br />
# ##i##mount --rbind /sys /mnt/funtoo/sys<br />
<br />
Copy network settings<br />
# ##i##cp /etc/resolv.conf /mnt/funtoo/etc/<br />
<br />
chroot into your new funtoo environment<br />
# ##i##env -i HOME=/root TERM=$TERM chroot /mnt/funtoo /bin/bash --login<br />
<br />
Place your mountpoints into your /etc/mtab file<br />
# ##i##cat /proc/mounts > /etc/mtab<br />
<br />
Sync your tree<br />
# ##i##emerge --sync<br />
</console><br />
<br />
=== Add filesystems to /etc/fstab ===<br />
<br />
Before we compile and install our kernel in the next step, we will edit the /etc/fstab file: if we install our kernel through portage, portage needs to know where your /boot is so that it can place the files there.<br />
<br />
<console><br />
# ##i##nano /etc/fstab<br />
<br />
# <fs> <mountpoint> <type> <opts> <dump/pass><br />
# Do not add the /boot line below if you are using whole-disk zfs<br />
/dev/sda1 /boot ext2 defaults 0 2<br />
/dev/zvol/rpool/swap none swap sw 0 0<br />
</console><br />
<br />
== Kernel Configuration ==<br />
To speed up this step, you can install "bliss-kernel", since it is already properly configured for ZFS and many other common setups, and comes precompiled and ready to go. To install bliss-kernel, type the following:<br />
<br />
<console><br />
# ##i##emerge bliss-kernel<br />
</console><br />
<br />
Now make sure that your /usr/src/linux symlink is pointing to this kernel by typing the following:<br />
<br />
<console><br />
# ##i##eselect kernel list<br />
Available kernel symlink targets:<br />
[1] linux-3.10.10-FB.01 *<br />
</console><br />
<br />
You should see a star next to the bliss-kernel version you installed. In this case it was 3.10.10-FB.01. If it's not set, you can type '''eselect kernel set #'''.<br />
<br />
== Installing the ZFS userspace tools ==<br />
<br />
<console># ##i##emerge -av zfs</console><br />
<br />
Check to make sure that the zfs tools are working; your pool and datasets should be displayed:<br />
<br />
<console><br />
# ##i##zpool status<br />
# ##i##zfs list<br />
</console><br />
<br />
If everything worked, continue.<br />
<br />
== Install the bootloader ==<br />
=== GRUB 2 ===<br />
Before you do this, make sure this checklist is followed:<br />
* Installed kernel and kernel modules<br />
* Installed zfs package from the tree<br />
* /dev, /proc, /sys are mounted in the chroot environment<br />
<br />
Once all this is checked, let's install grub2. First we need to enable the "libzfs" USE flag so ZFS support is compiled into grub2.<br />
<br />
<console># ##i##echo "sys-boot/grub libzfs" >> /etc/portage/package.use</console><br />
<br />
Then we will compile grub2:<br />
<br />
<console># ##i##emerge -av grub</console><br />
<br />
Once this is done, you can check that grub is version 2.00 by running the following command:<br />
<console><br />
# ##i##grub-install --version<br />
grub-install (GRUB) 2.00<br />
</console><br />
<br />
Now try to install grub2:<br />
<console># ##i##grub-install --no-floppy /dev/sda</console><br />
<br />
You should receive the following message<br />
<console>Installation finished. No error reported.</console><br />
<br />
If not, then go back to the above checklist.<br />
<br />
=== LILO ===<br />
Before you do this, make sure the following checklist is followed:<br />
* /dev/, /proc and /sys are mounted.<br />
* Installed the sys-fs/zfs package from the tree.<br />
Once the above requirements are met, LILO can be installed.<br />
<br />
Now we will install LILO.<br />
<console># ##i##emerge -av sys-boot/lilo</console><br />
Once the installation of LILO is complete we will need to edit the lilo.conf file.<br />
<console># ##i##nano /etc/lilo.conf<br />
boot=/dev/sda<br />
prompt<br />
timeout=4<br />
default=Funtoo<br />
<br />
image=/boot/bzImage<br />
label=Funtoo<br />
read-only<br />
append="root=rpool/ROOT/funtoo"<br />
initrd=/boot/initramfs<br />
</console><br />
All that is left now is to install the bootcode to the MBR.<br />
<br />
This can be accomplished by running:<br />
<console># ##i##/sbin/lilo</console><br />
If it is successful you should see:<br />
<console><br />
Warning: LBA32 addressing assumed<br />
Added Funtoo + *<br />
One warning was issued<br />
</console><br />
<br />
== Create the initramfs ==<br />
There are two ways to do this: you can use genkernel, or you can use my Bliss Initramfs Creator. I will show you both.<br />
<br />
=== genkernel ===<br />
<console><br />
# ##i##emerge -av sys-kernel/genkernel<br />
Only add --luks if you used encryption<br />
# ##i##genkernel --zfs --luks initramfs<br />
</console><br />
<br />
=== Bliss Initramfs Creator ===<br />
If you are encrypting your drives, then add the "luks" use flag to your package.use before emerging:<br />
<br />
<console><br />
# ##i##echo "sys-kernel/bliss-initramfs luks" >> /etc/portage/package.use<br />
</console><br />
<br />
Now install the creator:<br />
<br />
<console><br />
# ##i##emerge bliss-initramfs<br />
</console><br />
<br />
<br />
Then go into the install directory, run the script as root, and place it into /boot:<br />
<console># ##i##cd /opt/bliss-initramfs<br />
# ##i##./createInit<br />
# ##i##mv initrd-<kernel_name> /boot<br />
</console><br />
'''<kernel_name>''' is the name of what you selected in the initramfs creator, and the name of the outputted file.<br />
<br />
== Using boot-update ==<br />
=== /boot on separate partition ===<br />
If you created a separate non-ZFS partition for /boot, then configuring boot-update is almost exactly the same as for a normal install, except that auto-detection of root does not work: you must tell boot-update what your root is. <br />
==== Genkernel ====<br />
If you're using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params.<br />
Example entry for boot.conf:<br />
<console><br />
"Funtoo ZFS" {<br />
kernel vmlinuz[-v]<br />
initrd initramfs-genkernel-x86_64[-v]<br />
params real_root=ZFS=rpool/ROOT/funtoo<br />
params += dozfs<br />
# Also add 'params += crypt_root=/dev/sda2' if you used encryption<br />
# Adjust the above setting to your system if needed<br />
}<br />
</console><br />
<br />
==== Bliss Initramfs Creator ====<br />
If you used the Bliss Initramfs Creator then all you need to do is add 'root=<root>' to your params.<br />
Example entry for boot.conf:<br />
<console><br />
"Funtoo ZFS" {<br />
kernel vmlinuz[-v]<br />
initrd initrd[-v]<br />
params root=rpool/ROOT/funtoo quiet<br />
# If you have an encrypted device with a regular passphrase,<br />
# you can add the following line<br />
params += enc_root=/dev/sda3 enc_type=pass<br />
}<br />
</console><br />
<br />
After editing /etc/boot.conf, you just need to run boot-update to update grub.cfg:<br />
<console># ##i##boot-update</console><br />
<br />
=== /boot on ZFS ===<br />
TBC - pending update to boot-update to support this<br />
<br />
== Final configuration ==<br />
=== Add the zfs tools to openrc ===<br />
<console># ##i##rc-update add zfs boot</console><br />
<br />
=== Clean up and reboot ===<br />
We are almost done. Now we will clean up, '''set our root password''', unmount whatever we mounted, and get out.<br />
<br />
<console><br />
Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.<br />
# ##i##cd /<br />
# ##i##rm stage3-latest.tar.xz<br />
<br />
Set your root password<br />
# ##i##passwd<br />
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!<br />
<br />
Get out of the chroot environment<br />
# ##i##exit<br />
<br />
Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)<br />
# ##i##umount -l proc dev sys boot<br />
<br />
Turn off the swap<br />
# ##i##swapoff /dev/zvol/rpool/swap<br />
<br />
Export the zpool<br />
# ##i##cd /<br />
# ##i##zpool export rpool<br />
<br />
Reboot<br />
# ##i##reboot<br />
</console><br />
<br />
{{fancyimportant|'''Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.'''}}<br />
<br />
And that should be enough to get your system to boot on ZFS.<br />
<br />
== After reboot ==<br />
=== Create initial ZFS Snapshot ===<br />
Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You can use this snapshot to revert to this state if anything ever happens to your system down the road. Snapshots are cheap and almost instant.<br />
<br />
To take the snapshot of your system, type the following:<br />
<console># ##i##zfs snapshot -r rpool@install</console><br />
<br />
To see if your snapshot was taken, type:<br />
<console># ##i##zfs list -t snapshot</console><br />
<br />
If your machine ever fails and you need to get back to this state, just type the following (this will only revert your / dataset, keeping the rest of your data intact):<br />
<console># ##i##zfs rollback rpool/ROOT/funtoo@install</console><br />
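Beyond the one-off @install snapshot, recurring dated snapshots are handy as ongoing rollback points. A minimal sketch that builds a timestamped snapshot name for the root dataset created above; the actual zfs call is left commented out so you can review the name first:

```shell
# Build a dated snapshot name, e.g. rpool/ROOT/funtoo@2013-10-23.
dataset="rpool/ROOT/funtoo"
snap="${dataset}@$(date +%Y-%m-%d)"
echo "would create: $snap"
# zfs snapshot "$snap"        # uncomment to actually take the snapshot
```

Run from cron (or by hand), this gives you one rollback point per day; list them with zfs list -t snapshot as above.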
<br />
{{fancyimportant|'''For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the [[ZFS_Fun|ZFS Fun]] page.'''}}<br />
<br />
[[Category:HOWTO]]<br />
[[Category:Filesystems]]<br />
[[Category:Featured]]<br />
<br />
__NOTITLE__</div>
<hr />
<div>== Introduction ==<br />
<br />
This tutorial will show you how to install Funtoo on ZFS (rootfs). This tutorial is meant to be an "overlay" over the [[Funtoo_Linux_Installation|Regular Funtoo Installation]]. Follow the normal installation and only use this guide for steps 2, 3, and 8.<br />
<br />
=== Introduction to ZFS ===<br />
<br />
Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:<br />
<br />
* On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.<br />
<br />
* ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.<br />
<br />
* ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Last Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.<br />
<br />
* ZFS has the ZFS Intent Log and SLOG devices, which accelerates small synchronous write performance.<br />
<br />
* ZFS handles internal fragmentation gracefully, such that you can fill it until 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.<br />
<br />
* ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.<br />
<br />
* ZFS send/receive implementation supports incremental update when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.<br />
<br />
* ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.<br />
<br />
* ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.<br />
<br />
* ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.<br />
<br />
The only area where btrfs is ahead of ZFS is in the area of small file<br />
efficiency. btrfs supports a feature called block suballocation, which<br />
enables it to store small files far more efficiently than ZFS. It is<br />
possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol<br />
to obtain similar benefits (with arguably better data integrity) when<br />
dealing with many small files (e.g. the portage tree).<br />
<br />
=== Disclaimers ===<br />
<br />
{{fancywarning|This guide is a work in progress. Expect some quirks.}}<br />
{{fancyimportant|'''Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms'''!}}<br />
<br />
== Video Tutorial ==<br />
<br />
As a companion to the install instructions below, a YouTube video ZFS install tutorial is now available:<br />
<br />
{{#widget:YouTube|id=kxEdSXwU0ZI|width=640|height=360}}<br />
<br />
== Downloading the ISO (With ZFS) ==<br />
In order for us to install Funtoo on ZFS, you will need an environment that provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS already included.<br />
<br />
<pre><br />
Name: sysresccd-3.7.1_zfs_0.6.2.iso (492 MB)<br />
Release Date: 2013-08-27<br />
md5sum e6cbebfafb3c32c97be4acd1bb099743<br />
</pre><br />
<br />
<br />
'''[http://ftp.osuosl.org/pub/funtoo/distfiles/sysresccd/ Download System Rescue CD with ZFS]'''<br /><br />
<br />
== Creating a bootable USB from ISO ==<br />
After you download the iso, you can do the following steps to create a bootable USB:<br />
<br />
<console><br />
Make a temporary directory<br />
# ##i##mkdir /tmp/loop<br />
<br />
Mount the iso<br />
# ##i##mount -o ro,loop /root/sysresccd-3.7.1_zfs_0.6.2.iso /tmp/loop<br />
<br />
Run the usb installer<br />
# ##i##/tmp/loop/usb_inst.sh<br />
</console><br />
<br />
That should be all you need to do to get your flash drive working.<br />
<br />
== Creating partitions ==<br />
There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.<br />
<br />
We will be showing you how to partition it '''manually''' because if you partition it manually you get to create your own layout, you get to have your own separate /boot partition (Which is nice since not every bootloader supports booting from ZFS pools), and you get to boot into RAID10, RAID5 (RAIDZ) pools and any other layouts due to you having a separate /boot partition.<br />
<br />
==== fdisk (MBR Style) ====<br />
<br />
'''A Fresh Start''':<br />
<br />
First lets make sure that the disk is completely wiped from any previous disk labels and partitions.<br />
We will also assume that <tt>/dev/sda</tt> is the target drive.<br /><br />
<br />
<console><br />
# ##i##fdisk /dev/sda<br />
Command (m for help): ##i##o ↵<br />
Building a new DOS disklabel with disk identifier 0xbeead864.<br />
</console><br />
<br />
{{fancywarning|This is a destructive operation. Make sure you really don't want anything on this disk.}}<br />
<br />
Now that we have a clean drive, we will create the new layout.<br />
<br />
'''Create Partition 1''' (boot):<br />
<console><br />
Command: ##i##n ↵<br />
Partition type: ##i##↵<br />
Partition number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##+250M ↵<br />
</console><br />
<br />
'''Create Partition 2''' (ZFS):<br />
<console><br />
Command: ##i##n ↵<br />
Partition type: ##i##↵<br />
Partition number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##↵<br />
<br />
Command: ##i##t ↵<br />
Partition number: ##i##2 ↵<br />
Hex code: ##i##bf ↵<br />
<br />
Command: ##i##p ↵<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 2048 514047 256000 83 Linux<br />
/dev/sda2 514048 1953525167 976505560 bf Solaris<br />
</console><br />
<br />
==== gdisk (GPT Style) ====<br />
<br />
'''A Fresh Start''':<br />
<br />
First lets make sure that the disk is completely wiped from any previous disk labels and partitions.<br />
We will also assume that <tt>/dev/sda</tt> is the target drive.<br /><br />
<br />
<console><br />
# ##i##gdisk /dev/sda<br />
<br />
Command: ##i##x ↵<br />
Expert command: ##i##z ↵<br />
About to wipe out GPT on /dev/sda. Proceed?: ##i##y ↵<br />
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.<br />
Blank out MBR?: ##i##y ↵<br />
</console><br />
<br />
{{fancywarning|This is a destructive operation. Make sure you really don't want anything on this disk.}}<br />
<br />
Now that we have a clean drive, we will create the new layout.<br />
<br />
'''Create Partition 1''' (boot):<br />
<console><br />
Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##+250M ↵<br />
Hex Code: ##i##↵<br />
</console><br />
<br />
'''Create Partition 2''' (BIOS Boot Partition):<br />
<console>Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##+32M ↵<br />
Hex Code: ##i##EF02 ↵<br />
</console><br />
{{fancyimportant|Only make the above BIOS Boot Partition if you are using GRUB 2 on GPT. If you are using the extlinux bootloader, this partition is not necessary.}}<br />
<br />
'''Create Partition 3''' (ZFS):<br />
<console>Command: ##i##n ↵<br />
Partition Number: ##i##↵<br />
First sector: ##i##↵<br />
Last sector: ##i##↵<br />
Hex Code: ##i##bf00 ↵<br />
<br />
Command: ##i##p ↵<br />
<br />
Number Start (sector) End (sector) Size Code Name<br />
1 2048 514047 250.0 MiB 8300 Linux filesystem<br />
2 514048 579583 32.0 MiB EF02 BIOS boot partition<br />
3 579584 1953525134 931.2 GiB BF00 Solaris root<br />
<br />
Command: ##i##w ↵<br />
</console><br />
<br />
<br />
=== Format your boot volume ===<br />
Format your separate /boot partition:<br />
<console># ##i##mkfs.ext2 /dev/sda1</console><br />
<br />
<br />
=== Encryption (Optional) ===<br />
If you want encryption, then create your encrypted vault(s) now by doing the following:<br />
<br />
<console><br />
# ##i##cryptsetup luksFormat /dev/sda2<br />
# ##i##cryptsetup luksOpen /dev/sda2 vault_1<br />
</console><br />
<br />
{{fancyimportant|If you followed the manual GPT partitioning instructions, you should change '''/dev/sda2 to /dev/sda3'''.}}<br />
=== Create the zpool ===<br />
We will first create the pool. The pool will be named `rpool` and the disk will be aligned to 4096 (using ashift=12)<br />
<console># ##i##zpool create -f -o ashift=12 -o cachefile= -O compression=on -m none -R /mnt/funtoo rpool /dev/sda2</console><br />
<br />
{{fancyimportant|If you followed the manual GPT partitioning instructions, you should change '''/dev/sda2 to /dev/sda3'''. If you are using encrypted root, then change '''/dev/sda2 to vault_1'''.}}<br />
<br />
{{fancynote|'''ashift<nowiki>=</nowiki>12''' should be use if you have a newer, advanced format disk that has a sector size of 4096 bytes. If you have an older disk with 512 byte sectors, you should use '''ashift<nowiki>=</nowiki>9''' or don't add the option for auto detection}}<br />
<br />
{{fancynote|If you have a previous pool that you would like to import, you can do a: '''zpool import -f -R /mnt/funtoo <pool_name>'''}}<br />
<br />
<br />
{{fancynote|If you used encryption, replace '''/dev/sda2''' with '''/dev/mapper/vault_1''' or '''/dev/mapper/{whatever you named your encrypted device}'''.}}<br />
<br />
=== Create the zfs datasets ===<br />
We will now create some datasets. For this installation, we will create a small but future proof amount of datasets. We will have a dataset for the OS (/), and your swap. We will also show you how to create some optional datasets: /home, /var, /usr/src, and /usr/portage.<br />
<br />
<console><br />
Create some empty containers for organization purposes, and make the dataset that will hold /<br />
# ##i##zfs create rpool/ROOT<br />
# ##i##zfs create -o mountpoint=/ rpool/ROOT/funtoo<br />
<br />
Optional, but recommended datasets: /home<br />
# ##i##zfs create -o mountpoint=/home rpool/HOME<br />
<br />
Optional, portage tree, distfiles, and binary packages:<br />
# ##i##zfs create rpool/FUNTOO<br />
# ##i##zfs create -o mountpoint=/usr/portage -o compression=off rpool/FUNTOO/portage<br />
# ##i##zfs create -o mountpoint=/usr/portage/distfiles rpool/FUNTOO/portage/distfiles<br />
# ##i##zfs create -o mountpoint=/usr/portage/packages rpool/FUNTOO/portage/packages<br />
<br />
Optional datasets: /usr/src<br />
# ##i##zfs create -o mountpoint=/usr/src rpool/FUNTOO/src<br />
</console><br />
<br />
=== Create your swap zvol ===<br />
'''Size your swap volume appropriately for your machine. A common rule of thumb is RAM + 1G, so an 8G machine would get 9G of swap; that is fairly large, though, and on machines with that much memory 2G is usually enough. Adjust the -V size in the command below accordingly.'''<br />
<console><br />
# ##i##zfs create -o sync=always -o primarycache=metadata -o secondarycache=none -o volblocksize=4K -V 1G rpool/swap<br />
</console><br />
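The sizing rule above can be sketched as a quick calculation (the 8G figure is an example; substitute your machine's RAM):<br />

```shell
# Example of the RAM + 1G rule of thumb (sizes in GiB).
ram_gib=8                     # substitute your machine's RAM
swap_gib=$((ram_gib + 1))     # 8G of RAM -> 9G of swap
echo "zfs create ... -V ${swap_gib}G rpool/swap"
```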
<br />
=== Format your swap zvol ===<br />
<console><br />
# ##i##mkswap -f /dev/zvol/rpool/swap<br />
# ##i##swapon /dev/zvol/rpool/swap<br />
</console><br />
<br />
<br />
=== Last minute checks and touches ===<br />
Check to make sure everything appears fine. Your output may differ depending on the choices you made above:<br />
<console><br />
# ##i##zpool status<br />
pool: rpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
rpool ONLINE 0 0 0<br />
sda2 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
<br />
# ##i##zfs list<br />
rpool 3.10G 15.5G 136K none<br />
rpool/HOME 136K 15.5G 136K /mnt/funtoo/home<br />
rpool/ROOT 308K 15.5G 136K none<br />
rpool/ROOT/funtoo 172K 15.5G 172K /mnt/funtoo<br />
rpool/swap 3.09G 18.6G 76K -<br />
</console><br />
<br />
Now we will continue to install funtoo.<br />
<br />
== Installing Funtoo ==<br />
[[Funtoo_Linux_Installation|Download and extract the Funtoo stage3 and continue installation as normal.]]<br />
<br />
Then once you've extracted the stage3, chroot into your new funtoo environment:<br />
<console><br />
Go into the directory that you will chroot into<br />
# ##i##cd /mnt/funtoo<br />
<br />
Mount your boot drive<br />
# ##i##mount /dev/sda1 /mnt/funtoo/boot<br />
<br />
Bind the kernel related directories<br />
# ##i##mount -t proc none /mnt/funtoo/proc<br />
# ##i##mount --rbind /dev /mnt/funtoo/dev<br />
# ##i##mount --rbind /sys /mnt/funtoo/sys<br />
<br />
Copy network settings<br />
# ##i##cp /etc/resolv.conf /mnt/funtoo/etc/<br />
<br />
chroot into your new funtoo environment<br />
# ##i##env -i HOME=/root TERM=$TERM chroot /mnt/funtoo /bin/bash --login<br />
<br />
Place your mountpoints into your /etc/mtab file<br />
# ##i##cat /proc/mounts > /etc/mtab<br />
<br />
Sync your tree<br />
# ##i##emerge --sync<br />
</console><br />
<br />
=== Add filesystems to /etc/fstab ===<br />
<br />
Before we compile and/or install our kernel in the next step, we will edit the /etc/fstab file, because if we decide to install our kernel through portage, portage needs to know where your /boot is so that it can place the files there. We also need to update /etc/mtab so the system knows what is mounted.<br />
<br />
<console><br />
# ##i##nano /etc/fstab<br />
<br />
# <fs> <mountpoint> <type> <opts> <dump/pass><br />
# Do not add the /boot line below if you are using whole-disk zfs<br />
/dev/sda1 /boot ext2 defaults 0 2<br />
/dev/zvol/rpool/swap none swap sw 0 0<br />
</console><br />
<br />
== Kernel Configuration ==<br />
Your kernel should be configured with the options below, where applicable.<br />
<br />
{{fancynote|The below configurations are the requirements for "Bliss Initramfs Creator". Some of these might not be needed for genkernel.}}<br />
<br />
When you configure the kernel, make sure you disable the CFQ I/O scheduler and turn on No-op (it becomes the default once you disable the other schedulers). This is because ZFS has its own scheduler, which conflicts with CFQ. Go to your kernel source tree (normally /usr/src/linux) and make sure you have the following options enabled in the kernel config:<br />
<br />
<pre><br />
- Linux Kernel<br />
ZLIB_INFLATE/ZLIB_DEFLATE can be compiled as a module but must be declared <br />
in the ADDON_MODS variable in hooks/addon.sh.<br />
<br />
General setup ---><br />
> [*] Initial RAM filesystem and RAM disk (initramfs/initrd) support<br />
> () Initramfs source file(s)<br />
<br />
Device Drivers ---><br />
> Generic Driver Options ---><br />
>> [*] Maintain a devtmpfs filesystem to mount at /dev<br />
>> [*] Automount devtmpfs at /dev, after the kernel mounted the rootfs<br />
<br />
* All other drivers required to see your PATA/SATA drives (or USB devices) need to be compiled in<br />
or you can compile them as a module and declare them in the ADDON_MODS variable.<br />
<br />
For LUKS support:<br />
<br />
- Linux Kernel<br />
Device Drivers ---><br />
[*] Multiple devices driver support (RAID and LVM) ---><br />
<*> Device mapper support<br />
<*> Crypt target support<br />
<br />
Cryptographic API ---><br />
<*> XTS support<br />
-*- AES cipher algorithms</pre><br />
<br />
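To double-check the scheduler at runtime once the system is up, you can read /sys/block/&lt;disk&gt;/queue/scheduler; the active scheduler is the bracketed entry. The sample string below is illustrative:<br />

```shell
# On a real system: scheduler_line=$(cat /sys/block/sda/queue/scheduler)
scheduler_line="[noop] deadline cfq"   # sample sysfs contents

# Extract the bracketed (active) scheduler.
active=$(printf '%s\n' "$scheduler_line" | sed -n 's/.*\[\([^]]*\)\].*/\1/p')
echo "active scheduler: $active"
```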
Continue and compile/install your kernel:<br />
<br />
<console><br />
# ##i##make bzImage modules<br />
# ##i##make install<br />
# ##i##make modules_install<br />
</console><br />
<br />
== Installing the ZFS userspace tools ==<br />
<br />
<console># ##i##emerge -av zfs</console><br />
<br />
Check that the zfs tools are working; the pool recorded in the zpool.cache file that you copied earlier should be displayed.<br />
<br />
<console><br />
# ##i##zpool status<br />
# ##i##zfs list<br />
</console><br />
<br />
If everything worked, continue.<br />
<br />
== Install the bootloader ==<br />
=== GRUB 2 ===<br />
Before you do this, make sure this checklist is followed:<br />
* Installed kernel and kernel modules<br />
* Installed zfs package from the tree<br />
* /dev, /proc, /sys are mounted in the chroot environment<br />
<br />
Once all this is checked, let's install grub2. First we need to enable the "libzfs" use flag so zfs support is compiled for grub2.<br />
<br />
<console># ##i##echo "sys-boot/grub libzfs" >> /etc/portage/package.use</console><br />
<br />
Then we will compile grub2:<br />
<br />
<console># ##i##emerge -av grub</console><br />
<br />
Once this is done, you can check that grub is version 2.00 by running the following command:<br />
<console><br />
# ##i##grub-install --version<br />
grub-install (GRUB) 2.00<br />
</console><br />
<br />
Now try to install grub2:<br />
<console># ##i##grub-install --no-floppy /dev/sda</console><br />
<br />
You should receive the following message:<br />
<console>Installation finished. No error reported.</console><br />
<br />
If not, then go back to the above checklist.<br />
<br />
=== LILO ===<br />
Before you do this, make sure the following checklist is followed:<br />
* /dev/, /proc and /sys are mounted.<br />
* Installed the sys-fs/zfs package from the tree.<br />
Once the above requirements are met, LILO can be installed.<br />
<br />
Now we will install LILO.<br />
<console># ##i##emerge -av sys-boot/lilo</console><br />
Once the installation of LILO is complete we will need to edit the lilo.conf file.<br />
<console># ##i##nano /etc/lilo.conf<br />
boot=/dev/sda<br />
prompt<br />
timeout=4<br />
default=Funtoo<br />
<br />
image=/boot/bzImage<br />
label=Funtoo<br />
read-only<br />
append="root=rpool/ROOT/funtoo"<br />
initrd=/boot/initramfs<br />
</console><br />
All that is left now is to install the bootcode to the MBR.<br />
<br />
This can be accomplished by running:<br />
<console># ##i##/sbin/lilo</console><br />
If it is successful you should see:<br />
<console><br />
Warning: LBA32 addressing assumed<br />
Added Funtoo + *<br />
One warning was issued<br />
</console><br />
<br />
== Create the initramfs ==<br />
There are two ways to do this: you can use genkernel, or you can use the Bliss Initramfs Creator. Both are shown below.<br />
<br />
=== genkernel ===<br />
<console><br />
# ##i##emerge -av sys-kernel/genkernel<br />
# You only need to add --luks if you used encryption<br />
# ##i##genkernel --zfs --luks initramfs<br />
</console><br />
<br />
=== Bliss Initramfs Creator ===<br />
If you are encrypting your drives, then add the "luks" use flag to your package.use before emerging:<br />
<br />
<console><br />
# ##i##echo "sys-kernel/bliss-initramfs luks" >> /etc/portage/package.use<br />
</console><br />
<br />
Now install the creator:<br />
<br />
<console><br />
# ##i##emerge bliss-initramfs<br />
</console><br />
<br />
<br />
Then go into the install directory, run the script as root, and place it into /boot:<br />
<console># ##i##cd /opt/bliss-initramfs<br />
# ##i##./createInit<br />
# ##i##mv initrd-<kernel_name> /boot<br />
</console><br />
'''<kernel_name>''' is the name of what you selected in the initramfs creator, and the name of the outputted file.<br />
<br />
== Using boot-update ==<br />
=== /boot on separate partition ===<br />
If you created a separate non-ZFS partition for /boot, configuring boot-update is almost exactly the same as for a normal install, except that auto-detection of the root filesystem does not work: you must tell boot-update what your root is. <br />
==== Genkernel ====<br />
If you are using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your params.<br />
Example entry for boot.conf:<br />
<console><br />
"Funtoo ZFS" {<br />
kernel vmlinuz[-v]<br />
initrd initramfs-genkernel-x86_64[-v]<br />
params real_root=ZFS=rpool/ROOT/funtoo<br />
params += dozfs<br />
# Also add 'params += crypt_root=/dev/sda2' if you used encryption<br />
# Adjust the above setting to your system if needed<br />
}<br />
</console><br />
<br />
==== Bliss Initramfs Creator ====<br />
If you used the Bliss Initramfs Creator, all you need to do is add 'root=<root>' to your params.<br />
Example entry for boot.conf:<br />
<console><br />
"Funtoo ZFS" {<br />
kernel vmlinuz[-v]<br />
initrd initrd[-v]<br />
params root=rpool/ROOT/funtoo quiet<br />
# If you have an encrypted device with a regular passphrase,<br />
# you can add the following line<br />
params += enc_root=/dev/sda3 enc_type=pass<br />
}<br />
</console><br />
<br />
After editing /etc/boot.conf, you just need to run boot-update to update grub.cfg:<br />
<console># ##i##boot-update</console><br />
<br />
=== /boot on ZFS ===<br />
TBC - pending update to boot-update to support this<br />
<br />
== Final configuration ==<br />
=== Add the zfs tools to openrc ===<br />
<console># ##i##rc-update add zfs boot</console><br />
<br />
=== Clean up and reboot ===<br />
We are almost done. We will clean up, '''set our root password''', unmount whatever we mounted, and get out.<br />
<br />
<console><br />
Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.<br />
# ##i##cd /<br />
# ##i##rm stage3-latest.tar.xz<br />
<br />
Set your root password<br />
# ##i##passwd<br />
>> Enter your password; you won't see what you are typing (for security reasons), but it is there!<br />
<br />
Get out of the chroot environment<br />
# ##i##exit<br />
<br />
Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)<br />
# ##i##umount -l proc dev sys boot<br />
<br />
Turn off the swap<br />
# ##i##swapoff /dev/zvol/rpool/swap<br />
<br />
Export the zpool<br />
# ##i##cd /<br />
# ##i##zpool export rpool<br />
<br />
Reboot<br />
# ##i##reboot<br />
</console><br />
<br />
{{fancyimportant|'''Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.'''}}<br />
<br />
That should be enough to get your system to boot on ZFS.<br />
<br />
== After reboot ==<br />
=== Create initial ZFS Snapshot ===<br />
Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be able to use this snapshot to revert to this state if anything ever happens to your system down the road. Snapshots are cheap and almost instant. <br />
<br />
To take the snapshot of your system, type the following:<br />
<console># ##i##zfs snapshot -r rpool@install</console><br />
<br />
To see if your snapshot was taken, type:<br />
<console># ##i##zfs list -t snapshot</console><br />
<br />
If your machine ever fails and you need to get back to this state, just type the following (this will only revert your / dataset, keeping the rest of your data intact):<br />
<console># ##i##zfs rollback rpool/ROOT/funtoo@install</console><br />
<br />
{{fancyimportant|'''For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the [[ZFS_Fun|ZFS Fun]] page.'''}}<br />
<br />
[[Category:HOWTO]]<br />
[[Category:Filesystems]]<br />
[[Category:Featured]]<br />
<br />
__NOTITLE__</div>118.90.19.249