ZFS Install Guide

Introduction

This tutorial will show you how to install Funtoo on ZFS (as the root filesystem). It is meant to be an "overlay" over the Regular Funtoo Installation: follow the normal installation and only use this guide for steps 2, 3, and 8.

Introduction to ZFS

Since ZFS is a new technology for Linux, it can be helpful to understand some of its benefits, particularly in comparison to BTRFS, another popular next-generation Linux filesystem:

  • On Linux, the ZFS code can be updated independently of the kernel to obtain the latest fixes. btrfs is exclusive to Linux and you need to build the latest kernel sources to get the latest fixes.
  • ZFS is supported on multiple platforms. The platforms with the best support are Solaris, FreeBSD and Linux. Other platforms with varying degrees of support are NetBSD, Mac OS X and Windows. btrfs is exclusive to Linux.
  • ZFS has the Adaptive Replacement Cache replacement algorithm while btrfs uses the Linux kernel's Least Recently Used replacement algorithm. The former often has an overwhelmingly superior hit rate, which means fewer disk accesses.
  • ZFS has the ZFS Intent Log and SLOG devices, which accelerate small synchronous write performance.
  • ZFS handles internal fragmentation gracefully, such that you can fill it to 100%. Internal fragmentation in btrfs can make btrfs think it is full at 10%. Btrfs has no automatic rebalancing code, so it requires a manual rebalance to correct it.
  • ZFS has raidz, which is like RAID 5/6 (or a hypothetical RAID 7 that supports 3 parity disks), except it does not suffer from the RAID write hole issue thanks to its use of CoW and a variable stripe size. btrfs gained integrated RAID 5/6 functionality in Linux 3.9. However, its implementation uses a stripe cache that can only partially mitigate the effect of the RAID write hole.
  • The ZFS send/receive implementation supports incremental updates when doing backups. btrfs' send/receive implementation requires sending the entire snapshot.
  • ZFS supports data deduplication, which is a memory hog and only works well for specialized workloads. btrfs has no equivalent.
  • ZFS datasets have a hierarchical namespace while btrfs subvolumes have a flat namespace.
  • ZFS has the ability to create virtual block devices called zvols in its namespace. btrfs has no equivalent and must rely on the loop device for this functionality, which is cumbersome.

The only area where btrfs is ahead of ZFS is in the area of small file efficiency. btrfs supports a feature called block suballocation, which enables it to store small files far more efficiently than ZFS. It is possible to use another filesystem (e.g. reiserfs) on top of a ZFS zvol to obtain similar benefits (with arguably better data integrity) when dealing with many small files (e.g. the portage tree).
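
For illustration, here is a minimal sketch of that zvol approach, assuming a pool named tank (as created later in this guide) and substituting ext4 for reiserfs; the dataset name tank/smallfiles is purely hypothetical:

Create a 10G zvol, format it, and mount it
# zfs create -V 10G tank/smallfiles
# mkfs.ext4 /dev/zvol/tank/smallfiles
# mkdir -p /mnt/smallfiles
# mount /dev/zvol/tank/smallfiles /mnt/smallfiles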

For a quick tour of ZFS and a big picture of its common operations, you can consult the ZFS Fun page.

Disclaimers

Warning

This guide is a work in progress. Expect some quirks.

Today is 2015-05-12. ZFS has undergone an upgrade, from 0.6.3 to 0.6.4. Please ensure that you use a RescueCD with ZFS 0.6.3: at the present date, grub 2.02 is not able to deal with the new ZFS parameters. If you want to use ZFS 0.6.4 for pool creation, you should use the compatibility mode.

You should upgrade an existing pool only when grub is able to deal with it, in a future version. If not, you will not be able to boot into your system, and no rollback will help!

Please inform yourself!

Important

Since ZFS was really designed for 64 bit systems, we are only recommending and supporting 64 bit platforms and installations. We will not be supporting 32 bit platforms!
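
Before you continue, it is worth verifying both points from the rescue environment. A quick sanity check, assuming the rescue CD's standard tools:

Confirm a 64 bit kernel and check the loaded ZFS module version
# uname -m
x86_64
# modinfo zfs | grep -iw version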

Downloading the ISO (With ZFS)

In order for us to install Funtoo on ZFS, you will need an environment that already provides the ZFS tools. Therefore we will download a customized version of System Rescue CD with ZFS included.

Name: sysresccd-4.2.0_zfs_0.6.2.iso  (545 MB)
Release Date: 2014-02-25
md5sum 01f4e6929247d54db77ab7be4d156d85


Download System Rescue CD with ZFS
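
After downloading, check the image against the md5sum listed above; the output should match 01f4e6929247d54db77ab7be4d156d85:

Verify the download
# md5sum sysresccd-4.2.0_zfs_0.6.2.iso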

Creating a bootable USB from ISO (From a Linux Environment)

After you download the ISO, follow these steps to create a bootable USB:

Make a temporary directory
# mkdir /tmp/loop

Mount the iso
# mount -o ro,loop /root/sysresccd-4.2.0_zfs_0.6.2.iso /tmp/loop

Run the usb installer
# /tmp/loop/usb_inst.sh

That should be all you need to do to get your flash drive working.

Booting the ISO

Warning

When booting into the ISO, make sure that you select the "Alternate 64 bit kernel (altker64)". The ZFS modules have been built specifically for this kernel rather than the standard kernel. If you select a different kernel, you will get a "failed to load module stack" error message.

Creating partitions

There are two ways to partition your disk: You can use your entire drive and let ZFS automatically partition it for you, or you can do it manually.
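
For reference, the automatic route is a single command; this sketch assumes /dev/sda and gives up the separate /boot partition used in the rest of this guide:

Let ZFS partition and use the whole disk (not what we do below)
# zpool create -f -o ashift=12 tank /dev/sda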

We will be showing you how to partition the disk manually, because doing it manually lets you create your own layout, gives you a separate /boot partition (which is nice, since not every bootloader supports booting from ZFS pools), and lets you boot into RAID10, RAID5 (RAIDZ) pools and other layouts thanks to that separate /boot partition.

gdisk (GPT Style)

A Fresh Start:

First, let's make sure that the disk is completely wiped of any previous disk labels and partitions. We will also assume that /dev/sda is the target drive.

# sgdisk -Z /dev/sda
Warning

This is a destructive operation and the program will not ask you for confirmation! Make sure you really don't want anything on this disk.

Now that we have a clean drive, we will create the new layout.

First open up the application:

# gdisk /dev/sda

Create Partition 1 (boot):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: +250M ↵
Hex Code: ↵

Create Partition 2 (BIOS Boot Partition):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: +32M ↵
Hex Code: EF02 ↵

Create Partition 3 (ZFS):

Command: n ↵
Partition Number: ↵
First sector: ↵
Last sector: ↵
Hex Code: bf00 ↵

Command: p ↵

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          514047   250.0 MiB   8300  Linux filesystem
   2          514048          579583   32.0 MiB    EF02  BIOS boot partition
   3          579584      1953525134   931.2 GiB   BF00  Solaris root

Command: w ↵


Format your /boot partition

# mkfs.ext2 -m 1 /dev/sda1
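
The -m 1 option reserves only 1% of the blocks for the superuser instead of the default 5%. If you want to double-check the result, an optional look at the new filesystem:

Inspect the new filesystem
# tune2fs -l /dev/sda1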

Create the zpool

We will first create the pool. The pool will be named tank; feel free to name your pool as you want. We will use the ashift=12 option, which is used for hard drives with a 4096-byte sector size.

# zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank /dev/sda3
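
You can confirm that the pool came up with the expected settings. And if you had several disks, a redundant pool could be created the same way; the raidz variant below is a hypothetical sketch assuming matching third partitions on three drives:

Check the new pool
# zpool status tank
# zpool get ashift tank

A raidz variant of the same command (illustration only)
# zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/funtoo tank raidz /dev/sda3 /dev/sdb3 /dev/sdc3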

Create the zfs datasets

We will now create some datasets. For this installation, we will create a small but future-proof set of datasets. We will have a dataset for the OS (/) and one for your swap (see the swap zvol sketch after the commands below). We will also show you how to create some optional datasets as examples: /home, /usr/src, and /usr/portage.

Create some empty containers for organization purposes, and make the dataset that will hold /
# zfs create -p tank/funtoo
# zfs create -o mountpoint=/ tank/funtoo/root

Optional, but recommended datasets: /home
# zfs create -o mountpoint=/home tank/funtoo/home

Optional datasets: /usr/src, /usr/portage/{distfiles,packages}
# zfs create -o mountpoint=/usr/src tank/funtoo/src
# zfs create -o mountpoint=/usr/portage -o compression=off tank/funtoo/portage
# zfs create -o mountpoint=/usr/portage/distfiles tank/funtoo/portage/distfiles
# zfs create -o mountpoint=/usr/portage/packages tank/funtoo/portage/packages
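
The cleanup section later assumes a swap zvol at tank/swap, which the commands above do not create. Here is one hedged way to set it up; the 2G size is an arbitrary example:

Create and activate a swap zvol
# zfs create -V 2G -b 4K -o compression=zle -o sync=always -o primarycache=metadata tank/swap
# mkswap -f /dev/zvol/tank/swap
# swapon /dev/zvol/tank/swap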

Installing Funtoo

Pre-Chroot

Go into the directory that you will chroot into
# cd /mnt/funtoo

Make a boot folder and mount your boot drive
# mkdir boot
# mount /dev/sda1 boot

Now download and extract the Funtoo stage3 ...
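
That page has the details; in short, it comes down to something like the sketch below. The exact stage3 URL depends on the current release and your subarch, so treat this one as a placeholder:

Download and extract the stage3 into /mnt/funtoo
# wget http://build.funtoo.org/funtoo-current/x86-64bit/generic_64/stage3-latest.tar.xz
# tar xpf stage3-latest.tar.xz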


Note

It is strongly recommended to use the current version and generic64. That reduces the risk of a broken build.

After a successful ZFS installation and a successful first boot, the kernel may be changed using the eselect profile set ... command. If you create a snapshot beforehand, you can always come back to your previous installation with a few simple steps (roll back your pool and, in the worst case, configure and install the bootloader again).


Once you've extracted the stage3, do a few more preparations and chroot into your new funtoo environment:

Bind the kernel related directories
# mount -t proc none proc
# mount --rbind /dev dev
# mount --rbind /sys sys

Copy network settings
# cp -f /etc/resolv.conf etc

Make the zfs folder in 'etc' and copy your zpool.cache
# mkdir etc/zfs
# cp /tmp/zpool.cache etc/zfs

Chroot into Funtoo
# env -i HOME=/root TERM=$TERM chroot . bash -l
Note

How to create a zpool.cache file?

If no zpool.cache file is available, the following command will create one:

# zpool set cachefile=/etc/zfs/zpool.cache tank


Downloading the Portage tree

Note

For an alternative way to do this, see Installing Portage From Snapshot.

Now it's time to install a copy of the Portage repository, which contains package scripts (ebuilds) that tell portage how to build and install thousands of different software packages. To create the Portage repository, simply run emerge --sync from within the chroot. This will automatically clone the portage tree from GitHub:

(chroot) # emerge --sync
Important

If you receive an error with the initial emerge --sync due to git protocol restrictions, change the SYNC variable in /etc/portage/make.conf:

SYNC="https://github.com/funtoo/ports-2012.git"
Note

To update the Funtoo Linux system just type:

(chroot) # emerge -auDN @world

Add filesystems to /etc/fstab

Before we continue to compile and/or install our kernel in the next step, we will edit the /etc/fstab file, because if we decide to install our kernel through portage, portage will need to know where our /boot is so that it can place the files there.

Edit /etc/fstab:

/etc/fstab
# <fs>                  <mountpoint>    <type>          <opts>          <dump/pass>

/dev/sda1               /boot           ext2            defaults        0 2
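
If you created the swap zvol suggested earlier, you could also list it here (a hedged example; the zfs service handles the datasets themselves):

/dev/zvol/tank/swap     none            swap            sw              0 0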

Building kernel, initramfs and grub to work with zfs

Install genkernel and initial kernel build

We need to install genkernel and build an initial kernel:

# emerge genkernel

Build initial kernel (required for checks in sys-kernel/spl and sys-fs/zfs):
# genkernel kernel --no-clean --no-mountboot 

Installing the ZFS userspace tools and kernel modules

Emerge sys-fs/zfs. This package will bring in sys-kernel/spl and sys-fs/zfs-kmod as its dependencies:

# emerge zfs

Check to make sure that the zfs tools are working. The zpool.cache file that you copied before should be displayed.

# zpool status
# zfs list

Add the zfs tools to openrc.

# rc-update add zfs boot

If everything worked, continue.

Install GRUB 2

Install grub2:

# echo "sys-boot/grub libzfs -truetype" >> /etc/portage/package.use
# emerge grub

Now install grub to the drive itself (not a partition):

# grub-install /dev/sda
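
Before moving on, you can ask grub which filesystem it detects on the boot partition; it should report ext2:

Optional sanity check
# grub-probe /boot
ext2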

Rebuild genkernel and the kernel with ZFS support

Re-emerge genkernel with the zfs USE flag:

# echo "sys-kernel/genkernel zfs" >> /etc/portage/package.use
# emerge genkernel

Now build the kernel and initramfs with --zfs
# genkernel all --zfs --no-clean --no-mountboot --callback="emerge @module-rebuild"


Note

During the build, you should see the ZFS configuration being picked up.

If the build breaks, restart it.

Configuring the Bootloader

When using genkernel, you must add 'real_root=ZFS=<root>' and 'dozfs' to your kernel params. Edit the entry in /etc/boot.conf:

/etc/boot.conf
"Funtoo ZFS" {
        kernel kernel[-v]
        initrd initramfs-genkernel-x86_64[-v]
        params real_root=ZFS=tank/funtoo/root
        params += dozfs=force
}

The command boot-update should take care of grub configuration:

Install boot-update (if it is missing):
# emerge boot-update

Run boot-update to update grub.cfg
# boot-update
Note

If boot-update fails, try this:

# grub-mkconfig -o /boot/grub/grub.cfg

Now you should have a new installation of the kernel, initramfs and grub, all zfs-capable. The configuration files should be updated, and the system should come up during the next reboot.

Note

The luks integration works basically the same way.

Final configuration

Clean up and reboot

We are almost done; we are just going to clean up, set our root password, unmount whatever we mounted, and get out.

Delete the stage3 tarball that you downloaded earlier so it doesn't take up space.
# cd /
# rm stage3-latest.tar.xz

Set your root password
# passwd
>> Enter your password, you won't see what you are writing (for security reasons), but it is there!

Get out of the chroot environment
# exit

Unmount all the kernel filesystem stuff and boot (if you have a separate /boot)
# umount -l proc dev sys boot

Turn off the swap
# swapoff /dev/zvol/tank/swap

Export the zpool
# cd /
# zpool export tank

Reboot
# reboot
Important

Don't forget to set your root password as stated above before exiting chroot and rebooting. If you don't set the root password, you won't be able to log into your new system.

And that should be enough to get your system to boot on ZFS.

After reboot

Forgot to reset password?

System Rescue CD

If you aren't using bliss-initramfs, you can reboot back into your sysresccd and reset it from there by mounting your drive, chrooting, and then typing passwd.

Example:

# zpool import -f -R /mnt/funtoo tank
# chroot /mnt/funtoo bash -l
# passwd
# exit
# zpool export -f tank
# reboot

Create initial ZFS Snapshot

Continue to set up anything you need in terms of /etc configurations. Once you have everything the way you like it, take a snapshot of your system. You will be using this snapshot to revert back to this state if anything ever happens to your system down the road. The snapshots are cheap, and almost instant.

To take the snapshot of your system, type the following:

# zfs snapshot -r tank@install

To see if your snapshot was taken, type:

# zfs list -t snapshot

If your machine ever fails and you need to get back to this state, just type the following (this will only revert your / dataset, keeping the rest of your data intact):

# zfs rollback tank/funtoo/root@install
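
Since send/receive was one of the ZFS advantages mentioned in the introduction, the same snapshot can also be copied to another machine for safekeeping. A sketch, where backuphost and the backup/funtoo dataset are hypothetical:

Send the recursive snapshot to another host running ZFS
# zfs send -R tank@install | ssh backuphost zfs receive -F backup/funtoo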
Important

For a detailed overview, presentation of ZFS' capabilities, as well as usage examples, please refer to the ZFS Fun page.

Troubleshooting

Starting from scratch

If your installation has gotten screwed up for whatever reason and you need a fresh restart, you can do the following from sysresccd to start fresh:

Destroy the pool and any snapshots and datasets it has
# zpool destroy -R -f tank

This wipes the old files from /dev/sda1 so that even after we zap the disk, re-creating a partition at the exact sector position and size will not give us access to the old files in this partition.
# mkfs.ext2 /dev/sda1
# sgdisk -Z /dev/sda

Now start the guide again :).


Starting again reusing the same disk partitions and the same pool

If your installation has gotten screwed up for whatever reason and you want to keep your pool named tank, then you should boot into the Rescue CD / USB as done before.

Import the pool, reusing all existing datasets:
# zpool import -f -R /mnt/funtoo tank

Now you should wipe the previous installation off:

let's go to our base installation directory:
# cd /mnt/funtoo

and delete the old installation: 
# rm -rf *

Now start the guide again, at "Pre-Chroot".