This HOWTO describes how to set up a rootfs on LVM over a LUKS-encrypted raid-1 array on a GPT-partitioned drive.
= Rootfs over encrypted lvm over raid-1 on GPT =
 
To start, read [[Rootfs_over_encrypted_lvm|Rootfs over encrypted lvm]].
 
To learn how to prepare the hard disk for GPT, read [[Funtoo_Linux_Installation#GPT_Partitions|Funtoo Linux Installation on GPT Partitions]].
In this example, a new system is installed on <code>/dev/sdb</code>:
 
<console>
###i## gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.6.13
 
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present
 
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 67AC0F92-E033-4B53-B6C5-D99DD8F49D90
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625142414
Partitions will be aligned on 2048-sector boundaries
Total free space is 3038 sectors (1.5 MiB)
 
Number  Start (sector)    End (sector)  Size      Code  Name
  1            2048          206847  100.0 MiB  0700  Linux/Windows data
  2          206848          207871  512.0 KiB  EF02  BIOS boot partition
  3          208896      625142414  298.0 GiB  FD00  Linux RAID
</console>
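If the disk has not been partitioned yet, a layout like the one above can be created with <code>sgdisk</code>, which ships in the same gptfdisk package as <code>gdisk</code>. This is only a sketch; the sizes and type codes simply mirror the example output above, so adjust them to your needs:
<console>
###i## sgdisk --new=1:0:+100M --typecode=1:0700 /dev/sdb
###i## sgdisk --new=2:0:+512K --typecode=2:EF02 /dev/sdb
###i## sgdisk --new=3:0:0 --typecode=3:FD00 /dev/sdb
</console>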
 
If you plan to create the raid-1 with only one partition for now (<code>/dev/sdb3</code> in this example) and add the second device to the mirror later, once the installation has succeeded, issue something like:
<console>
###i## mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb3
</console>
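Later, when the first half of the mirror holds a working system, attach the second device. A minimal sketch, assuming the missing member will be <code>/dev/sda3</code>:
<console>
###i## mdadm /dev/md0 --add /dev/sda3
</console>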
 
If you prefer to add both final destination devices to the array right from the start, issue something like:
<console>
###i## mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
</console>
 
If everything went well, the arrays will start synchronising immediately. You can monitor the progress by looking at the contents of <code>/proc/mdstat</code>:
 
<console>
###i## cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[1] sda5[0]
      581595328 blocks [2/2] [UU]
        resync=DELAYED
 
md1 : active raid1 sdb4[1] sda4[0]
      41942976 blocks [2/2] [UU]
      [>....................]  resync = 1.6% (691456/41942976) finish=8.9min speed=76828K/sec
 
md0 : active raid1 sdb1[1] sda1[0]
      511936 blocks [2/2] [UU]
 
unused devices: <none>
###i##
</console>
 
Now, that's awesome, isn't it? :)
Even more awesome is the fact that you can start using your shiny new RAID immediately. It will finish its sync in the background while you make changes to its filesystem.
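If you want more detail than <code>/proc/mdstat</code> shows, you can also inspect an array individually:
<console>
###i## mdadm --detail /dev/md0
</console>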
 
== Encrypting the raid-1 ==
 
<console>
###i## cryptsetup -c aes-xts-plain luksFormat /dev/md0
###i## cryptsetup luksOpen /dev/md0 dmcrypt_root
</console>
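After <code>luksOpen</code>, the decrypted array is available as <code>/dev/mapper/dmcrypt_root</code> and can be used as the LVM physical volume. A minimal sketch of the LVM layer follows; the volume group name <code>vg</code> and the logical volume names and sizes are examples only, see [[Rootfs_over_encrypted_lvm|Rootfs over encrypted lvm]] for the full procedure:
<console>
###i## pvcreate /dev/mapper/dmcrypt_root
###i## vgcreate vg /dev/mapper/dmcrypt_root
###i## lvcreate -L 2G -n swap vg
###i## lvcreate -L 20G -n root vg
###i## lvcreate -l 100%FREE -n home vg
</console>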
 
== Initramfs setup and configuration ==
=== No initramfs ===
To activate the raid-1 during boot without an initramfs, perform the following:
<syntaxhighlight lang="bash">
echo "Activating RAID device."
# On the first run, generate a minimal /etc/mdadm.conf listing the devices to
# scan and the arrays found on them.
if [ ! -e /etc/mdadm.conf ]
then
    echo "DEVICE /dev/sda[0-9] /dev/sdb[0-9] /dev/md[0-9]" >> /etc/mdadm.conf
    mdadm --examine --scan --config=/etc/mdadm.conf >> /etc/mdadm.conf
fi
# Assemble all arrays described in /etc/mdadm.conf.
mdadm --assemble --scan
</syntaxhighlight>
 
=== Better-initramfs ===
Alternatively, use [https://bitbucket.org/piotrkarbowski/better-initramfs better-initramfs], which comes with mdadm raid-1 support:
<console>
###i## git clone https://bitbucket.org/piotrkarbowski/better-initramfs.git
</console>
The project is well documented in its README.rst, which is displayed on the repository's overview page.
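A rough sketch of building the initramfs image from the checkout; the exact targets are described in README.rst and may have changed, so treat this as an outline rather than a recipe:
<console>
###i## cd better-initramfs
###i## ./bootstrap/bootstrap-all
###i## make prepare
###i## make image
</console>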
 
== Grub2 configuration ==
 
When using better-initramfs, do not forget to point it at the encrypted array via the kernel command line:
<pre>enc_root=/dev/md0</pre>
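For reference, a minimal <code>/etc/boot.conf</code> sketch for better-initramfs; the kernel and initramfs file names, the volume group <code>vg</code> and the logical volume <code>root</code> are examples and must match your setup. Depending on the better-initramfs version, an additional option may be needed so it assembles the array itself; check README.rst for the exact parameter name:
<pre>
boot {
	generate grub
	default "Funtoo Linux"
	timeout 3
}

"Funtoo Linux" {
	kernel bzImage[-v]
	initrd /initramfs.cpio.gz
	params += luks lvm enc_root=/dev/md0 root=/dev/mapper/vg-root rootfstype=ext4
}
</pre>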
 
Also, in the [[Funtoo_Linux_Installation#Running_grub-install_and_boot-update|non-RAID Funtoo Linux Installation doc]], GRUB is only installed into the MBR of <tt>/dev/sda</tt>. When using RAID-1 for <tt>/boot</tt>, you most probably want to be able to boot the system from the second device of the array if the first one dies. For that to work, a bootloader must be present in the MBR of every array member disk.
Because of this, run the GRUB installation command from the main doc for each of your array member disks instead of only the first one.
If, for example, you are building the RAID that holds <tt>/boot</tt> from <tt>/dev/sda1</tt> and <tt>/dev/sdb1</tt>, issue the following:
 
<console>
(chroot) # ##i##grub-install --no-floppy /dev/sda
(chroot) # ##i##grub-install --no-floppy /dev/sdb
(chroot) # ##i##boot-update
</console>
 
= Additional links =
* [http://en.gentoo-wiki.com/wiki/RAID/Software RAID/Software]
* [http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide]
 
[[Category:HOWTO]]
