Difference between pages "Package:Pass" and "Rootfs over encrypted lvm over raid-1 on GPT"

= Package:Pass =
pass is a password manager following the Unix philosophy.


From the [http://www.passwordstore.org/ pass website]:
<blockquote>
With pass, each password lives inside of a gpg encrypted file whose filename is the title of the website or resource that requires the password. These encrypted files may be organized into meaningful folder hierarchies, copied from computer to computer, and, in general, manipulated using standard command line file management utilities.


pass makes managing these individual password files extremely easy. All passwords live in ~/.password-store, and pass provides some nice commands for adding, editing, generating, and retrieving passwords. It is a very short and simple shell script. It's capable of temporarily putting passwords on your clipboard and tracking password changes using git.


You can edit the password store using ordinary unix shell commands alongside the pass command. There are no funky file formats or new paradigms to learn. There is bash completion so that you can simply hit tab to fill in names and commands, as well as completion for zsh and fish available in the completion folder. The community has even produced a GUI client, an iOS app, a Firefox plugin, a dmenu script, and even an emacs package.
</blockquote>
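
For a quick feel for the workflow (the GPG key ID and entry names below are only examples):

<console>
$ ##i##pass init you@example.com
$ ##i##pass insert Email/example.com
$ ##i##pass generate Business/webserver 15
$ ##i##pass -c Email/example.com
</console>

Here <code>pass init</code> creates <code>~/.password-store</code> encrypted to the given GPG identity, <code>insert</code> prompts for an existing password, <code>generate</code> creates a new 15-character one, and <code>-c</code> copies an entry to the clipboard instead of printing it.
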
== Installation ==
You can install 'pass' the usual way:
<console>
###i## emerge -a pass
</console>


== USE Flags ==
However, if you want features like zsh completion or the dmenu script, you should set some USE flags to configure pass to your needs.

If you have 'equery' installed, you can list all available USE flags, including their descriptions, using:
<console>
###i## equery u pass
</console>
{{fancynote|Soon the wiki will list the use flags of packages too!}}

For example, if you want pass to be able to import passwords from other password managers, you should add the 'importers' flag:
<console>
###i## echo "app-admin/pass importers" >> /etc/portage/package.use
</console>
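
After changing USE flags, re-emerge the package so they take effect (emerge's <code>-N</code>/<code>--newuse</code> option rebuilds installed packages whose USE flags have changed):

<console>
###i## emerge -aN pass
</console>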

= Rootfs over encrypted lvm over raid-1 on GPT =

This howto describes how to set up LVM and a root filesystem on a LUKS-encrypted raid-1 array on GPT-partitioned drives.

To start, read [[Rootfs_over_encrypted_lvm|Rootfs over encrypted lvm]].

To learn how to prepare the hard disk for GPT, read [[Funtoo_Linux_Installation#GPT_Partitions|Funtoo Linux Installation, GPT Partitions]]. As an example, we will install a new system on <code>/dev/sdb</code>:

<console>
###i## gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.6.13

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 67AC0F92-E033-4B53-B6C5-D99DD8F49D90
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625142414
Partitions will be aligned on 2048-sector boundaries
Total free space is 3038 sectors (1.5 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          206847   100.0 MiB   0700  Linux/Windows data
   2          206848          207871   512.0 KiB   EF02  BIOS boot partition
   3          208896       625142414   298.0 GiB   FD00  Linux RAID
</console>
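
Both raid-1 members need an identical partition layout. If the second disk is still blank, one way to set it up (a sketch, assuming <code>sgdisk</code> from sys-apps/gptfdisk and that <code>/dev/sda</code> is the blank disk) is to replicate the table from <code>/dev/sdb</code> and then randomize the new disk's GUIDs so the two tables do not share identifiers:

<console>
###i## sgdisk -R=/dev/sda /dev/sdb
###i## sgdisk -G /dev/sda
</console>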

If you plan to create the raid-1 with only one partition for now (/dev/sdb3 in this example) and, if the installation is successful, add the second device to the mirror later, issue something like:

<console>
###i## mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb3
</console>
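
Later, once the matching partition on the second disk exists (we assume <code>/dev/sda3</code> here), attach it and the mirror will rebuild onto it:

<console>
###i## mdadm --manage /dev/md0 --add /dev/sda3
</console>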

If you prefer to add the two final destination devices to the array in the first place, issue something like:

<console>
###i## mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
</console>

If everything worked well, the arrays will start synchronising immediately. You can monitor the progress by looking at the contents of /proc/mdstat (the output below is from a system with three arrays):

<console>
###i## cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[1] sda5[0]
      581595328 blocks [2/2] [UU]
        resync=DELAYED

md1 : active raid1 sdb4[1] sda4[0]
      41942976 blocks [2/2] [UU]
      [>....................]  resync =  1.6% (691456/41942976) finish=8.9min speed=76828K/sec

md0 : active raid1 sdb1[1] sda1[0]
      511936 blocks [2/2] [UU]

unused devices: <none>
###i##
</console>

Now, that's awesome, isn't it? :) Even more awesome is the fact that you can immediately start using your shiny new RAID. It will finish its sync in the background while you make changes to its filesystem.
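
You can also query a single array for a more detailed status report at any time:

<console>
###i## mdadm --detail /dev/md0
</console>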

== Encrypting the raid-1 ==

<console>
###i## cryptsetup -c aes-xts-plain luksFormat /dev/md0
###i## cryptsetup luksOpen /dev/md0 dmcrypt_root
</console>
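
The LVM layer then goes on top of the opened LUKS device, <code>/dev/mapper/dmcrypt_root</code>, as in the [[Rootfs_over_encrypted_lvm|Rootfs over encrypted lvm]] guide. A minimal sketch (the volume group name <code>vg</code> and the volume sizes are only examples):

<console>
###i## pvcreate /dev/mapper/dmcrypt_root
###i## vgcreate vg /dev/mapper/dmcrypt_root
###i## lvcreate -L 2G -n swap vg
###i## lvcreate -L 20G -n root vg
</console>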

== Initramfs setup and configuration ==

=== No initramfs ===

To activate the raid-1 during boot without an initramfs, perform:

echo "Activating RAID device."
if [ ! -e '/etc/mdadm.conf' ]
then
	echo "DEVICE /dev/sda[0-9] /dev/sdb[0-9] /dev/md[0-9]" >> /etc/mdadm.conf
	mdadm --examine --scan --config=/etc/mdadm.conf  >> /etc/mdadm.conf
	mdadm --assemble --scan
fi

=== Better-initramfs ===

Alternatively, use [https://bitbucket.org/piotrkarbowski/better-initramfs better-initramfs], which supports assembling raid-1 with mdadm:

<console>
###i## git clone https://bitbucket.org/piotrkarbowski/better-initramfs.git
</console>

This script is well documented on its Bitbucket overview page (which displays the documentation from README.rst).

== Grub2 configuration ==

Do not forget to tell the initramfs where the encrypted root lives, via the kernel command line:

<pre>enc_root=/dev/md0</pre>
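
With Funtoo's boot-update, this means adding it to the <code>params</code> line of the relevant section in <code>/etc/boot.conf</code>. A sketch, assuming better-initramfs's <code>mdadm</code>, <code>luks</code> and <code>lvm</code> boot options and an LVM root volume named <code>vg/root</code> (adjust names to your layout):

<pre>
"Funtoo Linux" {
	kernel bzImage[-v]
	initrd /initramfs.cpio.gz
	params += mdadm luks lvm enc_root=/dev/md0 root=/dev/mapper/vg-root
}
</pre>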

Also, in the [[Funtoo_Linux_Installation#Running_grub-install_and_boot-update|non-RAID Funtoo Linux Installation doc]], GRUB is only installed into the MBR of <tt>/dev/sda</tt>. When using RAID1 for <tt>/boot</tt>, you most probably want to be able to boot your system from the second device in this array if the first one dies. For that to work, a bootloader must be present in the MBR of every array member disk. Because of this, apply the command shown in the main doc to install GRUB to the MBR of each of your array member disks, instead of only the first one. If, for example, you are building a RAID from <tt>/dev/sda1</tt> and <tt>/dev/sdb1</tt> to contain <tt>/boot</tt>, you will have to issue the following:

<console>
(chroot) # ##i##grub-install --no-floppy /dev/sda
(chroot) # ##i##grub-install --no-floppy /dev/sdb
(chroot) # ##i##boot-update
</console>

= Additional links =
* [http://en.gentoo-wiki.com/wiki/RAID/Software RAID/Software]
* [http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide]

[[Category:HOWTO]]