{{Article
|Author=Drobbins
}}
== GlusterFS Distribution ==


Below, we create a distributed volume using two bricks (XFS filesystems). This spreads I/O and files across the two bricks.
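
Each brick is just a directory on a local XFS filesystem on each host. As a rough sketch of how a brick directory such as <tt>/data/dist</tt> might be prepared on each node -- the device name <tt>/dev/sdb1</tt> and the inode size are examples, not taken from this article:

<console>
# ##i##mkfs.xfs -i size=512 /dev/sdb1
# ##i##install -d /data
# ##i##mount /dev/sdb1 /data
# ##i##install -d /data/dist
</console>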


<console>
# ##i##gluster peer status
No peers present
# ##i##gluster peer probe rhs-lab2
Probe successful
# ##i##gluster peer status
Number of Peers: 1

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)
# ##i##gluster peer probe rhs-lab3
Probe successful
# ##i##gluster peer probe rhs-lab4
Probe successful
# ##i##gluster peer status
Number of Peers: 3

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)

Hostname: rhs-lab3
Uuid: cbcd508e-5f80-4224-91df-fd5f8e12915d
State: Peer in Cluster (Connected)

Hostname: rhs-lab4
Uuid: a02f68d8-88af-4b79-92d8-1057dd85af45
State: Peer in Cluster (Connected)
# ##i##gluster volume create dist rhs-lab1:/data/dist rhs-lab2:/data/dist
Creation of volume dist has been successful. Please start the volume to access data.
</console>

<console>
# ##i##gluster volume info
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

<console>
# ##i##gluster volume start dist
Starting volume dist has been successful
</console>

<console>
# ##i##gluster volume info
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

<console>
# ##i##install -d /mnt/dist
# ##i##mount -t glusterfs rhs-lab1:/dist /mnt/dist
</console>
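
To see distribution in action, create a few files on the mounted volume and then look at each brick directly; roughly half of the files should land under <tt>/data/dist</tt> on each node. A quick check, using the example paths from above:

<console>
# ##i##touch /mnt/dist/file{1..10}
# ##i##ls /data/dist
</console>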


== GlusterFS Mirroring ==


Below, we mirror data between two bricks (XFS volumes). This provides redundancy and can also improve read performance.


<console>
# ##i##gluster volume create mirror replica 2 rhs-lab1:/data/mirror rhs-lab2:/data/mirror
Creation of volume mirror has been successful. Please start the volume to access data.
# ##i##gluster volume start mirror
Starting volume mirror has been successful
# ##i##gluster volume info mirror
Volume Name: mirror
Type: Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
# ##i##install -d /mnt/mirror
# ##i##mount -t glusterfs rhs-lab1:/mirror /mnt/mirror
</console>
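
With a replicated volume, a file written to the mount point should appear on both bricks. A quick sanity check, again assuming the example paths:

<console>
# ##i##touch /mnt/mirror/hello
# ##i##ls /data/mirror
</console>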


== Growing GlusterFS ==


Now we will add a new brick to our distributed filesystem, then run an (optional) rebalance so that files are spread evenly. The rebalance moves some existing files onto our new brick on rhs-lab3:


<console>
# ##i##gluster volume add-brick dist rhs-lab3:/data/dist
Add Brick successful
# ##i##gluster volume rebalance dist start
Starting rebalance on volume dist has been successful
</console>


After the rebalance, our distributed GlusterFS filesystem is balanced across all three bricks, with roughly one third of the files now living on rhs-lab3.


<console>
# ##i##gluster volume rebalance dist status
                                    Node Rebalanced-files          size      scanned      failures        status
                              ---------      -----------  -----------  -----------  -----------  ------------
                              localhost                0            0            0            0      completed
                                rhs-lab4                0            0            0            0      completed
                                rhs-lab3                0            0            0            0      completed
                                rhs-lab2                0            0            0            0      completed
 
</console>
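
The status can be polled until every node reports <tt>completed</tt>; one convenient (entirely optional) way is to wrap the command with <tt>watch</tt>:

<console>
# ##i##watch -n 5 gluster volume rebalance dist status
</console>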


== Growing a GlusterFS Replicated Volume ==


You can grow a replicated volume by adding pairs of bricks:

<console>
# ##i##gluster volume add-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror
Add Brick successful
# ##i##gluster volume info mirror
Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>
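
Since the volume is now Distributed-Replicate, files written before the grow still live on the original replica pair; a rebalance (the same command used for the distributed volume above) spreads them across both pairs:

<console>
# ##i##gluster volume rebalance mirror start
</console>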


== GlusterFS Brick Migration ==


Here is how you migrate data off an existing brick and onto a new brick:


<console>
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist start
replace-brick started successfully
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist status
Number of files migrated = 0        Migration complete
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit
replace-brick commit successful
# ##i##gluster volume info
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
Brick3: rhs-lab4:/data/dist

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>
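
If a migration needs to be cancelled before the commit, the replace-brick workflow of this era also provided an abort action (same command form as start/status/commit):

<console>
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist abort
</console>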


== Removing a Brick ==


Here's how to remove a brick. The add-brick and remove-brick commands ensure that you don't break mirrors, so if you are working with a replicated volume you will need to remove bricks in mirrored pairs.


<console>
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist start
Remove Brick start successful
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist status
                                    Node Rebalanced-files          size      scanned      failures        status
                              ---------      -----------  -----------  -----------  -----------  ------------
                              localhost                0            0            0            0    not started
                                rhs-lab3                0            0            0            0    not started
                                rhs-lab2                0            0            0            0    not started
                                rhs-lab4                0            0            0            0      completed

# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
</console>
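
As noted above, for a replicated volume the bricks must be removed in pairs; with the four-brick <tt>mirror</tt> volume from earlier, that would look something like this (a sketch following the same start/status/commit cycle):

<console>
# ##i##gluster volume remove-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror start
</console>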


== Georeplication ==


At the local GlusterFS site:


<console>
# ##i##gluster volume create georep rhs-lab1:/data/georep
Creation of volume georep has been successful. Please start the volume to access data.
# ##i##gluster volume start georep
Starting volume georep has been successful
# ##i##gluster volume info georep
Volume Name: georep
Type: Distribute
Volume ID: 001bc914-74ad-48e6-846a-1767a5b2cb58
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/georep
# ##i##mkdir /mnt/georep
# ##i##mount -t glusterfs rhs-lab1:/georep /mnt/georep
# ##i##cd /mnt/georep/
# ##i##ls
# ##i##df -h .
Filesystem            Size  Used Avail Use% Mounted on
rhs-lab1:/georep      5.1G  33M  5.0G  1% /mnt/georep
</console>


At the remote site, set up a <tt>georep-dr</tt> volume:


<console>
# ##i##gluster volume create georep-dr rhs-lab4:/data/georep-dr
# ##i##gluster volume start georep-dr
</console>
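
Geo-replication runs over SSH, so the master node needs SSH access to the slave; with GlusterFS of this era that typically meant passwordless root SSH from master to slave. A minimal sketch, using the host names from this example:

<console>
# ##i##ssh-keygen
# ##i##ssh-copy-id root@rhs-lab4
</console>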


Local side:


<console>
# ##i##gluster volume geo-replication georep status
MASTER              SLAVE                                              STATUS   
--------------------------------------------------------------------------------
# ##i##gluster volume geo-replication georep ssh://rhs-lab4::georep-dr start
Starting geo-replication session between georep & ssh://rhs-lab4::georep-dr has been successful
</console>
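
Once the session is started, it can be checked with the same status command; the exact output format varies by GlusterFS version:

<console>
# ##i##gluster volume geo-replication georep ssh://rhs-lab4::georep-dr status
</console>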


== GlusterFS Security ==


Currently, any GlusterFS peer on your LAN can join your pool. Securing GlusterFS can be accomplished with <tt>iptables</tt> by blocking its TCP ports from untrusted hosts.
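
As a rough sketch (the <tt>10.0.0.0/24</tt> subnet and port range are examples, not from this article): glusterd management traffic uses TCP port 24007, and brick daemons listen on ports above that (24009 and up on older releases, 49152 and up on newer ones), so rules along these lines limit access to trusted hosts:

<console>
# ##i##iptables -A INPUT -p tcp --dport 24007:24008 -s 10.0.0.0/24 -j ACCEPT
# ##i##iptables -A INPUT -p tcp --dport 24009:24100 -s 10.0.0.0/24 -j ACCEPT
# ##i##iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
# ##i##iptables -A INPUT -p tcp --dport 24009:24100 -j DROP
</console>

GlusterFS also provides a volume-level <tt>auth.allow</tt> option (for example, <tt>gluster volume set dist auth.allow 10.0.0.*</tt>) that restricts which clients may mount a volume.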


[[Category:Filesystems]]
[[Category:Articles]]
{{PageNeedsUpdates}}
{{ArticleFooter}}
