{{Article
|Author=Drobbins
}}

== GlusterFS Distribution ==
  
Below, we create a distributed volume using two bricks (XFS filesystems). This spreads I/O and files across the two bricks.

<console>
# ##i##gluster peer status
No peers present
# ##i##gluster peer probe rhs-lab2
Probe successful
# ##i##gluster peer status
Number of Peers: 1

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)
# ##i##gluster peer probe rhs-lab3
Probe successful
# ##i##gluster peer probe rhs-lab4
Probe successful
# ##i##gluster peer status
Number of Peers: 3

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)

Hostname: rhs-lab3
Uuid: cbcd508e-5f80-4224-91df-fd5f8e12915d
State: Peer in Cluster (Connected)

Hostname: rhs-lab4
Uuid: a02f68d8-88af-4b79-92d8-1057dd85af45
State: Peer in Cluster (Connected)
# ##i##gluster volume create dist rhs-lab1:/data/dist rhs-lab2:/data/dist
Creation of volume dist has been successful. Please start the volume to access data.
</console>

<console>
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

The volume has been created, but not yet started. Start it now:

<console>
# ##i##gluster volume start dist
Starting volume dist has been successful
</console>

Volume info now shows a status of Started:

<console>
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

Now we can mount the distributed volume:

<console>
# ##i##mount -t glusterfs rhs-lab1:/dist /mnt/dist
</console>
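
As a quick sanity check (a minimal sketch, assuming each brick lives at <tt>/data/dist</tt> on its node), create a few files on the mounted volume, then list a brick directly. GlusterFS's elastic hashing spreads the files between the two bricks, so each brick holds only part of the set:

<console>
# ##i##touch /mnt/dist/file{1..4}
# ##i##ls /data/dist
file1  file3
</console>

The split shown is illustrative; actual placement depends on the hash of each filename, and the remaining files will appear under /data/dist on rhs-lab2.
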
== GlusterFS Mirroring ==
  
Below, we mirror data between two bricks (XFS filesystems). This provides redundancy and also improves read performance.

<console>
# ##i##gluster volume create mirror replica 2 rhs-lab1:/data/mirror rhs-lab2:/data/mirror
Creation of volume mirror has been successful. Please start the volume to access data.
# ##i##gluster volume start mirror
Starting volume mirror has been successful
# ##i##gluster volume info mirror

Volume Name: mirror
Type: Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
# ##i##install -d /mnt/mirror
# ##i##mount -t glusterfs rhs-lab1:/mirror /mnt/mirror
</console>
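
To confirm replication (a minimal sketch, assuming the brick paths above), write a file to the mounted volume; an identical copy should appear in <tt>/data/mirror</tt> on both rhs-lab1 and rhs-lab2:

<console>
# ##i##echo hello > /mnt/mirror/test.txt
# ##i##ls /data/mirror
test.txt
</console>
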
== Growing GlusterFS ==
  
Now we will add a new brick to our distributed filesystem. We will then run an optional rebalance, which redistributes some existing files onto our new brick on rhs-lab3:

<console>
# ##i##gluster volume add-brick dist rhs-lab3:/data/dist
Add Brick successful
# ##i##gluster volume rebalance dist start
Starting rebalance on volume dist has been successful
</console>

After the rebalance completes, our distributed GlusterFS filesystem will have optimal performance, with roughly one third of the existing files moved to rhs-lab3:

<console>
# ##i##gluster volume rebalance dist status
                                    Node Rebalanced-files          size      scanned      failures        status
                              ---------      -----------  -----------  -----------  -----------  ------------
                              localhost                0            0            0            0      completed
                                rhs-lab4                0            0            0            0      completed
                                rhs-lab3                0            0            0            0      completed
                                rhs-lab2                0            0            0            0      completed
</console>
== Growing a GlusterFS Replicated Volume ==

You can grow a replicated volume by adding pairs of bricks:

<console>
# ##i##gluster volume add-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror
Add Brick successful
# ##i##gluster volume info mirror

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>
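
Note that the volume's type has changed from Replicate to Distributed-Replicate: files are now distributed across two mirrored pairs. As with a plain distributed volume, an optional rebalance will spread existing files onto the new pair. A sketch, following the same pattern used for the <tt>dist</tt> volume above:

<console>
# ##i##gluster volume rebalance mirror start
</console>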
  
== GlusterFS Brick Migration ==

Here is how you migrate data off of an existing brick and onto a new brick:

<console>
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist start
replace-brick started successfully
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist status
Number of files migrated = 0        Migration complete
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit
replace-brick commit successful
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
Brick3: rhs-lab4:/data/dist

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>

== Removing a Brick ==
  
Here's how you remove a brick. The add-brick and remove-brick commands ensure that you don't break mirrors, so on a replicated volume you must remove all bricks of a mirrored set together.

<console>
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist start
Remove Brick start successful
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist status
                                    Node Rebalanced-files          size      scanned      failures        status
                              ---------      -----------  -----------  -----------  -----------  ------------
                              localhost                0            0            0            0    not started
                                rhs-lab3                0            0            0            0    not started
                                rhs-lab2                0            0            0            0    not started
                                rhs-lab4                0            0            0            0      completed

# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
</console>
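
On a replicated volume such as <tt>mirror</tt>, the same procedure applies to an entire replica pair at once. A hypothetical sketch (the exact syntax, including the <tt>replica</tt> count argument, varies by GlusterFS version):

<console>
# ##i##gluster volume remove-brick mirror replica 2 rhs-lab3:/data/mirror rhs-lab4:/data/mirror start
</console>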

== Georeplication ==

At the local GlusterFS site:

<console>
# ##i##gluster volume create georep rhs-lab1:/data/georep
Creation of volume georep has been successful. Please start the volume to access data.
# ##i##gluster volume start georep
Starting volume georep has been successful
# ##i##gluster volume info georep

Volume Name: georep
Type: Distribute
Volume ID: 001bc914-74ad-48e6-846a-1767a5b2cb58
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/georep
# ##i##mkdir /mnt/georep
# ##i##mount -t glusterfs rhs-lab1:/georep /mnt/georep
# ##i##cd /mnt/georep/
# ##i##ls
# ##i##df -h .
Filesystem            Size  Used Avail Use% Mounted on
rhs-lab1:/georep      5.1G   33M  5.0G   1% /mnt/georep
</console>

At the remote site, set up a <tt>georep-dr</tt> volume:
 
<console>
# ##i##gluster volume create georep-dr rhs-lab4:/data/georep-dr
# ##i##gluster volume start georep-dr
</console>

Back at the local site, start the geo-replication session:

<console>
# ##i##gluster volume geo-replication georep status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
# ##i##gluster volume geo-replication georep ssh://rhs-lab4::georep-dr start
Starting geo-replication session between georep & ssh://rhs-lab4::georep-dr has been successful
</console>
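
Once the session is established, the status command should list the slave (a sketch; exact columns and status values vary by GlusterFS version), and files written to <tt>/mnt/georep</tt> on the master will be replicated to the <tt>georep-dr</tt> volume at the remote site:

<console>
# ##i##gluster volume geo-replication georep status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
georep               ssh://rhs-lab4::georep-dr                          OK
</console>
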
== GlusterFS Security ==
  
Currently, any GlusterFS peer on your LAN can join your volume. You can secure GlusterFS with <tt>iptables</tt> by blocking its TCP ports from untrusted hosts.
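
For example, here is a minimal <tt>iptables</tt> sketch that accepts GlusterFS traffic only from a trusted subnet (assuming 10.0.0.0/24 is your trusted network; the brick port range varies by GlusterFS version, with 24009 and up on older releases and 49152 and up on newer ones):

<console>
# ##i##iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 24007:24008 -j ACCEPT
# ##i##iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 49152:49251 -j ACCEPT
# ##i##iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
# ##i##iptables -A INPUT -p tcp --dport 49152:49251 -j DROP
</console>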
  
[[Category:Filesystems]]
[[Category:Articles]]

{{PageNeedsUpdates}}

{{ArticleFooter}}
