GlusterFS


GlusterFS Distribution

Below, we create a distributed volume using two bricks (XFS filesystems). A distributed volume spreads I/O and files across its bricks: each file is stored whole on a single brick, chosen by hashing the file name.
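
Each brick is simply a directory on a local filesystem on one of the peers. As a minimal sketch (the device name /dev/sdb1 is an assumption standing in for your hardware), a brick could be prepared on each node like this:

root # mkfs.xfs -i size=512 /dev/sdb1
root # install -d /data
root # mount /dev/sdb1 /data
root # install -d /data/dist

With a brick directory in place on each peer, build the trusted storage pool by probing the other nodes from rhs-lab1: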

root # gluster peer status
No peers present
root # gluster peer probe rhs-lab2
Probe successful
root # gluster peer status
Number of Peers: 1

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)
root # gluster peer probe rhs-lab3
Probe successful
root # gluster peer probe rhs-lab4
Probe successful
root # gluster peer status
Number of Peers: 3

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)

Hostname: rhs-lab3
Uuid: cbcd508e-5f80-4224-91df-fd5f8e12915d
State: Peer in Cluster (Connected)

Hostname: rhs-lab4
Uuid: a02f68d8-88af-4b79-92d8-1057dd85af45
State: Peer in Cluster (Connected)
root # gluster volume create dist rhs-lab1:/data/dist rhs-lab2:/data/dist
Creation of volume dist has been successful. Please start the volume to access data.
root # gluster volume info
 
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
root # gluster volume start dist
Starting volume dist has been successful
root # gluster volume info
 
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
root # install -d /mnt/dist
root # mount -t glusterfs rhs-lab1:/dist /mnt/dist
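
To see distribution in action, create a handful of files through the mount and then list each brick directly. Every file lands whole on exactly one brick, so each node shows only the subset of files that hashed to it. A sketch with arbitrary file names:

root # touch /mnt/dist/file{1..6}
root # ls /data/dist
root # ssh rhs-lab2 ls /data/dist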

GlusterFS Mirroring

Below, we mirror data between two bricks (XFS filesystems). This provides redundancy and can also improve read performance, since reads can be served from either copy.

root # gluster volume create mirror replica 2 rhs-lab1:/data/mirror rhs-lab2:/data/mirror
Creation of volume mirror has been successful. Please start the volume to access data.
root # gluster volume start mirror
Starting volume mirror has been successful
root # gluster volume info mirror
 
Volume Name: mirror
Type: Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
root # install -d /mnt/mirror
root # mount -t glusterfs rhs-lab1:/mirror /mnt/mirror
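
To verify replication, write a file through the mount and check that an identical copy exists on both bricks. A minimal sketch:

root # echo hello > /mnt/mirror/test
root # cat /data/mirror/test
hello
root # ssh rhs-lab2 cat /data/mirror/test
hello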

Growing GlusterFS

Now we will add a new brick to our distributed volume and then run a rebalance (optional) so that existing files are redistributed evenly. The rebalance moves some existing files onto our new brick on rhs-lab3:

root # gluster volume add-brick dist rhs-lab3:/data/dist
Add Brick successful
root # gluster volume rebalance dist start
Starting rebalance on volume dist has been successful

After the rebalance, files are again spread evenly across all three bricks, with roughly one third of them now living on rhs-lab3.

root # gluster volume rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            0            0      completed
                                rhs-lab4                0            0            0            0      completed
                                rhs-lab3                0            0            0            0      completed
                                rhs-lab2                0            0            0            0      completed
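
Once every node reports completed, you can confirm the result by counting the files on each brick directly; after a rebalance the counts should be roughly equal. A sketch:

root # ls /data/dist | wc -l
root # ssh rhs-lab3 ls /data/dist | wc -l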

Growing a GlusterFS Replicated Volume

You can grow a replicated volume by adding bricks in multiples of the replica count (pairs, for replica 2). The volume then becomes Distributed-Replicate, with files distributed across mirrored pairs:

root # gluster volume add-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror
Add Brick successful
root # gluster volume info mirror
 
Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror

GlusterFS Brick Migration

Here is how you migrate data off an existing brick and onto a new one:

root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist start
replace-brick started successfully
root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist status
Number of files migrated = 0        Migration complete 
root # gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit
replace-brick commit successful
root # gluster volume info
 
Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
Brick3: rhs-lab4:/data/dist
 
Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror

Removing a Brick

Here's how you remove a brick. The add-brick and remove-brick commands ensure that you don't break mirrors, so on a replicated volume you must remove bricks in complete replica sets (both bricks of a mirrored pair).

root # gluster volume remove-brick dist rhs-lab4:/data/dist start
Remove Brick start successful
root # gluster volume remove-brick dist rhs-lab4:/data/dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            0            0    not started
                                rhs-lab3                0            0            0            0    not started
                                rhs-lab2                0            0            0            0    not started
                                rhs-lab4                0            0            0            0      completed

root # gluster volume remove-brick dist rhs-lab4:/data/dist commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
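
After the commit, the brick no longer appears in the volume layout, which you can confirm with:

root # gluster volume info dist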

Geo-replication

Geo-replication asynchronously copies a volume to a remote site over SSH, typically for disaster recovery. At the local GlusterFS site, create and mount a master volume:

root # gluster volume create georep rhs-lab1:/data/georep
Creation of volume georep has been successful. Please start the volume to access data.
root # gluster volume start georep
Starting volume georep has been successful
root # gluster volume info georep
 
Volume Name: georep
Type: Distribute
Volume ID: 001bc914-74ad-48e6-846a-1767a5b2cb58
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/georep
root # mkdir /mnt/georep
root # mount -t glusterfs rhs-lab1:/georep /mnt/georep
root # cd /mnt/georep/
root # ls
root # df -h .
Filesystem            Size  Used Avail Use% Mounted on
rhs-lab1:/georep      5.1G   33M  5.0G   1% /mnt/georep

At the remote site, set up a georep-dr volume:

root # gluster volume create georep-dr rhs-lab4:/data/georep-dr
root # gluster volume start georep-dr

Back at the local site, confirm that no session exists yet, then start geo-replication to the remote slave volume:

root # gluster volume geo-replication georep status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
root # gluster volume geo-replication georep ssh://rhs-lab4::georep-dr start
Starting geo-replication session between georep & ssh://rhs-lab4::georep-dr has been successful
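
Once the session is running, the same status command should show it; the exact columns vary by GlusterFS version:

root # gluster volume geo-replication georep ssh://rhs-lab4::georep-dr status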

GlusterFS Security

By default, any host on your LAN can probe your peers and access your volumes. You can lock GlusterFS down with iptables by restricting its TCP ports to trusted hosts.
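
As a hedged sketch: glusterd listens on TCP port 24007, and each brick listens on its own TCP port (24009 and up on older releases; 49152 and up on GlusterFS 3.4 and later, so check your version). The 10.0.1.0/24 subnet below is an assumption standing in for your trusted LAN:

root # iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 24007:24008 -j ACCEPT
root # iptables -A INPUT -p tcp -s 10.0.1.0/24 --dport 49152:49200 -j ACCEPT
root # iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
root # iptables -A INPUT -p tcp --dport 49152:49200 -j DROP

You can also restrict which clients may mount a particular volume with the auth.allow volume option:

root # gluster volume set dist auth.allow 10.0.1.*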


