== GlusterFS Distribution ==

Below, we create a distributed volume using two bricks (XFS filesystems). This spreads I/O and files across the two bricks.
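
This assumes that each node already has a brick filesystem in place, for example an XFS filesystem mounted at <tt>/data</tt>. A minimal sketch (the device name <tt>/dev/sdb1</tt> is an assumption and will vary per system):

<console>
# ##i##mkfs.xfs /dev/sdb1
# ##i##mount /dev/sdb1 /data
</console>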

First, assemble the trusted storage pool by probing the other nodes, then create the volume:

<console>
# ##i##gluster peer status
No peers present
# ##i##gluster peer probe rhs-lab2
Probe successful
# ##i##gluster peer status
Number of Peers: 1

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)
# ##i##gluster peer probe rhs-lab3
Probe successful
# ##i##gluster peer probe rhs-lab4
Probe successful
# ##i##gluster peer status
Number of Peers: 3

Hostname: rhs-lab2
Uuid: 6b6c9ffc-da79-4d24-8325-086d44869338
State: Peer in Cluster (Connected)

Hostname: rhs-lab3
Uuid: cbcd508e-5f80-4224-91df-fd5f8e12915d
State: Peer in Cluster (Connected)

Hostname: rhs-lab4
Uuid: a02f68d8-88af-4b79-92d8-1057dd85af45
State: Peer in Cluster (Connected)
# ##i##gluster volume create dist rhs-lab1:/data/dist rhs-lab2:/data/dist
Creation of volume dist has been successful. Please start the volume to access data.
</console>

Review the new volume's configuration:

<console>
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

Now start the volume:

<console>
# ##i##gluster volume start dist
Starting volume dist has been successful
</console>

The volume now shows <tt>Status: Started</tt>:

<console>
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
</console>

Create a mount point and mount the volume using the GlusterFS native client:

<console>
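# ##i##install -d /mnt/dist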
# ##i##mount -t glusterfs rhs-lab1:/dist /mnt/dist
</console>
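
To mount the volume at boot, an entry along these lines can be added to <tt>/etc/fstab</tt> (a sketch; <tt>_netdev</tt> delays the mount until networking is up):

<pre>rhs-lab1:/dist /mnt/dist glusterfs defaults,_netdev 0 0</pre>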

== GlusterFS Mirroring ==

Below, we mirror data between two bricks (XFS filesystems), which provides redundancy and can also improve read performance.

<console>
# ##i##gluster volume create mirror replica 2 rhs-lab1:/data/mirror rhs-lab2:/data/mirror
Creation of volume mirror has been successful. Please start the volume to access data.
# ##i##gluster volume start mirror
Starting volume mirror has been successful
# ##i##gluster volume info mirror

Volume Name: mirror
Type: Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
# ##i##install -d /mnt/mirror
# ##i##mount -t glusterfs rhs-lab1:/mirror /mnt/mirror
</console>
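
To verify replication, write a file through the mount and confirm that it shows up in <tt>/data/mirror</tt> on both rhs-lab1 and rhs-lab2 (a quick sanity check using the paths from this example):

<console>
# ##i##touch /mnt/mirror/hello
# ##i##ls /data/mirror
hello
</console>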

== Growing GlusterFS ==

Now we will add a new brick to our distributed filesystem, then run an optional rebalance to redistribute existing files, moving some of them onto the new brick on rhs-lab3:

<console>
# ##i##gluster volume add-brick dist rhs-lab3:/data/dist
Add Brick successful
# ##i##gluster volume rebalance dist start
Starting rebalance on volume dist has been successful
</console>

After the rebalance, our distributed GlusterFS filesystem will perform optimally, with roughly one third of the files moved to rhs-lab3:

<console>
# ##i##gluster volume rebalance dist status
     Node  Rebalanced-files         size      scanned     failures       status
---------  ----------------  -----------  -----------  -----------  -----------
localhost                 0            0            0            0    completed
 rhs-lab4                 0            0            0            0    completed
 rhs-lab3                 0            0            0            0    completed
 rhs-lab2                 0            0            0            0    completed
</console>

== Growing a GlusterFS Replicated Volume ==

You can grow a replicated volume by adding bricks in multiples of the replica count; with <tt>replica 2</tt>, that means pairs:

<console>
# ##i##gluster volume add-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror
Add Brick successful
# ##i##gluster volume info mirror

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>

Note that the volume type has changed from <tt>Replicate</tt> to <tt>Distributed-Replicate</tt>: files are now distributed across two mirrored pairs of bricks.

== GlusterFS Brick Migration ==

Here is how to migrate data off an existing brick and onto a new one:

<console>
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist start
replace-brick started successfully
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist status
Number of files migrated = 0        Migration complete
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit
replace-brick commit successful
# ##i##gluster volume info

Volume Name: dist
Type: Distribute
Volume ID: f9758871-20dc-4728-9576-a5bb5b24ca4f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/dist
Brick2: rhs-lab2:/data/dist
Brick3: rhs-lab4:/data/dist

Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 4edacef8-982c-46a9-be7e-29e34fa40f95
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/mirror
Brick2: rhs-lab2:/data/mirror
Brick3: rhs-lab3:/data/mirror
Brick4: rhs-lab4:/data/mirror
</console>
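
Note that newer GlusterFS releases have deprecated the <tt>start</tt>/<tt>status</tt> phases of <tt>replace-brick</tt> in favor of a single operation; on those versions the equivalent step would be something like:

<console>
# ##i##gluster volume replace-brick dist rhs-lab3:/data/dist rhs-lab4:/data/dist commit force
</console>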

== Removing a Brick ==

Here's how to remove a brick. The <tt>add-brick</tt> and <tt>remove-brick</tt> commands ensure that you don't break mirrors, so on a replicated volume you must remove all of the bricks in a replica set together (see the sketch after the example below).

<console>
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist start
Remove Brick start successful
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist status
     Node  Rebalanced-files         size      scanned     failures       status
---------  ----------------  -----------  -----------  -----------  -----------
localhost                 0            0            0            0  not started
 rhs-lab3                 0            0            0            0  not started
 rhs-lab2                 0            0            0            0  not started
 rhs-lab4                 0            0            0            0    completed
# ##i##gluster volume remove-brick dist rhs-lab4:/data/dist commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit successful
</console>
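
For a replicated volume such as <tt>mirror</tt>, list all bricks of one replica set in a single command, for example (a sketch, not run in this lab):

<console>
# ##i##gluster volume remove-brick mirror rhs-lab3:/data/mirror rhs-lab4:/data/mirror start
</console>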

== Georeplication ==

Geo-replication asynchronously replicates a master volume to a slave volume, typically at a remote site, over SSH. At the local GlusterFS site:

<console>
# ##i##gluster volume create georep rhs-lab1:/data/georep
Creation of volume georep has been successful. Please start the volume to access data.
# ##i##gluster volume start georep
Starting volume georep has been successful
# ##i##gluster volume info georep

Volume Name: georep
Type: Distribute
Volume ID: 001bc914-74ad-48e6-846a-1767a5b2cb58
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: rhs-lab1:/data/georep
# ##i##mkdir /mnt/georep
# ##i##mount -t glusterfs rhs-lab1:/georep /mnt/georep
# ##i##cd /mnt/georep/
# ##i##ls
# ##i##df -h .
Filesystem            Size  Used Avail Use% Mounted on
rhs-lab1:/georep      5.1G   33M  5.0G   1% /mnt/georep
</console>

At the remote site, set up a <tt>georep-dr</tt> volume:

<console>
# ##i##gluster volume create georep-dr rhs-lab4:/data/georep-dr
# ##i##gluster volume start georep-dr
</console>
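
Geo-replication transfers data over SSH, so the master needs passwordless SSH access to the slave before the session can start. The exact key location varies by release (some versions expect the key at <tt>/var/lib/glusterd/geo-replication/secret.pem</tt>); a generic sketch:

<console>
# ##i##ssh-keygen
# ##i##ssh-copy-id root@rhs-lab4
</console>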

Back on the local side, start the geo-replication session:

<console>
# ##i##gluster volume geo-replication georep status
MASTER               SLAVE                                              STATUS
--------------------------------------------------------------------------------
# ##i##gluster volume geo-replication georep ssh://rhs-lab4::georep-dr start
Starting geo-replication session between georep & ssh://rhs-lab4::georep-dr has been successful
</console>
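
After starting the session, you can re-run the status command to confirm that it is active:

<console>
# ##i##gluster volume geo-replication georep ssh://rhs-lab4::georep-dr status
</console>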

== GlusterFS Security ==

Currently, any GlusterFS peer on your LAN can join your storage pool and volumes. GlusterFS can be secured with <tt>iptables</tt> by restricting access to its TCP ports, as sketched below.
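
For example, a minimal sketch that only permits a trusted subnet (assumed here to be 10.0.0.0/24) to reach the GlusterFS management and brick ports; note that the brick port range depends on the GlusterFS version (24009 and up on older releases, 49152 and up on newer ones):

<console>
# ##i##iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 24007:24008 -j ACCEPT
# ##i##iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 49152:49251 -j ACCEPT
# ##i##iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
# ##i##iptables -A INPUT -p tcp --dport 49152:49251 -j DROP
</console>

You can also restrict which clients may mount a given volume with the <tt>auth.allow</tt> volume option, for example <tt>gluster volume set dist auth.allow 10.0.0.*</tt>.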

[[Category:Filesystems]]