Recently, at combahton, we had to migrate several older GlusterFS nodes to new hardware, as they could no longer keep up with our current performance requirements.
We had basically two options:
- Physically move the drives node-by-node into the new servers
- Use GlusterFS's built-in features to move a brick (a brick is a node/directory combination that acts as a datastore)
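A brick's identity is exactly that host:path pair. As a rough illustration (the volume name and paths here are placeholders, not our actual setup), a two-node replicated volume is created from one brick per node:

```shell
# Hypothetical sketch: build a 2-way replicated volume from two bricks.
# "gvol0" and the brick paths are placeholder names.
gluster peer probe Node-B-IP
gluster volume create gvol0 replica 2 \
    Node-A-IP:/glusterfs Node-B-IP:/glusterfs
gluster volume start gvol0
```

Each `host:/path` argument is one brick; the replica count tells GlusterFS how many bricks hold a copy of each file.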
As physically moving the drives was not feasible and might have taken much longer, we decided to set up new servers with hardware RAID 10 and NVMe drives combined with bcache, to speed up the whole storage layer.
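For the storage layer itself, a bcache setup roughly follows the pattern below. The device names are placeholders for illustration; check your own disk layout before running anything like this:

```shell
# Hypothetical sketch: put an NVMe cache in front of the RAID 10 array.
# /dev/sdX (the RAID 10 backing device) and /dev/nvme0n1 are placeholders.
make-bcache -B /dev/sdX        # register the backing device
make-bcache -C /dev/nvme0n1    # register the cache device
# Attach the cache set to the backing device; the UUID comes from
# `bcache-super-show /dev/nvme0n1`.
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
mkfs.xfs /dev/bcache0          # filesystem for the GlusterFS brick
```

The resulting `/dev/bcache0` device is then mounted and used as the brick directory on the new node.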
Actually, moving data from Node A to B is pretty simple:
gluster volume replace-brick VOLUME-Name Node-A-IP:/glusterfs Node-B-IP:/glusterfs commit force
Please note: as far as I know, replace-brick is only supported for replicated and distributed-replicated volumes.
This triggers a replace-brick operation. Depending on the amount of data, the process will take some time. Once it has finished, you can safely remove the old peer Node A (using gluster peer detach) and power it off.
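Put together, the whole sequence might look like this, including a heal check before detaching the old peer (volume name and IPs are placeholders, as above):

```shell
# Trigger the replacement; the new brick is populated via self-heal.
gluster volume replace-brick VOLUME-Name Node-A-IP:/glusterfs Node-B-IP:/glusterfs commit force
# Watch self-heal progress; repeat until no entries remain for any brick.
gluster volume heal VOLUME-Name info
# Once healing is complete and Node A holds no more bricks, detach it.
gluster peer detach Node-A-IP
```

Waiting for `heal info` to report zero pending entries before detaching avoids removing a peer while copies are still incomplete.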