No luck with glusterfs

Recently, we’ve been experimenting with glusterfs as an alternative network storage backing our VM hosting. It looked like a very promising candidate to replace our current iSCSI stack: scale-out with decent performance, mostly self-configuring, self-replicating, self-healing. And all of this out-of-the-box without complex setup. In contrast, the conventional architecture with a complex layering of iSCSI targets, DRBD, and Linux-HA glued together with a pack of shell scripts looks rather 90’s.

We played with glusterfs for a while. Setting up and configuring the software went quite smoothly compared to the traditional stuff. But after some stress testing in a replicated scenario, we ran into severe problems.

Synchronisation

On the storage, each virtual machine is essentially represented by one big image file. Such an image can grow to several hundred gigabytes. This is fine as long as the replicated file servers are in sync. But once one of them goes offline and comes back online, the versions of the image may differ and the self-healing algorithm is triggered. Due to glusterfs’ architecture, this happens entirely on the filesystem client (i.e., the KVM host). After re-connecting a file server, all of the VM’s I/O is paused until self-healing is complete. The live VM is stuck for anywhere between several seconds and more than a minute. A considerable portion of our hosting cluster could freeze for minutes. This is clearly unacceptable. Re-connecting a previously disconnected file server would be a risky operation: quite the opposite of what replication is good for.
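A probe along the following lines, run inside a guest, makes the freeze easy to observe (an illustrative sketch rather than our actual test harness; the file path and the one-second threshold are arbitrary). It keeps fsync()ing small writes and reports any gap between successive writes, which shows up immediately when self-healing blocks access to the image file:

    #!/usr/bin/env python3
    # Crude I/O stall probe: run inside a guest whose disk image lives on the
    # replicated glusterfs volume, then reconnect a file server and watch the
    # output. Path and threshold are arbitrary values for illustration.
    import os
    import time

    PROBE_FILE = "/var/tmp/io-probe"   # any file on the VM's virtual disk
    STALL_THRESHOLD = 1.0              # report pauses longer than one second

    last = time.monotonic()
    with open(PROBE_FILE, "wb", buffering=0) as f:
        while True:
            f.write(b"x" * 4096)
            os.fsync(f.fileno())       # push the write down to the image file
            now = time.monotonic()
            if now - last > STALL_THRESHOLD:
                print(f"I/O stalled for {now - last:.1f} seconds")
            last = now
            time.sleep(0.1)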

No global state

Another feature of glusterfs is that replication is handled entirely on the filesystem client and not on the server. This leads to an orthogonal and modular design which has a lot of advantages. But it makes it hard to determine when a file server can be disconnected safely: given that self-healing takes a considerable amount of time, we cannot be sure whether some self-heal operation is still in progress. Disconnecting the file server that holds the newer copy of a VM image before the other one has caught up, however, would render the VM unusable. Unfortunately, there seems to be no easy way to query a glusterfs file server for active self-healing operations. This makes disconnecting a file server a risky operation, too.
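The closest thing to a workaround we can think of is to inspect the replication changelog that glusterfs keeps as extended attributes (trusted.afr.*) on the brick copies of a file; non-zero pending counters suggest a heal is still outstanding. The sketch below only illustrates the idea: the brick-side path is made up, the exact attribute layout varies between glusterfs versions, and it looks at a single brick at a single moment.

    #!/usr/bin/env python3
    # Rough check for outstanding self-heal on a brick copy of a VM image.
    # Run as root directly on the file server (trusted.* xattrs are not
    # readable by ordinary users). The brick-side path is hypothetical and
    # the trusted.afr.* attribute layout differs between glusterfs versions.
    import os

    IMAGE = "/export/brick0/images/vm042.img"   # hypothetical brick path

    def pending_afr_xattrs(path):
        """Return the trusted.afr.* xattrs whose pending counters are non-zero."""
        pending = {}
        for name in os.listxattr(path):
            if name.startswith("trusted.afr."):
                value = os.getxattr(path, name)
                if any(value):                  # any non-zero byte => pending ops
                    pending[name] = value.hex()
        return pending

    dirty = pending_afr_xattrs(IMAGE)
    if dirty:
        print("self-heal probably still pending:", dirty)
    else:
        print("no pending changelog entries found on this brick")

Even where this works, it says nothing about a heal another client is about to start, so it is a rough hint rather than something that makes disconnecting safe.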

Good for its intended use

In summary, we learned that glusterfs’ architecture is a good fit for the use case it was originally designed for: an NFS replacement with lots of small files. But for our scenario, where continuously running processes need uninterrupted access to a few large image files, glusterfs does not seem to be the best fit.

So we will stick to the good ol’ iSCSI stack for now. Perhaps Ceph or Sheepdog will become viable alternatives in the future once they stabilise.