I recently went looking to see what sort of open source scalable filesystem projects existed. I wanted to put together a storage solution that would scale upwards of 100 TB using open source software and commodity hardware. During the search I became reacquainted with the GlusterFS project.
I had configured a 3 brick ‘unify’ cluster a while back with one of their 1.3.x builds, but I had not gotten a chance to play with it much after that.
After looking at the various other options out there, spending a considerable amount of time on IRC and reviewing their mailing list archives, I ended up settling on GlusterFS due to its seemingly simple design, management, and configuration, as well as its future roadmap goals.
As it turns out, a few days after I started my search the Gluster team released version 2.0 of their software. At this point I have set up a 5 brick ‘distribute’ (DHT) cluster on a few of our Proxmox (OpenVZ) servers.
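For anyone curious what that looks like, here is a rough sketch of the volume files involved. The hostnames, paths and brick names are placeholders of my own, and option names can vary a bit between GlusterFS releases, so treat this as an outline rather than a drop-in config.

    # /etc/glusterfs/server.vol -- one of these on each of the 5 bricks
    volume posix
      type storage/posix
      option directory /data/export      # local directory backing this brick
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option auth.addr.posix.allow *     # wide open; lock this down for real use
      subvolumes posix
    end-volume

    # /etc/glusterfs/client.vol -- client side, aggregating the bricks with DHT
    volume brick1
      type protocol/client
      option transport-type tcp
      option remote-host node1           # hypothetical hostname
      option remote-subvolume posix
    end-volume

    # ... brick2 through brick5 defined the same way, pointing at node2..node5 ...

    volume distribute
      type cluster/distribute
      subvolumes brick1 brick2 brick3 brick4 brick5
    end-volume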
I now have 5 independent 4 GB bricks presented to the client as a single 20 GB mountpoint. In this case I am currently exporting CIFS (Samba) on top of the gluster mountpoint. I found some very useful instructions on setup, etc. here. I plan to test NFS as well at some point on real physical hardware, due to current OpenVZ limitations on running NFS servers inside a container.
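The CIFS part is just a matter of mounting the client volfile and pointing Samba at it, roughly like the snippet below. The mountpoint and share name are my own placeholders, not anything official:

    # mount the cluster on the machine that will run Samba
    glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs

    # /etc/samba/smb.conf -- minimal share on top of the gluster mountpoint
    [gluster]
      path = /mnt/glusterfs
      read only = no
      browseable = yes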
One thing I was unable to get working at this point is having the glusterfs client and server running on the same machine. The same single client/server setup worked flawlessly on my Ubuntu laptop, so I suspect it is just an OpenVZ issue that I need to work out.
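If I had to guess, the problem is the fuse device not being available inside the container. Something along these lines on the OpenVZ hardware node is what I plan to try next; the container ID is just an example and I have not confirmed that this is the actual fix:

    # on the OpenVZ hardware node (container 101 is just an example)
    modprobe fuse
    vzctl set 101 --devnodes fuse:rw --save
    vzctl restart 101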
At Azouk (http://www.azouk.com) we are successfully using GlusterFS (1.4) and OpenVZ, and some of the nodes contain both a server and a client.