We have been running Gluster in our production environment for about 1 month now, so I figured I would post some details about our setup and our experiences with Gluster and OpenSolaris so far.
Overview:
Currently we have a two-node Gluster cluster, using the replicate translator to provide RAID-1 style mirroring of the filesystem. The initial requirements were to provide a solution that would house our digital media archive (audio, video, etc), scale up to around 150TB, support exports such as CIFS and NFS, and be extremely stable.
It was decided that we would use ZFS as our underlying filesystem, due to its data integrity features as well as its support for filesystem snapshots, both of which were also very high on the requirement list for this project.
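For anyone unfamiliar with ZFS snapshots, the workflow we had in mind is about as simple as it gets (the dataset name below is just an example):

user@server1:/# zfs snapshot datapool/media@2011-02-01
user@server1:/# zfs list -t snapshot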
Although FreeBSD has had ZFS support for quite some time, there were some known issues (with 32 vs 64 bit inode numbers) at the time of my research that prevented us from going that route.
Just this week KQstor released their native ZFS kernel module for Linux, which as of this latest release is supposed to fully support extended filesystem attributes, which are required for Gluster to function properly. At the time of my research, however, that software was still in beta and did not support extended attributes, so we were unable to consider or test this configuration either.
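For reference, Gluster stores its metadata in trusted.* extended attributes on the backend export directories, which is why the backend filesystem has to support them. On a Linux backend, a quick sanity check with the attr tools (the path here is just an example) looks like:

user1@host1:/# touch /data/path/xattr-test
user1@host1:/# setfattr -n user.test -v works /data/path/xattr-test
user1@host1:/# getfattr -n user.test /data/path/xattr-test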
The choice was then made to go with ZFS on OpenSolaris (2008.11 specifically, due to the 3ware drivers available at the time). Currently there is no FUSE support under Solaris, so while Gluster runs without a problem on the server side, if you choose a Solaris variant for your storage nodes you will need a client/head node running an OS that does support FUSE.
The latest version of Gluster to be fully supported on the Solaris platform is version 3.0.5. 3.1.x introduced some nice new features; however, to get them we will have to either port our storage nodes to Linux or wait until the folks at Gluster decide to release 3.1.x for Solaris (which I am not sure will happen anytime soon).
Here is the current hardware/software configuration:
- CPU: 2 x Intel Xeon E5410 @ 2.33GHz
- RAM: 32 GB DDR2 DIMMs
- Hard drives: 48 x 2TB Western Digital SATA II
- RAID controllers: 2 x 3ware 9650SE-24M8 PCIe
- OpenSolaris version 2008.11
- GlusterFS version 3.0.5
- Samba version 3.2.5 (Gluster1)
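Since CIFS was one of the export requirements, Samba on the head node simply shares out the Gluster client mount. A trimmed example of the kind of share definition this boils down to (share name and options are illustrative):

[media]
    path = /mnt/glusterfs
    read only = no
    browseable = yes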
ZFS Setup:
Setup for the two OS drives was pretty straightforward: we created a two-disk mirrored rpool, which allows us to lose a disk in the root pool and still be able to boot the system.
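If you install to a single disk first, attaching the second disk to mirror the root pool afterwards is a one-liner (device names here are just examples), followed by installing grub on the new disk so it stays bootable:

user@server1:/# zpool attach rpool c5t0d0s0 c5t1d0s0
user@server1:/# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0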
Since we have 48 disks to work with for our data pool, we created a total of 6 raidz2 vdevs, each consisting of 7 physical disks. This setup gives us 75TB of space (53TB usable) per node, while leaving 6 disks available to use as spares.
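For illustration, a pool laid out this way is a single zpool create listing seven disks per raidz2 vdev; only the first two of the six vdevs are shown here, and the device names are examples:

user@server1:/# zpool create datapool \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0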
user@server1:/# zpool list
NAME       SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
rpool     1.81T  19.6G  1.79T    1%  ONLINE  -
datapool  75.8T  9.01T  66.7T   11%  ONLINE  -
Gluster setup:
Creating the Gluster .vol configuration files is easily done via the glusterfs-volgen command:
user1@host1:/# glusterfs-volgen --name cluster01 --raid 1 server1.hostname.com:/data/path server2.hostname.com:/data/path
That command produces two volume files: ‘glusterfsd.vol’, used on the server side, and ‘glusterfs.vol’, used on the client.
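To give a feel for what volgen generates, here is a trimmed, illustrative excerpt of the replicated client volfile (the real file also stacks performance translators such as write-behind and io-cache on top of the mirror):

volume server1.hostname.com-1
  type protocol/client
  option transport-type tcp
  option remote-host server1.hostname.com
  option remote-subvolume brick1
end-volume

volume server2.hostname.com-1
  type protocol/client
  option transport-type tcp
  option remote-host server2.hostname.com
  option remote-subvolume brick1
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes server1.hostname.com-1 server2.hostname.com-1
end-volume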
Starting glusterfsd on the server side is straightforward:
user1@host1:/# /usr/glusterfs/sbin/glusterfsd
Starting gluster on the client side is straightforward as well:
user1@host2:/# /usr/glusterfs/sbin/glusterfs --volfile=/usr/glusterfs/etc/glusterfs/glusterfs.vol /mnt/glusterfs/
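Once mounted, a quick df on the client confirms the volume is there; with replicate across two equal bricks it should report roughly the size of a single replica:

user1@host2:/# df -h /mnt/glusterfs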
In a later blog post I plan to talk more about issues that we have encountered running this specific setup in a production environment.