I run a fairly large repository server which my children have nicknamed FatDrive. It is a 19-drive unit with a total capacity of 35TB. Not extremely large, but good enough. The box, like most of my Linux machines, runs Gentoo. It is just faster.
At first I was just running LVM with mdadm. But after a drive failure, I found out my data was not as well protected as it should have been. Standard filesystems like ext2/3/4 have no ability to detect or repair corruption in the data stored on the drives, and over time the data on drives silently decays (bit rot). So it is best to have a filesystem which scrubs the data, verifies it against checksums, and recovers what it can.
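To make the scrubbing idea concrete, here is a minimal sketch (my own illustration, not how any particular filesystem is implemented) of what a checksumming, self-healing filesystem does during a scrub: every block is stored with a checksum and redundant copies, and the scrub verifies each copy, repairing a rotted one from a good copy.

```python
import hashlib

def checksum(block: bytes) -> str:
    """Hash a data block; a scrubbing filesystem stores this alongside the data."""
    return hashlib.sha256(block).hexdigest()

# Write time: store the data, a mirrored copy, and the checksum.
block = b"family photos, tax records, and so on"
stored = {"copy_a": block, "copy_b": block, "sum": checksum(block)}

# Simulate silent bit rot: flip one bit in one copy on "disk".
rotted = bytearray(stored["copy_a"])
rotted[3] ^= 0x01
stored["copy_a"] = bytes(rotted)

# Scrub pass: verify every copy; rewrite any bad copy from a verified good one.
for name in ("copy_a", "copy_b"):
    if checksum(stored[name]) != stored["sum"]:
        good = "copy_b" if name == "copy_a" else "copy_a"
        assert checksum(stored[good]) == stored["sum"], "both copies rotted"
        stored[name] = stored[good]  # self-heal from the intact mirror

assert stored["copy_a"] == stored["copy_b"] == block  # data recovered
```

An ext2/3/4 volume keeps no per-block checksum of file data, so the flipped bit in this example would simply be returned to the application as if it were correct; that is the gap ZFS and BTRFS close.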
Back when I decided to switch, the only real option was ZFS.
ZFS is a very mature system designed for very large datacenter deployments. As such it has many capabilities and strengths. The only downside to ZFS is that it is not native to Linux: it is the creation of Sun and has been ported to Linux.
The port has its problems right now, but on the whole it is stable.
So, why move? Well, ZFS is designed around the use of very high-end drives. ZFS expects the drives to be highly reliable and to have very large write caches. The kind of drives I am using are in the original spirit of RAID (the "I" is for Inexpensive). In addition, running ZFS takes a very large amount of dedicated RAM. Without those two pieces, ZFS on very large arrays is very slow.
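To put a rough number on the RAM point: a common community rule of thumb (a guideline, not an official OpenZFS requirement, and the exact figure is my assumption here) is about 1 GiB of RAM per 1 TiB of pool capacity for the ARC cache to perform well, with far more needed if deduplication is enabled. For an array the size of mine:

```python
# Community rule of thumb, not an official requirement: ~1 GiB RAM per TiB of pool.
pool_tib = 35          # total pool capacity in TiB
gib_per_tib = 1        # assumed guideline ratio
ram_gib = pool_tib * gib_per_tib
print(ram_gib)         # 35 -> tens of GiB just to keep ZFS comfortable
```

That is a serious amount of memory to dedicate to a filesystem cache on a home server, which is a big part of why I started looking elsewhere.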
As of this writing, BTRFS is still beta. However, watching my drives sit at 100+ transactions per minute for 4+ days with no services running is a bit excessive. The ZFS array, when it is not beating up my system with writes, is VERY fast. The system is stable and keeps my data secure, but so does my backup server. What I need is the ability to write to my drives without it taking 6 to 10 times as long as writing to a standard array.
BTRFS may not be fully stable on the full RAID, but I do have backups, and I am looking at reconfiguring my RAID to be more distributed. As it turns out, I only need one 8TB drive, and the rest can be much stronger.