Had I been using RAID5, I would've lost all 18 terabytes of data. But since I was using RAID6 which has TWO redundant drives, everything was safe. The NAS was able to both rebuild the first drive AND remap the bad sectors of the second drive.
I point this out because RAID5 and RAID1 are still by far the most popular RAID configurations, and they are a Really Bad Idea. Error rates per terabyte remain the same, but we fit more terabytes per drive every year. That means the error rate PER DRIVE increases every year. Capacities are now so high that it has become probable that one drive will hit a read error while rebuilding the array after another drive's failure. That means you need at least 2 disks of redundancy in a RAID array, meaning RAID6.
In other words, RAID5 isn't sufficient redundancy anymore, you need RAID6.
Oh, and by the way, not only is RAID5 long dead, RAID6 will be dead in another 5 years as drives reach the point where THREE redundant drives are necessary.
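For the curious, the argument above can be sketched as a back-of-the-envelope calculation. This assumes the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read; the numbers are illustrative, not measured:

```python
import math

# Rough sketch: probability of hitting at least one unrecoverable
# read error (URE) while re-reading an entire array to rebuild a
# degraded RAID5. Assumes the spec-sheet rate of 1 URE per 1e14 bits.
URE_PER_BIT = 1e-14   # assumed consumer-drive spec, not a measurement
ARRAY_TB = 18         # data that must be re-read during the rebuild

bits_read = ARRAY_TB * 1e12 * 8
# (1 - p)^n is well approximated by e^(-p*n) for tiny p
p_clean_rebuild = math.exp(-URE_PER_BIT * bits_read)
print(f"P(at least one URE during rebuild) ~ {1 - p_clean_rebuild:.0%}")
```

At these capacities the chance of a clean RAID5 rebuild is well under 50%, which is exactly why a second redundant drive matters.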
@erratarob welcome to 2009 queue.acm.org/detail.cfm?id=…
— Ignacio Marambio (@imarambiocatan) October 2, 2012
What happens if you stagger the MTBF rates on each drive, so that a new replacement comes way before the EOL of a disk?
Sometimes you do RAID5 for weird reasons. After 17 years scaling ISPs/NSPs and enterprise IT, I can tell you such reasons do exist.
All that being said, I usually love your logic and points. Personally, I would run or suggest RAID6 in most cases.
Also worth noting that the filesystem is as important as drive geometry and parity.
I use ZFS with RAID-Z2 -- geographically dispersed clones with snapshots to cloud storage... And I still don't feel safe.
Yeah, but currently there are no plug-and-play RAID-Z2 systems. It'd be a lot better, but right now I can't afford the overhead of adding yet another system needing sysadmin attention.
As for "cloud storage", is there something that gives me 18-terabytes for $50/month?
Just going to RAID-6, while it solves some problems, is by no means the end of the story. There are more subtle failure modes. http://www.cs.berkeley.edu/~krioukov/parityLost.pdf is a great paper that looks at how actual disk problems interact with a variety of redundant disk implementations. Using a model checker, they are able to find faults in *every* implementation they examine (where a fault is a sequence of failures and operations - typically quite short - that causes data to be lost irretrievably).
This is *much* more complicated than it looks - and than the vendors of RAID solutions want to discuss.
"As for "cloud storage", is there something that gives me 18-terabytes for $50/month?"
You also have the little problem of bandwidth. Even if you had a 25 Mbps symmetrical fiber connection, that's barely over 3 MB/sec of transfer rate, not to mention nasty latency for high transaction rates. I believe your current NAS setup delivers around 250 MB/sec, which is over 80 times faster than a 25 Mbps Internet connection.
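To put the bandwidth point in concrete terms, here is the arithmetic for pushing the full array to cloud storage over such a link (ignoring protocol overhead, so this is a best case):

```python
# Sketch: time to upload the full 18 TB array over a 25 Mbps link.
# Ignores protocol overhead and latency, so real-world is worse.
ARRAY_TB = 18
LINK_MBPS = 25

seconds = ARRAY_TB * 1e12 * 8 / (LINK_MBPS * 1e6)
print(f"{LINK_MBPS * 1e6 / 8 / 1e6:.3f} MB/sec link")
print(f"Full upload: about {seconds / 86400:.0f} days")
```

Roughly two months for the initial sync, before any incremental changes.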
It's a good idea to run regular scrubbing on RAID arrays, to catch errors when you have all drives online, i.e. before it's too late.
I'm not sure consumer NAS allow that, but it's a feature of Linux's md, and ZFS.
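For reference, a minimal sketch of how a scrub is triggered on both systems mentioned above (the device name md0 and pool name tank are just examples):

```shell
# Linux md: start a scrub ("check") of the array /dev/md0
echo check > /sys/block/md0/md/sync_action
# watch its progress
cat /proc/mdstat

# ZFS: scrub the pool and check the result
zpool scrub tank
zpool status tank
```

Running this on a schedule (e.g. monthly via cron) is what catches latent sector errors while all drives are still healthy.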
Rob: Check out Nexenta/OpenIndiana
"Had I been using RAID5, I would've lost all 18 terabytes of data."
Really? Wouldn't you just have lost the data in the chunks that contained the errors? If they happened to contain filesystem metadata, the real losses would of course have been greater.