This document starts with an Introduction to NAS, followed by my Requirements, then a Spreadsheet of price/performance of some units, then finally which one I chose (and the runners-up).
The TL;DR version is: get a 5-drive NAS with RAID-tuned drives and use RAID6 (instead of RAID5).
Why a NAS/RAID?
“Cloud” storage is too small, but rotating disks are too unreliable. After about 2 years, there is a 10% chance per year that a drive will fail, increasing each year. Drive failure is not really a question of “if” but “when”. Therefore, you need a “RAID” – an array of disks with some redundancy, so that corruption on a disk can easily be corrected, and failed disks can be replaced, all without losing data.
You can put a RAID inside your desktop computer or directly attached outside via USB, eSATA, or Thunderbolt (DAS or “direct attached storage”). This is the cheaper and faster option, but it’s also less flexible and requires more electrical power. A NAS appliance attaches via the network, uses a lot less electrical power, and can be shoved out of the way, such as in a closet somewhere.
A NAS is just a file-server running Linux with a simplified web-based management layer so that you’ll never see the underlying Linux (unless you really want to). Windows NAS appliances also exist, but for some reason they are in a different category. HP sells some nice “Windows Home Server 2011” boxes if you want a Windows platform.
What you are building with your NAS is a “home cloud”. Instead of a paltry few gigabytes connected at a megabit/second with the “external cloud”, your home cloud is a thousand times bigger and faster, storing many terabytes of data connected at gigabit speeds.
The first thing you want with your home cloud is “backup” for all your desktop and laptop devices. All the NAS boxes support this, even vendor-specific schemes like “Time Machine” for the Macintoshes.
The next thing you’ll want is “video/audio streaming”. All the boxes support DLNA for streaming to non-Apple devices (supported by almost all new TVs and gaming consoles), as well as iTunes DAAP streaming.
And then there is “mass storage” for all Linux distros that you BitTorrent. Geeks like me have a surprising number of weird mass storage needs, from “packet capture libraries” to “password dumps”.
Finally, something to keep in mind is “database”. PHP-on-MySQL should be considered a standard computing skill along with manipulating spreadsheets or word processing. A NAS/RAID performs poorly for heavy database loads, but is great for home use and for prototyping.
You can build your own NAS with any computer and Linux. But even if you have the necessary expertise, you may not want to. I have enough Linux systems at home demanding my attention; the reason I’m buying a NAS appliance is to avoid having yet another one.
With the rotating disks in the NAS, I’m converting all my laptops and desktops to SSDs (actually, SSD RAID0 on my desktop which is blindingly fast). After the transition, rotating disks will largely be gone or invisible inside the NAS.
Requirement: RAID6
If you are buying a NAS, you are probably planning on RAID1 or RAID5. I recommend RAID6 instead. The problem is Moore’s Law: while disk capacity has continuously improved, error rates have not.
RAID1 and RAID5 provide only a single redundant disk. When that disk fails (eventually all disks fail), there is a high chance of data corruption, either because there is an undetected sector failure on the remaining disks, or another disk fails at the same time (disks bought at the same time tend to fail near the same time).
Around the point we reached 1-terabyte disks, the odds of hitting a read error somewhere during a full pass over a drive became high enough that RAID5 became obsolete.
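To see why, here is a back-of-the-envelope sketch (an estimate under stated assumptions, not a measurement). Consumer drives are commonly spec’d at one unrecoverable read error (URE) per 10^14 bits read, and rebuilding a degraded 4-drive RAID5 of 2-terabyte drives means reading all 6 terabytes on the three surviving drives:

    # Rough odds of hitting an unrecoverable read error (URE) during a RAID5 rebuild.
    # Assumes the commonly quoted consumer-drive spec of 1 URE per 1e14 bits read;
    # real drives vary, so treat this as an order-of-magnitude estimate.
    URE_PER_BIT = 1e-14
    surviving_terabytes = 6                        # e.g. three surviving 2TB drives
    bits_to_read = surviving_terabytes * 1e12 * 8  # terabytes -> bits
    p_ure_during_rebuild = 1 - (1 - URE_PER_BIT) ** bits_to_read
    print(f"{p_ure_during_rebuild:.0%} chance the rebuild hits a URE")  # roughly 38%

With RAID5, a URE at that point means the rebuild fails and data is lost; with RAID6, the second set of parity covers it.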
RAID6 provides two disks of redundancy. When one disk fails, errors can still be corrected. You should still backup your most critical data (to blu-ray disks or to the cloud), but RAID6 is reliable enough for things like your automatic notebook backup.
This document assumes RAID6; I don’t list products that don’t support it. The problem with RAID6 is that the industry standard is still RAID5: half the products don’t support RAID6, and only the newer or higher-end products do.
Note that “RAID5 + hot-spare” is not the same as RAID6. The hot spare simply means the system can immediately start rebuilding after a disk failure, instead of waiting for you to notice the problem, buy a drive, and replace it. That solves the problem of users not noticing a failure and reduces the time spent running without redundancy (and thus the risk of a second, unrecoverable failure), but it’s not the same level of robustness as RAID6.
Requirement: Ease-of-use
This is my biggest requirement. The reason I’m going with a NAS rather than configuring my own system is because I don’t want to waste time having to maintain it.
But it’s also the hardest requirement to judge. Synology has the best support website (important to me) and a demo system accessible via the web with a very nice UI. But when I bought it and created my first array, I encountered enormous problems because a disk failed during the initial parity check. It’s not Synology’s fault that the disk failed (I bought the drives separately), but the way their system handled the failure cost me hours of effort trying to fix things.
Also, unlike some vendors, Synology doesn’t provide an automated background parity/health check, which is an important feature (although my usage patterns are likely to accomplish the same thing anyway).
Thus, while Synology seemed like it would be the best in terms of ease-of-use before I bought it, it has performed in a mediocre manner since.
All the NAS vendors use roughly the same underlying hardware and software, so there shouldn’t be too much difference in ease-of-use. When a disk fails and you need to rebuild the array, all vendors will be roughly equal, I believe. If the rebuild fails, it’s more likely that the user did something wrong (like chose RAID5 instead of RAID6) than the box being at fault.
Requirement: The vendor
Browsing the net (such as benchmark review sites and NewEgg listings), I came down to a short list of the following vendors to consider:
- Synology
- QNAP
- Thecus
- Buffalo
- Netgear (ReadyNAS)
- D-Link
- Iomega/EMC
- LaCie
The first three companies are pure NAS vendors; they don’t sell any other products. I’d pick one of them as my first choice, simply because they specialize in this.
Buffalo, D-Link, and Netgear sell a huge range of home networking products (e.g. WiFi access points), with NAS being yet another product on their list. But while NAS is a relatively small part of their business, they still sell a ton of it in absolute terms, probably as much as the dedicated NAS vendors. You might call Netgear a “dedicated” NAS vendor, because even though NAS isn’t their primary business, they put as much support effort into their NAS as the dedicated vendors do.
Iomega and LaCie are smaller vendors with more specialized products. LaCie has the cheapest 5-drive box at $330 (diskless).
A company called “Drobo” is a major home RAID vendor, but they sell products that connect directly to the desktop via USB, eSATA, or Thunderbolt, so they weren’t on my list.
Cisco is a “major” vendor, of course, but they just sell rebadged QNAP products at higher prices.
Buffalo seems like a major vendor, but they tend to sell systems pre-populated with drives (a plus or a minus, as described below), and most of the systems listed on NewEgg don’t support RAID6. Thus, I don’t have much information on Buffalo.
Iomega is a division of EMC, the best-known enterprise storage vendor. They claim their NAS is built with EMC technology. In reality, I doubt it has any technology from EMC. Instead, it’s probably the other way around: features are deliberately limited so the NAS doesn’t cannibalize sales of the higher-margin enterprise products.
At the end of this post, I list various models from these vendors with data on their various features, plus the price on NewEgg.
Requirement: Where to buy?
Your choice of NAS is probably going to be influenced by your favorite hi-tech vendor.
I buy mostly from NewEgg. Their prices are always within 5% of the lowest prices I can find on the Internet, and the amount I would save isn’t worth hunting around for. Also, they have the best site I can find for comparing/contrasting products, although this feature sucks for comparing NAS units. I couldn’t search on criteria like “RAID6”. They did have “number of drive bays”, but it was too inaccurate to be useful.
They also have a painless RMA process, which is important to me: when buying 8 drives, chances are good that one of them will be DOA (which was indeed the case).
I don’t mean to promote NewEgg as the best vendor, only one that I find adequate.
Requirement: 2 or 3 terabyte drives
Right now, the sweet spot is 3-terabyte drives. Here are prices for some typical drives that work well in a NAS:
- $110 for a 1-terabyte drive ($110/terabyte)
- $130 for a 2-terabyte drive ($65/terabyte)
- $180 for a 3-terabyte drive ($60/terabyte)
- $330 for a 4-terabyte drive ($83/terabyte)
But the overhead is roughly $120/drive due to the additional NAS hardware, so this changes the equation:
- $230 for a 1-terabyte drive ($230/terabyte)
- $250 for a 2-terabyte drive ($125/terabyte)
- $300 for a 3-terabyte drive ($100/terabyte)
- $450 for a 4-terabyte drive ($113/terabyte)
RAID6 consumes two drives’ worth of capacity for redundancy, regardless of how many drives are in the array. That means the more drives in the array, the smaller the relative overhead of RAID6. A larger array of smaller drives can therefore be more cost effective. A 5-drive array of 2-terabyte drives delivers 6 terabytes of effective storage for $1250 (by the rough numbers above), which is about the same storage and price as a 4-drive array of 3-terabyte drives ($1200 for 6 terabytes). Keep this option in mind when considering the precise system you want to buy.
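To make that comparison concrete, here is a small calculation using the rough prices above ($120 of NAS overhead per bay plus the drive price); the 8-drive line roughly matches the system I ended up buying:

    # Cost per usable terabyte of a few RAID6 configurations, using the rough
    # prices above; RAID6 loses two drives' worth of capacity to redundancy.
    drive_price = {1: 110, 2: 130, 3: 180, 4: 330}  # dollars per drive
    bay_overhead = 120                              # rough NAS cost per drive bay

    def raid6_cost(num_drives, tb_per_drive):
        usable_tb = (num_drives - 2) * tb_per_drive
        total = num_drives * (drive_price[tb_per_drive] + bay_overhead)
        return usable_tb, total

    for drives, size in [(5, 2), (4, 3), (8, 3)]:
        usable, total = raid6_cost(drives, size)
        print(f"{drives} x {size}TB: {usable}TB usable for ${total} (${total / usable:.0f}/TB)")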
Ultimately, it looks like 3-terabyte drives are the most cost effective, with 2-terabyte drives as an option. Avoid 1-terabyte or 4-terabyte drives. (These numbers are for July 2012, and are likely to change by the time you read this).
Requirement: NAS-optimized drives
Not all drives work well in a RAID, even some of those that a NAS vendor certifies and sticks in their own boxes.
The problem is that some drives can take a long time responding to a read request (“read error recovery”). When a sector is corrupted, the drive will spend many seconds (often a minute) re-reading that sector over and over again, recalibrating the drive head in the meantime.
This is both unnecessary and bad for RAID. When the drive takes too long, RAID arrays often assume the drive has failed, and kick it out of the array. Since RAID already has redundant information, it doesn’t need such error recovery anyway.
If you read NAS forums, here’s what happens. A disk sector goes bad, but nobody notices for a year, because that sector is never read. Then another drive in a RAID5 configuration fails and is replaced with a good drive. The RAID array then goes into a “rebuild” step, where it reads all the information on the existing drives to build the redundant information on the new drive. During this rebuild, the bad sector is encountered, and the drive spends a minute re-reading it over and over trying to recover the data. This causes the RAID controller to think that drive has failed too, and since RAID5 has no second level of redundancy, the entire array gets shut down, screwing the user.
Western Digital “Green” drives have the worst reputation for this, possibly because they spin at a low speed to conserve power, increasing the delay caused by read errors. Therefore, Western Digital has come out with their “Red” drives. These are essentially the same as a “Green” drive (also spinning slowly to conserve power), but reconfigured to spend less time recovering from read errors. Complete sector failures are thus more common, but in a RAID, that’s a good thing: it means the sector gets remapped before problems get worse, and the redundant information in the RAID is used to reconstruct the sector anyway.
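As an aside, this recovery timer is exposed on many drives as a setting called SCT Error Recovery Control (ERC, which WD brands as TLER). Here is a minimal sketch, assuming a Linux box with smartmontools installed and root access, that reports what each SATA drive is currently configured to do; drives that don’t support the command will say so in the output:

    # List the SCT Error Recovery Control (ERC / "TLER") timers of each SATA drive
    # by shelling out to smartctl from the smartmontools package.
    import glob
    import subprocess

    for dev in sorted(glob.glob("/dev/sd?")):
        result = subprocess.run(["smartctl", "-l", "scterc", dev],
                                capture_output=True, text=True)
        print(f"=== {dev} ===")
        print(result.stdout.strip())

    # On drives that support it, "smartctl -l scterc,70,70 /dev/sdX" caps both read
    # and write recovery at 7 seconds (values are in tenths of a second); on many
    # drives the setting does not survive a power cycle.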
Western Digital also claims the “Red” drives produce less vibration, which is important when sticking a lot of them in a RAID array.
I’m not an expert in drive technology, so I don’t know how important all this actually is. Right now NewEgg is offering a “deal” on the Western Digital “Green” 2-terabyte drive for $100, while charging $130 for the equivalent “Red” drive. So I’m paying a premium for something that may turn out to make little difference. But for me, I want the least amount of hassle.
If price is more of a concern, then the better strategy is to find a site like NewEgg, TigerDirect, or Buy.com that is having a sale on hard drives, and buy those. You are at the mercy of the quality of those drives, but it’ll save a lot of money.
Note that you should avoid “enterprise drives”. They can cost 10 times as much, but are in fact no more reliable than cheap desktop drives. RAID means “redundant array of INEXPENSIVE drives”, so you should follow that dictum.
Links:
http://www.pcper.com/reviews/Storage/Western-Digital-Red-3TB-SATA-SOHO-NAS-Drive-Full-Review
Requirement: with drives or without?
Some NAS products come with drives, some come bare without any drives (requiring you to buy them separately), and some come partway, with some drive bays empty.
In general, most vendors mark up the drives, because the drives have to go through a longer supply chain. Some vendors mark up the drives absurdly, selling to the dumbest of customers, those who want the least amount of work and would be daunted by having to mount a drive in a tray. Even though my goal is to pay more for savings in effort, even I can’t stomach this.
But you’ll often find special deals where the combined product is cheaper. That is especially true of Buffalo, which appears to have some trick for buying drives cheaply that the other vendors can’t duplicate. More importantly, there will often be sales where Buffalo or the retailer is trying to clear inventory, giving you exceptionally good deals on the price.
In summary: buying a diskless system and the drives yourself is usually cheaper than a pre-populated system, but sometimes populated systems can be cheaper still, especially if you catch them on sale. Likewise, look for hard-drive sales.
Requirement: UPS support
You need to spend at least $50 on a UPS (uninterruptible power supply) along with the NAS. Unexpected power outages and power surges will corrupt data. Since the entire point of getting a RAID is data reliability, these power outages are a bad thing.
The sort of UPS you are looking for is one with a USB port to connect to the NAS. This allows the UPS to tell the NAS that power is about to fail when it’s been running too long on battery, allowing the NAS to shutdown without corrupting data. There are many available that meet this requirement starting at $50.
You can plug multiple computers into the same UPS. Connect the USB port to one of them, which then notifies the other devices over Ethernet. Several NAS boxes support this feature, maybe all of them.
Some of the Thecus boxes include a “mini-UPS”.
I bought the cheapest “CyberPower” UPS from NewEgg that was listed on Synology’s website as compatible. It cost $45. It’s a small, light UPS, and will last only about 1000 seconds (roughly 15 minutes, according to what Synology reports) before running out of power, but the Synology box will cleanly shut down before the battery gets that low, without corrupting data.
Requirement: Speed
For large file transfers, even the cheapest boxes can sustain about 400-mbps, which is about as fast as USB 2.0. The more expensive boxes can transfer at 1.6-gbps using Ethernet link aggregation. A single drive in your desktop does about 1-gbps.
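To put those numbers in perspective, here is a rough calculation (ignoring protocol overhead and the small-file problem described below) of how long a bulk 500-gigabyte copy would take at each sustained speed:

    # Hours to move a given amount of data at various sustained link speeds.
    # Rough numbers only: real transfers lose time to protocol overhead and small files.
    def hours_to_copy(gigabytes, megabits_per_second):
        megabits = gigabytes * 1000 * 8
        return megabits / megabits_per_second / 3600

    for label, mbps in [("cheap NAS (400 mbps)", 400),
                        ("single local drive (1 gbps)", 1000),
                        ("link aggregation (1.6 gbps)", 1600)]:
        print(f"{label}: {hours_to_copy(500, mbps):.1f} hours for 500 GB")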
In the spreadsheet at the end of this post, I try to list the various speeds claimed either by the vendor or from benchmarks reported on the net. Most of these are RAID5 benchmarks, which should closely match the RAID6 performance.
For me, getting “link aggregation” to work was important. My desktop, the switch, and the NAS all support “IEEE 802.3ad LACP”. Note that some devices support other variants, like “trunking”, that may not be capable of this. Also, I had to string yet another Ethernet cable from my office to my server room through the attic to get this to work. But in the end, for sustained transfers, this made the NAS actually faster than the hard drive inside the computer.
However, for small files, performance is going to suck. I noticed that while doing backups. I’ve done unwise things like download the Mozilla source tree, which has a bajillion small files that take forever to copy.
The Netgear ReadyNAS has two Ethernet connections but, for some reason, does not support combining them into a single link. That’s my main reason for not looking more closely at the Netgear boxes.
Requirement: Power consumption
A home NAS aims for the lowest possible power consumption. Every watt the device draws continuously equates to about 8.76 kilowatt-hours per year, which (depending on where you live) comes to about $1 per year in electricity costs. If a NAS uses 50 watts, that’s roughly $50 a year in electricity. For that reason, home NAS products use ARM processors (like those found in your iPhone) or Atom processors (Intel’s ultra-low-power x86 line).
There are different levels of power usage, depending upon the state of the NAS:
- 0.5 watt: Off, but plugged in, draining a tiny amount of electricity through parasitic resistance. All “off” electronic products have parasitic resistance if not unplugged.
- 1 watt: Off, but can be woken with “wake-on-LAN”. A useless feature in my opinion, but they all support it.
- 10 to 30 watts: Disk drives turned off because they are inactive, but the NAS itself is turned on. It’ll spin up the disks as soon as you start accessing them.
- 20-50 watts: Disk drives spinning, but not much data is being read/written
- 30-70 watts: Heavy disk activity.
The cheapest 4-drive NAS boxes use ARM processors and will consume 10 watts while the disks are idle. A large 8-drive NAS using an Atom processor will consume about 30 watts while the disks are idle.
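As a sanity check on the dollar-per-watt rule of thumb, here is the arithmetic, assuming roughly $0.12 per kilowatt-hour (your local rate will differ):

    # Annual electricity cost of a device drawing a constant number of watts.
    def annual_cost(watts, dollars_per_kwh=0.12):
        kwh_per_year = watts * 24 * 365 / 1000.0  # about 8.76 kWh per year per watt
        return kwh_per_year * dollars_per_kwh

    print(f"10W idle ARM NAS:   ${annual_cost(10):.0f}/year")  # about $11
    print(f"30W idle Atom NAS:  ${annual_cost(30):.0f}/year")  # about $32
    print(f"60W heavy activity: ${annual_cost(60):.0f}/year")  # about $63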
In my spreadsheet I document the power usage of the various products, with the inactive number being when the drives are asleep, and the active number being when there is heavy disk activity.
Requirement: Number of drives
NAS boxes come in sizes with 4, 5, 6, and 8 drive bays.
Well, they actually come in smaller and larger sizes, but I didn’t consider them. Smaller units don’t support RAID6; larger systems are “enterprise” class systems that cost too much and provide much more storage than I need.
I overspent and got an 8-drive system, but I think the “sweet spot” that most people would be happiest with is a 5-drive system. LaCie has what appears to be a nice 5-drive unit that supports RAID6 and link aggregation for only $330, and I think it would be a good starting point against which to compare all the other boxes.
Requirement: File system expansion
All the vendors promise that you can easily expand their systems by adding more drives (if there is a drive bay free), or by replacing existing disks with larger ones. This is not an inherent feature of RAID itself, but something that has been part of the Linux kernel for several years. I’m not precisely sure how well this works, such as what happens when a disk fails during expansion.
Some of these boxes also have eSATA ports, allowing you to expand the RAID that way. Synology even provides external boxes with 5 drive slots, and allows you to connect two of them to most of their units. Thus, your 5-drive NAS can magically expand to a 15-drive NAS. On the other hand, these boxes cost as much as a full NAS themselves, and they add a lot more risk: if the expansion box fails, it takes the rest of the drives down with it. (Synology provides a special cable that locks the external box in place, making that failure less likely.)
While this is a “nice-to-have”, I wouldn’t place too much importance on it.
Requirement: Moore’s Law and depreciation
Don’t “future proof” your NAS. The value of your purchase drops in half every year due to Moore’s Law. The NAS boxes next year will be faster, and the hard drives bigger. The top-of-the-line box you buy today will be a boat anchor in 3 years.
The biggest thing coming in the next few years is 10-gbps Ethernet. Sure, you’ll be able to plug larger drives into your current NAS and expand your array, but you won’t want to, because it’ll be too slow.
There is also the “cloud”. Over time, more of your data will be out in the Internet cloud and less at home. Why BitTorrent video from the Internet when you can just stream it from Hulu, Amazon, or Netflix? It’ll still take many years for this to happen. Your Internet connection is still only 10-mbps (instead of the 1-gbps of your local network), and free cloud services only give you 5 gigabytes of storage (instead of 5 terabytes), so we have a ways to go. But in a decade, this will look very different.
I bought as much NAS as I think I’ll need for the next 4 years, or 18 terabytes. After that point, I plan on replacing it with a new NAS. Amortizing my $2500 across 50 months comes to $50/month for that storage, assuming no hard drive fails. A more typical home purchase of an $800 system over 5 years amortizes to about $14/month.
Requirement: miscellaneous
There’s a ton more things to discuss about home NAS, such as integrating with DLNA/iTunes for streaming video, encryption, setting up dynamic DNS and UPNP to allow access through your firewall, integrating with video cameras, integration with virtual machines, backup software (like TimeMachine support), CIFS/SMB/NFS/AFP, and so on.
However, all these devices appear to be roughly equivalent, as they all use similar hardware/software.
Some of these features might be missing. For example, video camera integration requires video transcoding from motion-JPEG to MP4, which requires floating-point, which doesn’t exist on some ARM processors. But none of these things were of much interest to me, so I didn’t research them. I leave that up to you.
When you decide upon a product, do check online to make sure that it does indeed support all the features you want.
Spreadsheet results
Here is a spreadsheet of the price/performance. I've spent an hour trying to figure out a way to copy/paste it into Blogger, such as round-tripping it through Google Docs, but no matter what I try, it not only comes out really badly in this document but also somehow screws up the rest of the formatting. Therefore, I have to paste it as a picture. You'll have to click on it to expand it.
Also look at this website for performance benchmarks:
http://www.smallnetbuilder.com/nas/nas-charts/bar/41-naspt-r5-filecopy-to
Conclusion
I got the Synology 1812+, mostly because it was the cheapest 8-drive system, it has the best website (positively aggressive in support options), and it provided a demo of their user interface, which is awesome. It’s not a perfect system; I’ve had problems with it (such as dealing with a bad drive in the initial RAID build), but it’s adequate.
I think a good starting point for you would be the “LaCie Big5” from NewEgg for only $330. I’m not promoting it as the best unit, only as a good starting point for comparing/contrasting other possible purchases.
Comments
Or you could make the smart choice: build a PC (or use an old one) and put something like FreeNAS on it. You get ZFS and encryption.
You even went with a NAS that uses an Atom, which is widely available.
ZFS would be my preferred choice, especially because of things like the write hole and health checking, and also because of the lack of ECC memory on Atom CPUs. A box based on a notebook CPU should provide low power near that of an Atom, but be a lot faster for things like encryption (with the built-in AES instructions).
But that would take several days of setting up the hardware and learning how to do ZFS. I've got enough boxes demanding my time for sysadmin work; I don't need yet another.
Something else troubling about the WD Greens that I found out the hard way is that they have insanely aggressive head-park timers, and there's an old DOS firmware tool that allows you to remove most of these sorts of "green" features.
A worst-case scenario (which happened at times on my FBSD/geli/graid3 NAS before I made the switch to ZFS) would go something like d1(write->park), d2(wake->write->park), d3(wake->write->park), literally racking up several years' worth of head-park cycles in the course of a month or two before failing.
Drobo also makes the DroboFS, which is network-attached storage. I have had the Drobo gen2 and now the DroboFS. It works nicely for my purposes: set it and forget it. It's the Ron Popeil of NAS devices.
Setting up FreeNAS takes less time than writing this article.
Drobo is shit that does not work. Don't fall for their marketing, or be prepared to lose your data.
Loved the entry. Been doing home NAS for many years now. I prefer Thecus. The sweet spot for me has been 4.5TB of RAID5 and XFS support on the N4100Pro. TwonkyMedia also creates a great "bridge" between my Windows Media Center(PC) + Xbox360 + various Apple devices. People also need to consider a backup for their NAS. 2TB drives are easy to get now, but in 5 years = dinosaur bones.
Be careful about "simple" solutions - like reading all the files on a RAID periodically to force error correction. They can make things worse. It's hard to visualize the effects of really rare events; you need to model them. People have (there was a nice paper a couple of years back, but I don't have a ready reference).
ReplyDeleteThe firmware in a modern disk can have 1-2 million lines of code. Do you think it's bug-free? One *observed* failure mode is: You write data to block A, but it's actually written to block B on the disk. Now think about what this does to a RAID. Block A and block B are parts of redundancy groups RA and RB, which (say) are parts of files FA and FB. After the bad write, both RA and RB have an incorrect block in them. But neither block reports an error when read - at the level of the individual block, it contains valid data. A RAID that recovers based entirely on disk read errors will happily deliver incorrect data in this situation. Even a RAID-6, which is supposed to survive two errors, is at a loss.
One can prevent this by adding more redundancy - e.g., writing the block number into the block itself, or doing higher-level checksums. ZFS does this kind of thing; so do enterprise-level RAID boxes. Historically, the algorithms have been ad hoc and the modeling and simulation showed that *all* the published approaches had error paths that would lead to silent data corruption - and those that tried to "heal" bad data could be induced to "heal" it into a permanently bad value that would then be completely undetectable. As I recall, ZFS was among the better solutions, though it had failures, too. Perhaps now that the analysis techniques are out there, better solutions have been developed - but I haven't seen anyone pushing them, so maybe not.
Anyway ... take RAID storage for what it's worth: a (much) more reliable alternative to individual disks, but not a *perfect* alternative. Personally, I use my RAID box as a target for backups. So live data is redundantly stored on the original system and on the RAID box. Old, deleted files are only on the RAID box and so are less redundant - but also less likely to be needed. (And, actually, I make a separate backup of the most important files to an ioSafe box, which isn't RAID - in fact, I just had the single disk in it go bad; ioSafe replaced it - but it *is* protected against various physical issues like house fires and theft. The backups to the ioSafe are done using CrashPlan, so they have their own internal redundancy.) I may add some cloud backup to the mix, too. Layering, redundancy, disjoint failure modes - ultimately, they are the only way you get robustness.
I'm very happy with FreeNAS 8.2 running on an HP Proliant Microserver. The CPU (AMD Athlon II Neo N36L) supports ECC RAM. I also added an Intel PCI-E NIC which noticeably sped up file transfers.