First build, 16TB raw

This is my first attempt at building a non-workstation hardware solution. I usually just purchase older enterprise-class commodity gear.

The plan is a low-ish power, quiet-ish, redundant 2U NAS box. Not sure if I will want to use this for VM image storage or not. I have a small 15k SCSI shelf that might be better suited.

The primary function will be streaming media and database storage, with the database app running in a VM.

OS will be FreeNAS, Openfiler, or Slackware. I need to try them out and see what is the best fit.

I have had this case for a few months:
NORCO RPC-2208

Just ordered these:
GIGABYTE GA-MA785GM-US2H

Diablotek PHD Series PHD450 450W ATX12V V2.2 Power Supply


Kingston HyperX 2GB 240-Pin DDR2 SDRAM DDR2 1066


AMD Athlon II X2 245 Regor 2.9GHz 2 x 1MB L2 Cache Socket AM3 65W Dual-Core Processor


Plan to get:

(8x) Western Digital AV-GP WD20EVDS 2TB, 32MB cache

Probably a used Dell PERC 6/i to run hardware RAID6

Network will be 4x gigabit bridged, possibly using the on-board NIC on a separate VLAN for management (rough sketch below).
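If "bridged" ends up meaning link aggregation (bonding) on a Linux install, a rough sketch of what I have in mind would be something like this (mode, interface names, and addressing are just placeholders; 802.3ad needs a switch that supports LACP):
Code:
# load the bonding driver in LACP (802.3ad) mode with link monitoring every 100 ms
modprobe bonding mode=802.3ad miimon=100
# enslave the four gigabit ports to bond0, then give it an address and bring it up
ifenslave bond0 eth0 eth1 eth2 eth3
ip addr add 192.168.1.10/24 dev bond0
ip link set bond0 up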




Any thoughts/comments or any glaring failure that I have set myself up for?
(I am a bit iffy on the power supply, but I have to try it before I can gauge from experience how much more or less PSU I actually need.)
 
Be careful... non-RAID-edition drives have a high chance of not playing nice with hardware RAID cards, because the timeout/TLER function is disabled in "cheap" consumer-intended drives :)
 
Are you certain that the standard ATX PSU will fit? Norco's specs call for a "2U PSU", which I understood to be a bit smaller. I actually don't know for sure...but it's worth checking.
 
An ATX PSU can fit in a 2U chassis - just - but if the chassis isn't designed for an ATX supply, it's obviously the wrong choice. Either way, I would switch the PSU for one with an 80mm rear fan rather than a bottom fan, since there will be less than 1/4" of clearance below the PSU for airflow, and preferably a higher-quality unit as well. From the photos it does look like a smaller-than-standard PSU opening. I just checked their website, and the dimensions specified for a 'standard 2U power supply' are considerably smaller than ATX. It won't fit. You need something like this.


I would stick to Intel chipsets if you're going to be running FreeBSD/Linux, but I guess that is personal preference more than anything else. They tend to 'just work'.
 
Forget about WD if you want to run hardware raid. Hitachi 2TB drives FTW.
 
Agree on making sure your drives are TLER enabled. Go Hitachi if need be. WD can take their enterprise drives and shove 'em.

Even if they are deathstars, you'll still be much better off.

Other than that... the PERC 6/i is the best RAID card in the plan IMHO, given its bad-ass performance and price, and it may be the only RAID card on the planet to properly support S3 sleep mode! It can even shut down and/or sleep during a rebuild.

now, a 24-port PERC 6/i would be sweet.
 
I would second odditory's Hitachi 2TB vote. Great drives from what I have seen thus far.
 
Agree on making sure your drives are TLER enabled. Go Hitachi if need be. WD can take their enterprise drives and shove 'em.

Even if they are deathstars, you'll still be much better off.

You tell 'em. So what if it's been more than eight years since Hitachi bought out the deathstar-era IBM drive business, and the only thing Deskstar drives produced since then have in common with the old IBM drives is the name. Better not trust 'em. /sarc

Other than that... the PERC 6/i is the best RAID card in the plan IMHO, given its bad-ass performance and price, and it may be the only RAID card on the planet to properly support S3 sleep mode!

LOL. Not by a longshot, but it might happen to be a fit for his budget.
 
yes... but aren't Hitachi drives power hogs?
Their speed is not that important here IMO.
 
Well WD Black 2TB is 8.2W idle, Hitachi 2TB is 7.5W at idle. So I don't think they are abnormally high on power consumption for a home server where a good portion of the time the disks are idle.
 
Have you given any thought to RAID-Z/2/3 + ZFS? It won't care what kind of drives you are using, and rebuilds would be much more efficient (if needed).
 
Why the recommendations for the 2TB Hitachi hard drives? Can you enable TLER on them (or whatever the Hitachi equivalent is)?
 
I'm using the Seagate 2TB LP drives and an Adaptec 5805 with great success in my server (check my post in the show-off thread for full specs).
 
Have you given any thought to RAID-Z/2/3 + ZFS? It won't care what kind of drives you are using, and rebuilds would be much more efficient (if needed).
I agree with this guy. It can be done reliably, and cheaply if needed. There are some great non-RAID cards out there; I'm personally using a UIO Supermicro (LSI) SAS card modded into a couple of my PCI-Express x8 slots. There's also supposed to be a new SAS2 card out that should work fine.

Though I'm considering those Hitachi drives to bump up my 8-drive WD RE4-GP OpenSolaris server (4-drive RAID-Z + 4-drive RAID-Z) to ~20TB.

Great price. Though I'll need to read up on some more specs and learn more about the warranty process (WD's Advance RMA process has been good to me).
 
The original PSU did not fit; I replaced it with:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817104162

Just needs another molex connector for the fans.

The box is racked, waiting for the PERC card and NIC to arrive (and a cable management arm)

I have started looking into Z, and it might be for me. This isn't necessarily a "budget build". The goal is to get good bang per buck: not to overbuild the box, but to make it able to handle business and be resilient to hardware failure(s). It will be on a UPS and have a battery on the RAID card.

I am a big fan of NetApp (I use their products at work) and their RAID-DP (RAID6 with variable parity location) implementation. Given the standard RAID6 performance hit, I will have to do a lot of testing, and will compare it to a RAID5 + hot spare configuration.

I do like the concept of RAID-Z2, and the ability to increase capacity by iteratively upgrading single drives.

Question on RAID-Z/Z2: I understand the concept, but for implementation, are the physical drives presented individually to the OS? So in my case, I would still be fine with a PERC card, but instead of RAID6 or 5+1 I wouldn't really need any of the RAID functionality on the card; I'd let BSD or OpenSolaris handle it.
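From what I've read so far, ZFS just wants the raw disks, so pool creation on OpenSolaris would look roughly like this (device names are made up):
Code:
# hand ZFS the whole disks; no hardware RAID volume in between
zpool create tank raidz2 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0
# check the resulting layout
zpool status tank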

Files will be accessed by windows and linux clients. 1080P video streaming to <= 4 clients, audio to <=2 clients, and VM image storage (possible private cloud) are the highest demands that will be put on this box.

PAUSE->READING MORE ABOUT ZFS.
LATER> This looks like the best fit for me; I can get 1TB drives to start, get some experience with it, and see if it fits my requirements before committing to the larger storage.

The disk in question is now down to WD, Hitachi, or Samsung. But with the change in direction to OS-managed redundancy etc., 'green' drives are an option now?
 
I have started looking into Z, and it might be for me. [...] I do like the concept of RAID-Z2, and the ability to increase capacity by iteratively upgrading single drives. [...] But with the change in direction to OS-managed redundancy, 'green' drives are an option now?

You should be careful here. ZFS is great technology, but it has some limitations. The biggest is that there is no OCE (online capacity expansion). That is, if I have a 7-drive RAID-Z2 pool and then want to add an 8th drive, I can't expand the existing array by one. I have to move all the data off the pool, destroy it, and recreate it with 8 drives. Or, add another set of drives as a second vdev and add that space to the pool.
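To make that concrete, here's roughly what you can and can't do (pool and device names are made up):
Code:
# there is no command to grow an existing raidz2 vdev by a single disk;
# the only way to expand the pool is to add another complete vdev, e.g.:
zpool add tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0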

Also, ZFS is not part of common Linux distributions. If you run OpenSolaris, you will find that a lot of drivers for popular consumer and even enterprise hardware are not there. You have to pick hardware really carefully.

And then there is the issue that after the Sun/Oracle merger, ZFS is almost certainly going to be unsupported, as Oracle has its own next-gen filesystem called btrfs, and I think that will be the filesystem of choice for the merged company going forward.

Given all that, if you still want to use it, go ahead, but do not be ignorant of the significant limitations it forces on an administrator.
 
I am really digging ZFS/RAIDZ

Even with the possibility of an unsupported future, I am changing direction for this box to Z.

I have a LSI SAS3442E-R
http://www.newegg.com/Product/Product.aspx?Item=N82E16816118101

on the way. It says 4x internal SFF-8484 SAS connectors, so I need a
http://www.newegg.com/Product/Produ...2200166&cm_re=SFF-8484-_-12-200-166-_-Product
cable. Now comes my unfamiliarity with this type of hardware. One of those cables will work for 4 drives. In this instance, what is the best route to 8 total internal SATA connectors? It looks like that cable will use the whole internal connector area (what I had assumed was 2 separate ports).
Is it just a different cable, or will I need a SAS expander to reach the goal of 8 internal SATA ports?

So far I'm failing at googling "sas expander"; I just get full disk enclosures as results.
 
If your end goal is just an 8 disk RAIDZ array, it will likely be cheaper to find a card with two 8087 ports and get a forward breakout cable to connect to your sata drives (instead of a sas->expander). Each 8087 can handle 4 drives. I have this setup (mdadm raid tho) with a HP 2680 card acting as a jbod. There are others as well. Just make sure they work in whatever you're going to use.
 
If your end goal is just an 8 disk RAIDZ array, it will likely be cheaper to find a card with two 8087 ports and get a forward breakout cable to connect to your sata drives (instead of a sas->expander). Each 8087 can handle 4 drives. I have this setup (mdadm raid tho) with a HP 2680 card acting as a jbod. There are others as well. Just make sure they work in whatever you're going to use.

Indeed, the goal for this box is just an 8-disk ZFS/RAID-Z array. The decision for that particular card was that it was named as a known-good card for OpenSolaris. And taking a step back, of course I won't need an expander for this setup; the card is fairly inexpensive (~$200), so I can just get another.

Next weekend I should have it all assembled for the first iteration: 4x 1TB drives. After proof of concept, I will likely get the second card and 4x 2TB drives (or 8x 2TB), and a multi-port NIC.

With the drives being so big, RAID-Z2 with a hot spare is where I am looking. After overhead, that should leave ~10TB usable (rough math below). ... Maybe no hot spare; this is not HA, and it's personal use only.
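My back-of-the-envelope math, assuming the hot spare comes out of the same 8 bays:
Code:
# 8 bays, minus 1 hot spare, minus 2 parity for RAID-Z2 -> 5 data disks x 2 TB each
echo $(( (8 - 1 - 2) * 2 ))   # prints 10 (TB, before TB-vs-TiB and filesystem overhead)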
 
Forget about WD if you want to run hardware raid. Hitachi 2TB drives FTW.

Hitachi drives are a disaster waiting to happen. No one uses them in the enterprise for a good reason. They are 5 platter drives with a less than stellar warranty. Buy an enterprise drive if you want RAID.
 
I love my RE2s.

They are no longer in RAID, but they are solid performers. They do, however, use much more power than a consumer-grade HDD.
 
Exactly. Pay the premium for enterprise class drives. There is a reason they cost more.
 
Hitachi drives are a disaster waiting to happen. No one uses them in the enterprise for a good reason. They are 5 platter drives with a less than stellar warranty. Buy an enterprise drive if you want RAID.
Funny how Google uses them...
 
Funny how I read somewhere here on [H] about another company besides Google that uses them as well... must have been in that thread about the Hitachi drives.

Also funny how nearly everyone I've spoken with who has bought them thinks they are kick-ass.

And now that Seagate and WD have disabled TLER....

Yeah... up until this year I refused to buy Hitachi; now more than half the drives in my WHS are Hitachi.

And I don't think, as a home user, I should have to spend $30-$50 more for the exact same drive with TLER, just because some company wants more money.
 
Where did you get the idea that the enterprise drives are the same as consumer drives?

* Warranty - Desktop drives usually have a 3-year (or less) warranty, while Enterprise drives usually have a 5-year warranty (IIRC, some of the Samsung drives have a 7-year warranty).
* MTTF/MTBF - This calculated value is an estimate of the number of hours until the drive fails, and is only valid for the warranty period. For Enterprise drives, it's usually higher than for their Desktop counterparts.
* Error Recovery - This is called various things by various manufacturers (TLER, ERC, CCTL). In a Desktop drive, the goal is to do everything possible to recover the data. In an Enterprise drive, the goal is to ALWAYS return SOMETHING within the timeout period; if the data can't be recovered within that time, let the RAID controller reconstruct it. See the Wikipedia article, and the smartctl sketch after this list.
* Bit Error Rate - This is a statistical measure of how often an error (either recoverable or unrecoverable) will occur. The rate for Enterprise drives is usually lower than for Desktop drives.
* Vibration resistance - When several disks are in a single chassis, the vibration from one or more of the drives can affect the others. Some Enterprise drives have firmware (with sensors?) that specifically reduces the effect of this vibration on the error rate. On those drives where this is done, it can sometimes be seen as a difference in the vibration specs of the drive.
* Target market segment - Desktop drives are designed for 5x9 operation with light use, while Enterprise drives are designed for 7x24 operation with heavy use. I believe this affects the motor & voice-coil drive electronics, possibly the overall disk heatsink capabilities, and possibly the operating temperature range of the drive.
* Request Queuing - Enterprise drives usually have improved queuing algorithms for I/O requests. This gives more IOPS as the load increases (something you're not likely to see in lightly loaded Desktop usage).
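For what it's worth, on drives that actually expose it, the error-recovery timeout can be inspected and adjusted through smartctl's SCT interface. A rough example (the device name is just a placeholder, and many newer consumer firmwares simply refuse the command):
Code:
# query the current SCT Error Recovery Control (ERC/TLER) timeouts
smartctl -l scterc /dev/sda
# set the read and write recovery limits to 7.0 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sda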
 
Where did you get the idea that the enterprise drives are the same as consumer drives?
[snip: the Warranty / MTTF-MTBF / Error Recovery / Bit Error Rate / Vibration resistance / Target market segment / Request Queuing list from the post above]


Keep in mind those are numbers put out by the same companies that make those drives; they could easily package the same model with a different label and sell it as two different models. In fact, with the SATA models, I bet that is all they do for the most part. SAS is another matter.

Without actually disassembling two drives, you can't prove the components are any better.

Furthermore, here on the forum MANY people have run "consumer" drives in 24x7 arrays without issue for quite a few years.

I think the whole reason they disabled TLER on all but the "enterprise" SATA drives is purely because they realised it's a good way to make an extra $30-$50 per drive.

And a warranty on a hard drive is only useful if it fails. It won't get your data back, and in most cases they'll send you a refurb, depending on the maker.
 
Even if the enterprise SATA drives have the same parts and assembly processes as the consumer drives, I think the tests the enterprise drives must pass are more stringent. So, essentially, the drives that test out with the best specs get binned into enterprise, and the rest get thrown into consumer.

But I think that there probably are differences in the parts and processes for the enterprise drives. For example, I would not be surprised if the enterprise drives use more expensive components in the bearings and other moving parts.
 
Sanity check: you are going to spend how much money on hardware to have 12TB (RAID6 = number of disks (8) minus 2 for redundancy = 6 x 2TB = 12TB) of storage for streaming media, when Netflix is $7.99 a month?


Assuming you think this through (and I don't think you have, because RAID6 isn't necessary; RAID5 would suffice if you want redundancy, and you would have 14TB free instead of 12TB): DON'T buy a cheap PSU. Buy one from a reputable brand and make sure it is 80 PLUS certified to save money on your power bill. You don't need that much processor either. I'd consider a low-power-draw processor since this setup will be on all the time; look at a 45-watt CPU. File sharing does not need cutting-edge hardware. We have banks with 1500 employees that are still using HP B2000 boxes as their NAS solution; the processors in those boxes are Pentium 3s and they are perfectly adequate. Your I/O needs will come nowhere near those levels.
 
Lots of good reading here for Opensolaris/ZFS home servers:

http://blogs.sun.com/constantin/entry/opensolaris_home_server_zfs_and


My original direction was just "box of storage that won't lose data"

Now I am more focused on getting a ZFS system running.

Even with backing up to tape, I am still hesitant to have less than two disks' worth of failure protection (either two parity, or one parity plus one hot spare), with the size of these disks in mind. Personally/professionally, I have never seen/used anything over 1TB, and I have found that in most cases erring on the side of caution will never come back to bite you. As they say, "Nobody ever got fired for buying Dell, Microsoft, and Cisco."

That said, the more I read about ZFS, the more comfortable I feel reducing the "overhead" disks. If I have a cold spare outside the chassis, one parity disk might do, but again, I prefer a worry-free, over-engineered solution, even if I need to spend a bit more initially. As I said, this isn't a "budget build", but more of a learning-experience stepping stone before I tackle the larger storage project. That doesn't mean money grows on trees, or that I want to be wasteful, but TCO and ROI should be taken into account.

Also: for my situation, this is going to host VM images and all my data that is spread around direct-attached storage on small desktops, not just streaming movies, etc.

I purchased 2 cheap 2.5" drives as a backup plan, but I am going to shoot for an internal mirrored root filesystem on a pair of flash drives. ZFS looks to be able to accommodate this configuration (rough sketch below).
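If it works the way I think it does, mirroring the root pool on OpenSolaris should mostly be a matter of attaching the second flash device and reinstalling the boot loader on it; a sketch, with guessed device names:
Code:
# attach the second flash device to the existing root pool, turning it into a two-way mirror
zpool attach rpool c0t0d0s0 c0t1d0s0
# put GRUB on the new half of the mirror so either device can boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0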
 
... but again, I prefer a worry-free, over-engineered solution, even if I need to spend a bit more initially.

Usually an engineered solution has some measurable standard. You seem to just make up stuff.

For example:

My performance metric is to have 14 days of TV recording space on my hard drives. I think that is about 100 hours. When I get near 100 hours of free space left, I either delete some recordings or buy more space.

12TB of media seems to be around $1000 for drives. About $100-200/year in lost income. Seems that the recommendation to use Netflix is cost competitive.
 
WD Green drives also support TLER, but it defaults to disabled; you do not need RAID edition hard drives.

Also, using TLER will cause misbehaving HDDs to be kicked out of the array even faster - it does NOT prevent it! TLER means Time-Limited Error Recovery, and it LIMITS the time the drive spends recovering bad sectors. As a result, the RAID engine gets an I/O error from the drive much sooner and kicks it out of the array much sooner.

In an enterprise, mission-critical environment, that's what you want. You don't want your important server to freeze for 30 seconds or even longer. If a disk even so much as scratches its bum, you want to throw it in the waste-bin and replace it with any of the hundred reserve disks lying on your desk. Half a minute of downtime can mean millions in damage in some cases.

So TLER is useful for keeping the array responsive during individual HDD errors, not for preventing drives from dropping out of the RAID array! Using TLER in RAID0/JBOD configurations is also strongly discouraged; you only increase the risk of data loss.

TLER is not very well understood by its users. Virtually all home users do not need it.

By the way, have you considered ZFS?
 
Other than that... the PERC 6/i is the best RAID card in the plan IMHO, given its bad-ass performance,

Yes, it's one of the best RAID cards out there right now, but not because it is blazing fast. It's because it's fast enough, stable, and reliable.
 
I think your assessment of TLER is incorrect. Can someone chime in ...



WD Green drives also support TLER, but it defaults to disabled; you do not need RAID edition hard drives.
[snip: full TLER explanation quoted above - TLER limits error-recovery time so the RAID engine gets an I/O error sooner; it keeps the array responsive, it does not keep drives from dropping out, and most home users do not need it]
 
Hitachi drives are a disaster waiting to happen. No one uses them in the enterprise for a good reason. They are 5 platter drives with a less than stellar warranty. Buy an enterprise drive if you want RAID.
Exactly. Pay the premium for enterprise class drives. There is a reason they cost more.

danman, I see you making these speculations now in multiple threads. Let me ask you: do you have any actual experience with desktop drives in RAID arrays? Have you run any quantity of 5-platter drives alongside 4-platter drives and seen any noticeable difference to make that assertion? Or do you just assume "5 vs 4 platters = 20% more likely to fail"? I used to think that, but experience has proven otherwise. Lastly, exactly *how* is paying 2-3x more for an enterprise-class drive going to protect a home user in day-to-day use, aside from paying double or triple the price of a desktop drive just to get 2 more years of warranty?

Sorry, but there's absolutely no need for people to buy enterprise-class drives for home RAID arrays. Having run desktop drives in hardware arrays for over 10 years, and enterprise drives even longer, and as a current owner of 118 desktop-class drives in active RAID arrays at home, including 68 "5 platter" Hitachis, 48 x 1TB of which have been running 24x7 for over 2 years without so much as a single bad sector developed on any one of them, I *think* I'll take my chances. Contrast that with the 40 x Seagate ES.2 1TB drives running at work, where we see 1-2 failures a month, running alongside 48 x WD GP drives that have only suffered 1 failure in a year; if there was any "disaster waiting to happen", in retrospect it was paying double for those Seagate enterprise drives.

I RMA a WD or Seagate about once a month on average. Conversely, I have never RMA'd a single Hitachi; I don't know what their RMA process even looks like. Maybe I've just been lucky? Maybe not JUST lucky. Maybe FLECOM has just been lucky too, since he in fact runs Hitachi 2TB desktop drives in an enterprise capacity:

"we have a large SAN at work that uses these, 96 of them to be exact, split up into 16 RAID5 arrays in 4x 24-disk enclosures, 4 arrays per enclosure... it gets hammered pretty hard every day and they have been running great, so far so good *knocks on wood*"
 
WD Green drives also support TLER, but it defaults to disabled; you do not need RAID edition hard drives.

No you can't. It's been disabled on new WD consumer drives (Blue, Black, Green) for some time now. Even the ATA-8 SCT commands are disabled on the newer firmwares, e.g.:
Code:
infinity:/home/ktims# smartctl -l scterc,70,70 /dev/sdf
smartctl 5.39 2009-12-09 r2995 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen, http://smartmontools.sourceforge.net

Warning: device does not support SCT Error Recovery Control command
infinity:/home/ktims# smartctl -i /dev/sdf
smartctl 5.39 2009-12-09 r2995 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Blue Serial ATA family
Device Model:     WDC WD5000AAKS-00A7B2
Serial Number:    WD-WCASY8162250
Firmware Version: 01.03B01
User Capacity:    500,107,862,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Tue Feb 16 10:11:05 2010 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

infinity:/home/ktims# smartctl -l scterc,70,70 /dev/sde
smartctl 5.39 2009-12-09 r2995 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen, http://smartmontools.sourceforge.net

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
infinity:/home/ktims# smartctl -i /dev/sde
smartctl 5.39 2009-12-09 r2995 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-9 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Blue Serial ATA family
Device Model:     WDC WD5000AAKS-00YGA0
Serial Number:    WD-WCAS82285949
Firmware Version: 12.01C02
User Capacity:    500,107,862,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Tue Feb 16 10:11:13 2010 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Though I agree with the rest of your analysis. TLER will prevent the array from freezing for more than a few seconds, and I think some 'enterprise' RAID cards might do a poor job of dropping disks due to timeouts (assuming they'll fail out on their own and return an error).
 
Where did you get the idea that the enterprise drives are the same as consumer drives?
[snip: the Warranty / MTTF-MTBF / Error Recovery / Bit Error Rate / Vibration resistance / Target market segment / Request Queuing list quoted in full above]


I'm just going to touch on a couple of things...

Error Recovery
Consumer and Enterprise = same. WD just disables TLER in their consumer drives to make you buy their enterprise drives.

Target market segment
You can't really "design" a hard drive to run better in 5x9 operation vs a 7x24 configuration. A 3.5" hard drive will reach any kind of applicable thermal or mechanical equilibrium in an HOUR or less, making the difference between 9 hours and 24 hours moot.
All you can do is manufacture the enterprise drives with better tolerances/better materials, which could help them last a longer OVERALL lifetime, but that has no bearing on whether 9-hour vs 24-hour usage matters.
Given that, I've seen no evidence that this is the case in the first place, at least with WD's offerings.
I'd like to see a survey of 100 enterprise drives vs 100 consumer drives of the same platter count and size. Measure the vibration of each set; I bet they will average out to be the same, though I could be wrong.
Lower vibration across a large sample set would be the best way I can think of to determine whether one set was designed/made with tighter manufacturing tolerances or better materials than the other.

If anything, the 5x9 setup is harder on the drive mechanically, as a disproportionate amount of wear occurs while the drive is still cold and during the transient period.

Request Queuing
I don't buy that one. If the enterprise drive does indeed perform better in some I/O task, it's probably because WD just disabled some feature in the consumer drive. It would be downright bad management / poor engineering to have different electronics for the two lines of drives, given the benefits of sharing them between the lines.
 