Hardware RAID the solution for hard drive speed woes?

Nate Finch

CPUs get faster and faster. Memory gets cheaper and cheaper. Hard drives... well, they still suck.

99% of the time when something is slowing down my machine at work or at home, it's because something is thrashing the hard drive. I don't want that to be the limiting factor anymore.

SSDs are still too unreliable and too expensive for me right now - there are still way too many unknowns about how they behave in real-world use for me to be dropping 600 bucks on a 500 GB drive.

Would a RAID 5 array (or some other configuration) with a hardware controller and speedy SATA drives be a very noticeable jump? I'm looking at boot times and hard drive intensive tasks, like converting photos or virus scanning.

I'm thinking four-ish WD Black 500 or 640GB drives in RAID 5, using a lower-end ($250-ish) PCI Express hardware RAID controller. The drives are cheap enough that I can afford to buy a bunch.

Can anyone give me an idea if this sort of setup is going to be stupid fast, or am I still going to sit there watching my hard drive light flicker? Presume the rest of my rig is brand new (Core i7, lots of RAM, 64-bit OS). If there's another setup that would be faster (without breaking the bank), please let me know.

Thanks,
-Nate
 
RAID 5 would be good for read performance. A PERC 5 or an Adaptec 3085 (external connectors, but I'm just going to loop the cables back in) is cheap on eBay. The 3085 is more expensive, but still cheap on eBay for what it is (very cheap, in fact). You'll need the battery backup unit for either one if you're running RAID 5.
 
@OP

RAID 5 is a good balance between (relative) data safety, uptime and cost. It requires a minimum of 3 disks; the model is N+1, where N disks' worth of capacity holds data and one disk's worth holds parity (the parity is actually distributed across all the drives rather than sitting on a single dedicated disk). Its write performance is poor, though, because every write also has to update parity. RAID 6 is more of the same, but the model is N+2 (a second set of parity), so it can survive two drive failures.


RAID 0 is the absolute fastest thanks to interleaved reads and writes, but it has NO safety at all - lose one disk and the whole array is gone. It requires a minimum of 2 disks. RAID 0 makes a good scratch volume for things like video and image processing.

RAID 1 requires a minimum of 2 disks and is the safest of the basic levels, but most controllers are not smart enough to interleave reads across the two mirrors, so you usually won't get RAID 0-like read performance.

RAID 10 is gaining popularity as being both fast and safe, and fast enough for most write-heavy tasks. However, the minimum setup is 4 disks and your effective storage is half the raw capacity, so 4x 1TB disks = 2TB of usable storage. There's a rough capacity/performance sketch below.
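
As a very rough illustration of the trade-offs above, here is a minimal Python sketch that works out usable capacity and ballpark read/write multipliers for the common levels. The 100 MB/s single-drive speed and the write-penalty divisors are rule-of-thumb assumptions for illustration only, not measurements of any particular controller or drive.

# Rough usable-capacity and throughput multipliers for common RAID levels.
# The numbers are rules of thumb for illustration, not controller benchmarks.
def raid_estimate(level, disks, disk_tb, single_mbps=100.0):
    """Return (usable TB, approx read MB/s, approx write MB/s)."""
    if level == 0:                 # striping, no redundancy
        return disks * disk_tb, disks * single_mbps, disks * single_mbps
    if level == 1:                 # mirroring (2 disks assumed)
        return disk_tb, single_mbps, single_mbps
    if level == 5:                 # striping + one disk's worth of parity
        # /4 reflects the classic small-write penalty, applied here as a crude factor.
        return (disks - 1) * disk_tb, (disks - 1) * single_mbps, (disks - 1) * single_mbps / 4
    if level == 6:                 # striping + two disks' worth of parity
        return (disks - 2) * disk_tb, (disks - 2) * single_mbps, (disks - 2) * single_mbps / 6
    if level == 10:                # striped mirrors: half the raw capacity
        return disks // 2 * disk_tb, disks * single_mbps, disks // 2 * single_mbps
    raise ValueError("unsupported RAID level")

for level, disks in ((0, 2), (1, 2), (5, 4), (6, 4), (10, 4)):
    usable, read, write = raid_estimate(level, disks, 1.0)
    print(f"RAID {level:>2} with {disks}x 1TB: {usable:.1f} TB usable, "
          f"~{read:.0f} MB/s read, ~{write:.0f} MB/s write")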

Also, be aware of what your chosen hardware RAID controller supports. The more modes a controller supports, the more expensive it will generally be.

Hope this helps.
 
RAID 10 is what I'm going with. Slightly lower sequential read speed, but better random performance in most cases, and MUCH better write performance.
 
Don't bother with RAID 5 for your OS and apps. Even if you have a nice hardware controller, it's still a waste.


Use a good RAID 5/6 (or even RAID 1) array for data storage, and RAID 0 for your apps/OS. I used to run a four-drive stripe of 160GB two-platter Raptors for my OS (a 4-way stripe on the ICH10R). That was a friggin' fast setup.
 
Reading the AnandTech article now, thanks for the pointer. I'd seen so many articles about problems with SSDs that I had just discounted them for now.
 
To add to the above about SSDs - you say that $600 is too much to drop on a 500GB drive. But what on earth are you doing that would require 500GB on an SSD?

If you are looking for a general speed increase for a desktop PC without breaking the bank, get a 120GB Vertex. Install the OS and apps to this, and move data (My Documents, media, everything) to a RAID 1 array (1TB Blacks?). Providing you have a decent mobo, this should give a hell of a boost for most users. Just use a decent backup strategy (make OS drive images regularly & store them on the RAID 1) so that if anything does go wrong with the SSD you won't lose anything.

This gives you 1TB of redundant storage plus 120GB of fast OS disk for around $550. A bargain if you ask me...
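
For what it's worth, here's a minimal Python sketch of the kind of "copy anything new or changed over to the RAID 1 array" routine described above. The paths are hypothetical placeholders, and this only covers the data side - regular OS drive images would still come from a proper imaging tool.

import os, shutil

SRC = r"C:\Users\Nate"      # hypothetical: data living on the SSD
DST = r"E:\Backup\Nate"     # hypothetical: folder on the RAID 1 array

def mirror_new_and_changed(src, dst):
    """Copy files that are missing on the backup or newer on the source."""
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)   # copy2 preserves timestamps

if __name__ == "__main__":
    mirror_new_and_changed(SRC, DST)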
 
To those wondering why I would need a big SSD... it's mostly for image post-processing. I need to be able to take 300-500 12-meg RAW files at a time and convert them into JPEGs. I figure that with a Core i7 and lots of RAM, the hard drive is going to be a lot more of a bottleneck than processing speed. I might be wrong there - please feel free to let me know if you think so.

Every time I take photos for a day, I come back with ~4 gigs of RAW files that need processing. I'm not looking to store HD movies or anything on the drive, but I don't want to have to constantly move my files around so I can have the files I want to work with today on the right drive. That's a pain in the ass and defeats the purpose of the fast drive in the first place.

I hadn't been contemplating the OCZ drives, mostly because what I'd heard was "MLC = bad, SLC = good". Obviously, that's not only simplistic, but wrong in many cases.

I'm pretty sure I'll need at least 300 gigs to be comfortable, and don't really want to spend more than $500 on storage.
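
As a rough sanity check on where the bottleneck actually sits, here's a back-of-the-envelope Python sketch. The drive throughput and per-file conversion time are made-up assumptions for illustration; with these particular numbers the CPU work, not the disk reads, dominates.

files = 500            # RAW files in one batch
raw_mb = 12            # size of each RAW file, MB
seq_mbps = 100         # assumed sequential read speed of a single SATA drive, MB/s
cpu_s_per_file = 2.0   # assumed CPU time to convert one RAW to JPEG, seconds
threads = 8            # Core i7 with Hyper-Threading

io_seconds = files * raw_mb / seq_mbps           # time spent just reading the RAWs
cpu_seconds = files * cpu_s_per_file / threads   # conversion time spread over all threads

print(f"pure disk read time  : {io_seconds:.0f} s")    # ~60 s with these numbers
print(f"pure CPU convert time: {cpu_seconds:.0f} s")   # ~125 s with these numbers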
 
[LYL]Homer said:
Just so everything is level here, including your assumptions about SSDs, have you read the AnandTech article on SSDs?

If you haven't, I would highly recommend doing so before moving ahead.

X2 - that article is the best read on the internet right now about SSDs.
 
The solution to your problem may be a few different things.
When you say "something is thrashing the hard drive" - may I recommend that you look into this a bit further? What is causing it? Do you have memory pressure? What kind of apps are you running? What is your expectation of "fast"? Do you need sequential read speed, or fast seek times? Asking and answering these questions can really help you make an informed decision. The possible solutions can run into significant costs, and you'll want to spend that $$ wisely.
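
If you want to put numbers on "something is thrashing the hard drive", a quick sketch like the one below (using the third-party psutil package, which you'd need to install) can show which processes have done the most disk I/O. It's just one way to gather evidence, not a prescription.

import psutil

# Snapshot cumulative disk I/O per process and list the heaviest readers/writers.
rows = []
for p in psutil.process_iter(['pid', 'name']):
    try:
        io = p.io_counters()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    rows.append((io.read_bytes + io.write_bytes, p.info['pid'], p.info['name']))

for total, pid, name in sorted(rows, reverse=True)[:10]:
    print(f"{total / 2**20:10.1f} MB  {pid:>6}  {name}")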

If you think RAID is part of the answer, you should read this article first: Understanding RAID - it's quite thorough and informative!
 
Nate Finch said:
To those wondering why I would need a big SSD... it's mostly for image post-processing. I'm pretty sure I'll need at least 300 gigs to be comfortable, and don't really want to spend more than $500 on storage.

Nate, it's always good to start with the desired usage stated. I believe I'm in the same boat as you (although I tend to chew through 10GB - 20GB a shoot ;) ). I currently use Lightroom and Photoshop CS3. My OS and apps are on SCSI, the data is on RAID 1 arrays on SATA, and - here's the important bit - my scratch disks are on separate SCSI disks. That includes the OS page file, the Photoshop scratch AND the Lightroom previews (I'm sure Aperture has a similar methodology).

The disk activity you refer to may be the background generation of the Lightroom previews (remember, by default these are only kept for a month before being wiped, so if you go back to an older shoot they get regenerated). There's no need to go SCSI in your case - it just so happened I had a few 10K disks around and put them to good use. When converting the RAWs to JPEGs/PSDs, the 8 threads of my i7 920 essentially max out, so there is no disk bottleneck that I know of.

My advice if you are building new: get yourself a P6T WS board (6 SATA ports + 2 SAS/SATA ports), a 64GB SSD for the OS/apps, a couple of low-capacity but fast SATA drives in RAID 0 for your paging/scratch/previews, and then some large SATA drives in RAID 1 or 5 for the data stores. Also include either an external DAS box or a caddy with another large-capacity drive for backup (RAID is NOT backup) - that way you can put those backups in a fire safe. NB: if going the DAS route, get eSATA - the board I mentioned has two connectors, and it really is that much faster for backups than USB or FireWire.

The only bottleneck I am still trying to work out is the PSD file format. For some reason, once my file size gets above the 250MB mark (which it does on most of my full edits - 16-bit, full resolution) and I save, the CPUs show only about 15% usage and drive activity is near zero. I have a sneaking suspicion the PSD format does not like multi-threading, since I have heard so many developers complain about the format!
 
Yeah, I was gonna say that I doubt the disk would ever be much of a bottleneck in converting RAW -> JPEG, since that is mainly CPU intensive. Maybe on an 8-core system you would need more than one drive, but I would think a single drive would let you max out the CPU.
 
Nate, what is your current drive config?

RAID 5 != performance. RAID 5 is like the bastard child of JBOD and mirroring. RAID 5 is for budget installs requiring minimal hardware protection and maximum storage.

Independent volumes would be your best bet. A separate source and target drive will reap greater performance than concurrent I/O against the same storage device. With a single RAID 5 volume, you not only get the write penalty, you get the synchronous I/O delay from mixing read and write cycles on the same disks.
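
To make the source/target point concrete, here's a small Python sketch that times copying a large test file to the same volume versus to a second volume; the paths are hypothetical. On a single spindle you'd expect the same-drive copy to be noticeably slower, because the heads have to seek back and forth between the read and write streams.

import shutil, time

SRC         = r"D:\test\big_file.bin"   # hypothetical large test file on drive D:
SAME_DRIVE  = r"D:\test\copy.bin"       # target on the same physical drive
OTHER_DRIVE = r"E:\test\copy.bin"       # target on a different physical drive

def timed_copy(src, dst):
    """Copy src to dst and return the elapsed time in seconds."""
    start = time.time()
    shutil.copyfile(src, dst)
    return time.time() - start

print("same drive :", timed_copy(SRC, SAME_DRIVE), "s")
print("other drive:", timed_copy(SRC, OTHER_DRIVE), "s")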
 
SSDs are the way to go for speed. That and more RAM.

Be very careful with that assumption - I suggest people take the time to read the 32-page in-depth AnandTech article. If you're using an SSD for the OS, you need to spend the money and get one of the more expensive models (Intel, or OCZ Vertex/Summit), or else you'll be swearing and cursing that you ever bought the thing!
 
If you haven't maxed out the RAM your system can use (determined by the OS or motherboard), do that first. Any modern OS will only thrash the HD after your RAM is all used up.

You will see more improvement going to 4GB (32-bit OS) or your motherboard's max (64-bit OS) in most cases than you will going to a faster HD.

The only exception is the initial load off of the HD for whatever program you are using (e.g. level loading in a game, or opening a file).
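
A quick way to check whether RAM really is the problem is to look at memory and pagefile usage while the machine feels slow - for example with a sketch like this, again using the third-party psutil package:

import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"RAM : {vm.percent:.0f}% used ({vm.available / 2**30:.1f} GB available)")
print(f"Swap: {sw.percent:.0f}% used ({sw.used / 2**30:.1f} GB in the pagefile)")

# Rule of thumb (assumption): sustained heavy swap usage while the disk light is
# on usually means more RAM will help more than a faster drive.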
 
Thanks for all the tips. Right now I'm working off a 3-year-old laptop (Inspiron 9300), which I know is not something to compare *anything* against... but even my Core 2 Duo at work thrashes the hard drive with 4 gigs of RAM installed (granted, it's running 32-bit Vista, so it only gets to use about 3 gigs of that, *and* I tend to run a ton of memory-heavy apps).

Lightroom is one of my biggest hogs, and it sounds like that's probably going to be mostly CPU bound... but I'm also just tired of waiting on my computer. It's 2009, for Christ's sake! I shouldn't ever click on an app and have to wait 15 seconds for it to pop up, even if I'm doing something else in the background.

It sounds like SSDs (the good ones) are the way to go if I want to end this kind of waiting.

cyberjt - thanks for the tips on optimizing Lightroom & Photoshop for disk access. I'll work on designing my new system with that in mind.

This is actually part of the design of a new desktop. I'll keep in mind that a separate scratch disk for Photoshop & Lightroom could make a big difference.
 
Nate Finch said:
SSDs are still too unreliable and too expensive for me right now... Would a RAID 5 array (or some other configuration) with a hardware controller and speedy SATA drives be a very noticeable jump? ... If there's another setup that would be faster (without breaking the bank), please let me know.

Hard drives have gotten faster and tremendously cheaper.

SSDs aren't unreliable.

SSDs aren't expensive once you're looking at buying a RAID array for speed anyway.
 
RAID 5 is for budget installs requiring minimal hardware protection and maximum storage.

Perhaps what you're getting at is to match the solution to the need. Which is cool.

But what you said simply isn't true. I know of enterprise SAN solutions that use RAID 5 - and I mean some very large installs for business-critical apps. Granted, some of the arrays here where I work also have geographic redundancy (remote mirroring), but I just didn't want folks thinking that RAID 5 is some kind of kludge...
 
If you've got the money, go SSD - you will be glad you did. Just don't expect to get exactly what you're buying; most likely you won't reach the quoted top speeds.
 
But what you said simply isn't true. I know of enterprise SAN solutions that use RAID 5 - and I mean some very large installs for business-critical apps.

This can be true - but remember that enterprise SANs usually use higher-quality disks and usually max out at 300GB per disk, so the risk of a failure during a rebuild is reduced. The issue comes when you create RAID 5 arrays from 1TB+ disks across 3+ drives: the rebuild time, and with it the chance of a second error or failure during the rebuild (especially if you're not using enterprise-grade hardware controllers), goes up greatly. That's why a lot of enterprise arrays using the larger disks (i.e. 146-300GB) run RAID 6 or better, so that two failures are required to lose data.

My stats over 7 years for around 300 desktop stations showed desktop disk failures at 3-5% per annum (don't even ask about the laptop failures - backup, backup, backup is all I can say). For the server storage (probably around 50 spindles total at any one time) I had only 1 failure in 7 years.
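
To put a rough number on that rebuild risk, here's a back-of-the-envelope Python sketch. The 1-in-10^14 unrecoverable read error rate is a typical consumer-drive spec-sheet figure and the capacities are illustrative, so treat the output as an order-of-magnitude estimate rather than a prediction for any particular array.

# Probability of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array (illustrative assumptions only).
URE_PER_BIT = 1e-14   # typical consumer SATA spec: 1 error per 10^14 bits read

def rebuild_ure_probability(surviving_disks, disk_tb):
    """Chance of at least one URE when every surviving disk is read in full."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return 1 - (1 - URE_PER_BIT) ** bits_read

for disk_tb in (0.3, 1.0):
    p = rebuild_ure_probability(surviving_disks=3, disk_tb=disk_tb)
    print(f"4x {disk_tb:.1f} TB RAID 5 rebuild: ~{p:.0%} chance of hitting a URE")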
 