Raid setup for a new server.

legrand

I realize that this topic has been beaten to death here; however, after searching through all the posts regarding something similar to my particular application, I still don't think I have a clear answer as to what the right configuration might be.

My question: RAID 0+1 (is this RAID 10?) or RAID 5?
We are purchasing a new dual core Opteron server for "work" (a small business with about 20 employees).
The motherboard is a Tyan K8S (s2882gnr-d).
The hard drives will all be SATA II 250 GB WDs w/16 MB cache.
The controller card we are considering is a 3ware Escalade 9500S-4LP 4-port SATA RAID controller.
This server will be running a DBA application as well as operating as a file server for images (large pictures) and CAD files (some 10+ MB in size).

The options I am looking at are:

1. Use the onboard Silicon Image Sil3114 to run a 0+1 of the 4 drives.
2. Buy the 3ware card and run that in 0+1.
3. Buy the 3ware card and run it in RAID 5.

Can anyone point to which would be the best solution as well as explain logically why this is the case?

Thank you.
 
RAID 0+1 is slightly different from RAID 10, but really only in the way the data is laid out and in which drives can fail before you lose the array. Performance-wise they are pretty close. Neither is anything like RAID 5.

For a DB/fileserver you have to consider what is going to be more important: space or speed. If you want the greatest usable space, RAID 5 is the best option; however, it tends to be a bit slower on writes because parity has to be calculated and written. For example, 4x250 GB drives in RAID 5 give you 750 GB available (the capacity of n-1 drives).

RAID 0+1 stripes the data across two drives, then mirrors the striped array to the other two drives. You can lose a single drive without any issues. Speeds tend to be close to RAID 0 speeds for reads and writes. For example, if you have 4x250 GB in RAID 0+1 you will only have 500 GB available on the array.

RAID 10 mirrors pairs of drives, then stripes across the mirrored pairs. Up to two drives can fail (one from each mirrored pair) and the array will still chug along without an issue. Available size is the same as RAID 0+1.
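To make the capacity math above concrete, here's a small Python sketch (my own illustration, nothing from any controller's tooling) that computes usable space for the RAID levels discussed:

```python
# Usable capacity for common RAID levels, assuming n identical drives
# of size_gb each. Illustration only; real arrays lose a bit more to
# formatting and metadata.

def usable_gb(level, n, size_gb):
    if level == "raid0":                  # striping, no redundancy
        return n * size_gb
    if level == "raid1":                  # mirroring: half the raw space
        return n * size_gb // 2
    if level == "raid5":                  # one drive's worth lost to parity
        return (n - 1) * size_gb
    if level in ("raid10", "raid0+1"):    # mirrored, so half the raw space
        return n * size_gb // 2
    raise ValueError(f"unknown level: {level}")

# The 4x250 GB configuration from the thread:
for level in ("raid5", "raid0+1", "raid10"):
    print(level, usable_gb(level, 4, 250), "GB")
```

This matches the numbers quoted above: 750 GB for RAID 5 and 500 GB for either 0+1 or 10.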

Another thing to consider is network speed: are you running 10, 100, or 1000 Mbit? If either of the first two, there is no way in hell that you will hit the array hard enough to need the speed increase for serving files. The extra speed would still help the local server process data from the DB, just not for sending it out.
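As a rough sanity check on that point, here's a back-of-the-envelope transfer-time estimate for one of the 10 MB CAD files mentioned earlier, ignoring protocol overhead (so real numbers will be somewhat worse):

```python
# Rough transfer time for a file over different Ethernet speeds.
# Ignores protocol overhead and disk latency; illustration only.

def transfer_seconds(file_mb, link_mbit):
    link_mb_per_s = link_mbit / 8   # megabits/s -> megabytes/s
    return file_mb / link_mb_per_s

for link in (10, 100, 1000):
    print(f"{link:>4} Mbit: {transfer_seconds(10, link):.2f} s for a 10 MB file")
```

At 100 Mbit the wire tops out around 12.5 MB/s, well below what even a single modern SATA drive can sustain sequentially, so the network, not the array, is the bottleneck for file serving.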


Sorry if any of this seems incoherent; it's early, and I've got to get to class.
 
If you run the 3Ware, pick up the RAID Edition Caviars. If you run the SI controller, pick up the Desktop version. The Maxtor MaxLine drives are probably a better choice for this application unless you want to get the 400GB Caviar RE2.

Depending on the size of and load on the DBA application, SCSI may be a better choice there, but I understand if you can't afford it. At a minimum, you will need NCQ-aware drives for the server work, especially if you will have all three of the applications you need on one array.

If you can get by with 500GB of capacity, then 0+1 is the best choice for performance. A three drive RAID-5 for the images and CAD, and a two drive RAID-1 with SCSI drives would be the best choice for the DBA.
 
Please, oh please, do not plan on putting Caviars in a server environment unless it is the RE2. The Caviar can NOT handle the load. Please consider the MaxLine III for your application. The Caviar cannot even handle the workload of a low-end, occasional-use desktop without sustaining a high failure rate, much less a demanding environment such as yours.
 
DougLite said:
If you can get by with 500GB of capacity, then 0+1 is the best choice for performance. A three drive RAID-5 for the images and CAD, and a two drive RAID-1 with SCSI drives would be the best choice for the DBA.
QFT

The best thing would be to have the DBA on separate disks from the file storage, especially if performance is or will ever be an issue. GigE with jumbo frames all the way; the second part is important. The rule for databases is SAME - stripe and mirror everything. No parity. Since you're on a relatively low budget, just mirror, so you'll have a) redundancy and b) two disks to read from. With SCSI or a good controller like the 3ware, a mirror should be able to handle twice as many reads as a single disk, and the same number of writes.
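A quick way to see why a mirror helps reads but not writes, sketched in Python with a made-up per-disk IOPS figure (the number is purely illustrative, not a benchmark of any drive in this thread):

```python
# Why a 2-disk mirror roughly doubles read throughput but not writes:
# different reads can be served from either copy, but every write must
# land on both disks. DISK_IOPS below is a hypothetical figure.

DISK_IOPS = 100  # made-up random IOPS for one disk

def mirror_iops(n_copies=2):
    reads = DISK_IOPS * n_copies   # reads split across the copies
    writes = DISK_IOPS             # each write hits every copy
    return reads, writes

reads, writes = mirror_iops()
print(f"2-disk mirror: ~{reads} read IOPS, ~{writes} write IOPS")
```

That's the intuition behind "two disks to read from" above: read capacity scales with the number of copies, while write capacity stays at a single disk's worth.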

I'd get the 8-port version and 5 or 7 disks. 3 for a raid-5 storage array, and 2 or 4 for either a mirrored or stripe/mirrored DBA array.

 
Thanks for the input thus far. It looks like I want RAID 10. To explain further: there are only about 10 people accessing the DBA application at one time, one person pulling the CAD data, and one person working with larger photo/graphics files. We are getting a 10/100/1000 switch with all this, and will update at least the two computers that are pulling the large files to 1000. I figure the DBA application running on the server isn't transferring a lot of data; it just needs to be able to access it quickly, and the system needs to be able to process inquiries that span many files quickly.

Space is of no concern, as 100 GB would be plenty of space for at least 4 or 5 years.

More disturbing, though, is the concern with the WD Caviar SE HDDs. Are they really that bad? I cannot go the SCSI route (too costly for us), and the MaxLine would be an option that I would have to install here (not available from the system builder). I can do this (install them myself), but I don't know how management will feel about it. The only other SATA II drives available from this vendor are Samsungs, as the RAID versions of the WDs are also not available. They do have the WD 400 RE2s, though...
 
While SirKenin does exaggerate the poor reliability of Caviars, he makes a good point. Choose midline/nearline/enterprise drives for your application. WD and Maxtor make such products with the RAID Edition and MaxLine lines, respectively. The price premium for the MaxLine is near nil - less than $5 for 250 GB drives from ZZF. No reason not to get the peace of mind when the price delta is that small.
 
DougLite said:
While SirKenin does exaggerate poor reliability of Caviars, he does make a good point.
But... But... But I love Caviars. They're my mostest favoritest drives EVAH.

Hehe.

Ok. Seriously. I have RMAed what seems like a zillion Caviar drives. I have seen such a high failure rate on them that they are second only to Fujitsu. Even the Deathstars didn't seem as bad, although I didn't sell as many of those as I could have, I guess.

Yes, I sell computer everything and do service as well. I guess that is no secret. What I have learned in the 18 years I have been doing this (and 8 years in business for myself) is that WD just simply isn't up to the task for anything over moderate usage environments. I have no problems selling the SE series and above ONLY to clients provided that I can ascertain the environment they are running in. RMAs cost me money, and I hate that. I can think of one client off the top of my head that is running 5 of my machines with the SEs in them in an office environment, mostly clerical, where the abuse to the drive is minimal. Out of those 5 drives, I lost one so far. Hour and a half drive each way to get the drive at my expense. Tell me that isn't a piss off.

I have two SEs and a 74GB Raptor here. The Raptor is phenomenal. A very reliable drive that takes a hell of a beating and keeps on ticking. I love this thing. The one SE is a slave. I hardly ever use it, but it is there storing some of my business documents and stuff that I need. No big deal, but it is doing its job just fine. That's all I ask.

The other one I dumped in one of the servers. BIG MISTAKE. The stupid thing cannot handle the load; you put it under load and it pukes. That server will be due for some upgrades soon. For now I'm living with the drive because I don't want to pull the whole damn thing apart just to redo it again shortly.

Please don't make that same mistake. They were never designed to be put under heavy loads. When they are, they choke and/or die... *click* *click* *click*... or they corrupt all the data. When I bash the Caviar heaviest, I'm bashing the straight Caviars. They aren't worthy of anything besides being relegated to the position of doorstop. They are absolute crap. The SE has its place, but your server isn't it.

Listen to Doug or me. The MaxLine III is definitely your best bet, and it comes with a 5-year warranty.
 
I'd get the 8-port version and 5 or 7 disks. 3 for a raid-5 storage array, and 2 or 4 for either a mirrored or stripe/mirrored DBA array.
Please forgive my ignorance, but are you saying that I can run both a RAID 5 and a RAID 10 setup, using 8 drives, off of one 8-port card? I would think you'd only be able to set up one type of RAID per card, but, then again, I know nothing... :)
 
legrand said:
Please forgive my ignorance, but are you saying that I can run both a RAID 5 and a RAID 10 setup, using 8 drives, off of one 8-port card? I would think you'd only be able to set up one type of RAID per card, but, then again, I know nothing... :)
You would be able to run one RAID 5 with 3 or 4 disks and one 0+1 or 10 with the other 4 disks. You cannot, however, get only 4 disks and use them for both RAID 5 and RAID 10 at the same time. You can make multiple arrays of different sorts on the same card, but not on the same disks.

Hope that clears it up at least a little. In essence, the answer to your question is yes: you wouldn't need two separate 4-port cards for this setup. The card can handle multiple arrays (up to 4 RAID 0 or RAID 1 arrays on an 8-port card), and multiple types of arrays, at the same time.
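The constraint above ("multiple arrays on the same card, but not on the same disks") can be sketched as a quick validation check. The port assignments here are hypothetical, just to illustrate the rule for an 8-port card:

```python
# Check a port-to-array layout for an 8-port controller: each port
# (disk) may belong to at most one array. Port numbers are made up
# for illustration.

def validate_layout(arrays, n_ports=8):
    used = set()
    for name, ports in arrays.items():
        overlap = used & set(ports)
        if overlap:
            raise ValueError(f"{name} reuses ports {sorted(overlap)}")
        if any(p < 0 or p >= n_ports for p in ports):
            raise ValueError(f"{name} uses a port outside 0..{n_ports - 1}")
        used |= set(ports)
    return True

# One RAID 5 and one RAID 10 on distinct ports of the same card: fine.
print(validate_layout({"raid5": [0, 1, 2], "raid10": [3, 4, 5, 6]}))

# The same disks claimed by two arrays: rejected.
try:
    validate_layout({"raid5": [0, 1, 2], "raid10": [0, 1, 2, 3]})
except ValueError as e:
    print("rejected:", e)
```

The first layout mirrors the 8-port suggestion earlier in the thread: a 3-disk RAID 5 for file storage and a 4-disk RAID 10 for the database, with one port to spare.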

 