HELP: Finally got my 8x750GB RAID PC up and running, but is it configured correctly?

pmedia

Hi there,
I'm a total RAID noob and need your config advice and just general info. I have been struggling with my new storage server for our office, and I finally got the parts accepting each other. It was a pain, probably because of the lack of knowledge in my upper section :/

Here are the system specs:

P5WDG2-WS | Asus ATX i975X S775 P4 PCIe Crossfire
Intel Celeron D 351, 3.2 GHz
2 x 512MB original Samsung DDR-II 533 RAM
Antec PSU, TruePower Trio 550W
Intel RAID controller SRCS28X, PCI-X, 8-port SATA-II
8 x Seagate Barracuda SE disks

My idea was to turn all 8 disks into one big 4-5TB storage monster, but I wasn't aware that the Intel RAID card can't handle logical drives larger than 2TB; I just found out :mad: I have been trying to accept this, and then I continued with the Intel RAID DOS config, but the Intel setup manual didn't explain the different options like "cache policy", read/write policy and so on, so I just left everything at the defaults except the stripe size, which I set to 128KB (I read somewhere that it was better when working with larger files, and we work with video and so on?)
I also set the RAID level to 5.

Now, because of the size limitation, I had to create 2 logical drives from the 1 RAID array, each at the max size of 2TB. I have created those and installed WinXP Pro on logical drive 1.


Now onto my questions!!
1) Would it have been better (from a read/write speed point of view) to have 2 RAID arrays and then use disks 1-4 as logical drive 1 and disks 5-8 as logical drive 2?

2) There seems to be constant activity on the disks; is that normal, even after a clean boot where I haven't done anything? I don't understand what these disks are doing; besides the WinXP install, I have only installed a gfx driver and nothing else.

3) Any comments and advice that can help me get this system tuned the best way are really needed.

The use will be for storing backups of our movie/gfx files and as a storage server for the different workstations, so people will work with files directly from the new server via LAN.

Thank you for helping me. :confused:
 
1) What you could do is create two RAID5 arrays and then layer a RAID0 over the top of them (provided your adapter supports this, or a combination of hardware RAID5 + software RAID0), since it sounds like what you're looking for mostly is high STR. That gives decent performance close to RAID0 (STR-wise) with some redundancy (RAID50). It's one of those things you'd have to benchmark yourself to find the best configuration in your situation.

2) Probably some indexing running in the background, or defrag, etc. Could also be the array syncing if you haven't allowed it to complete that process.

As far as the cache policy goes, you'll probably want Write Back (delays writes) vs. Write Through (writes immediately). Be aware that this can potentially lead to data loss during a power failure; a UPS is usually recommended with it. 128K sounds good to me, but if you're working with large files primarily, a higher chunk size might be worth experimenting with.
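To put a rough number on the stripe-size point, here's a quick arithmetic sketch (plain Python, using the 8-drive / 128KB figures from your post; controllers name these settings differently, so treat it as illustrative):

# Rough sketch: data held by one full stripe on an 8-drive RAID5,
# for a few chunk ("stripe element") sizes. Writes smaller than a full
# stripe force a parity read-modify-write, which is what hurts RAID5.
drives, parity_drives = 8, 1
for chunk_kb in (64, 128, 256):
    full_stripe_kb = chunk_kb * (drives - parity_drives)
    print(f"{chunk_kb}KB chunk -> {full_stripe_kb}KB per full stripe")
# With 128KB chunks that's 896KB per stripe, which big video files fill
# easily, so larger chunks tend to favour sequential work like yours.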
 
Sorry, I can't be of any help, but I was wondering: what's going to be stored in your 5TB?

AVIs, PSD gfx files, MySQL backups and so on. I just thought it would be easier for everyone if everything could be found under 1 directory on X: instead of on C:, D:, E:, F:, G:, some on H:, and so on.
 
1) What you could do is create two RAID5 arrays and then layer a RAID0 over the top of them (provided your adapter supports this, or a combination of hardware RAID5 + software RAID0), since it sounds like what you're looking for mostly is high STR. That gives decent performance close to RAID0 (STR-wise) with some redundancy (RAID50). It's one of those things you'd have to benchmark yourself to find the best configuration in your situation.

2) Probably some indexing running in the background, or defrag, etc. Could also be the array syncing if you haven't allowed it to complete that process.

As far as the cache policy goes, you'll probably want Write Back (delays writes) vs. Write Through (writes immediately). Be aware that this can potentially lead to data loss during a power failure; a UPS is usually recommended with it. 128K sounds good to me, but if you're working with large files primarily, a higher chunk size might be worth experimenting with.

Thank you for the reply,

Regarding solution 1, is that to "merge" it into 1 big disk or to get better performance? Keep in mind I'm new at this, so what you write is quite hard to understand.

2) "Could also be the array syncing if you haven't allowed it to complete that process." Could this initial syncing take 4h+? There's nothing besides the Windows install on the disks, and it's still working next to me as I type on this other PC??

3) I'll look into these options, and thank you for the UPS advice. 128KB was the highest available option, but is it normally possible to tweak the array from apps other than the Intel DOS one?
 
I think the main question is: should I start copying files to this server, or delete the 8-disk array, create some new smaller ones, and start from scratch? I can still hear heavy disk activity, and I would like to know if that is normal with RAID 5 in the beginning.
 
It could definitely still be syncing. My 8 x 400 array took something like 30 hours to sync.
 
It could definitely still be syncing. My 8 x 400 array took something like 30 hours to sync.

Wow! Thank you for letting me know. So what is it syncing? Might be a stupid question, but the disks are currently almost empty, right? And if I copy a 1GB file to the server later, will that cause another multi-hour sync session?
 
I'd recommend against multi-layer RAID. I'm amazed an Intel hardware-based SATA-II controller doesn't support >2TB arrays, but I was unable to find any specs during my quick look. Be sure to do a complete check with Intel, including BIOS updates.

Can you return the card?

Software RAID 5 in Linux is always a great alternative. You will easily saturate GigE with that card on larger transfers.
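If you do go the Linux software RAID route and want to see whether an md array has finished building, a minimal sketch along these lines (assuming a standard /proc/mdstat layout, as on any mainstream distro) prints the resync/rebuild progress:

# Minimal sketch: print md array status and any resync/recovery progress.
# Assumes Linux software RAID (md); hardware controllers report this
# through their own tools instead.
with open("/proc/mdstat") as f:
    for line in f:
        line = line.rstrip()
        if line.startswith("md") or "resync" in line or "recovery" in line:
            print(line)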
 
I'd recommend against multi-layer RAID. I'm amazed an Intel hardware-based SATA-II controller doesn't support >2TB arrays, but I was unable to find any specs during my quick look. Be sure to do a complete check with Intel, including BIOS updates.

Can you return the card?

Software RAID 5 in Linux is always a great alternative. You will easily saturate GigE with that card on larger transfers.

Hi, I can't return the card because I got rid of all the packaging and so on; I just assumed all RAID cards could handle their max capacity (# of disks x disk capacity), so I got a shock when reading about the 2TB limit today. I have looked at Intel's website and so far have only found RAID cards with the 2TB limit.

I'm unfamiliar with software RAID but will try to read up on it.
 
Performance really depends on what size files you're moving around, what IO you want, etc. You really don't seem to be using professional-quality components for this. You should invest in a higher-quality motherboard, processor and RAM, given that you'll get the blame for anything that goes wrong. What are your backup and DR strategies for the data? An idea for next time might be to just get a professional to build it for you, so you can avoid any problems and still keep your job if it fails.
 
Performance really depends on what size files you're moving around, what IO you want, etc. You really don't seem to be using professional-quality components for this. You should invest in a higher-quality motherboard, processor and RAM, given that you'll get the blame for anything that goes wrong. What are your backup and DR strategies for the data? An idea for next time might be to just get a professional to build it for you, so you can avoid any problems and still keep your job if it fails.

I understand what you mean. The problem is that we're a small company dealing with large files, so we had to compromise on quality. Our current setup is just some workstations with their separate 300 and 500 gig disks and a lot of clutter, so it will be a huge step up for us to get this structured in a central location, which is the main objective. I'm using RAID 5 to provide the data security and have ordered some spare disks; isn't that security enough if we monitor the disks in the array and swap if necessary? I own the business, so the blame I get is mostly from my GF, because I need to sit here all night working on the best solution within our budget.

Also, the reason for the weird, crappy CPU is that I ordered a dual-core Duo for the system, but the MB didn't support it, and to be on the safe side I bought this 100% compatible one. I got the impression that the MB is a good one and meant for business use.
 
Keep in mind, value doesn't always mean lesser quality. Perhaps just lesser features.

Linux software RAID is fast and easy and free! Keep this in mind as you buy seats for Windoze!

Also, RAID 5 is no substitute for backups! Put one of those 750s in a single enclosure, replicate key files, and take it home with you!
 
Well, with RAID-5 you get parity, so one drive failure doesn't mean that the data is lost. Data can always be corrupted, though, and you can possibly have multiple drive failures. How important is the data to you? Can your business operate without it? If it can, how much will it cost you to lose everything? You should answer those questions and weigh them against what hardware you have. An electrical surge, fire, water leak, etc. means that you lose everything. If it's your company, I would still track down a server-quality motherboard and some ECC RAM. Processor speed isn't a huge deal given that you're just moving data.

"Linux software raid is fast and easy and free! "

By that you mean fast and free, right? I tried making a data storage server just for home use and spent a few hundred just so I wouldn't have to deal with it. Given that the guy can barely get this going, I don't think suggesting that he learn Linux is a good idea. There is a very good reason why Microsoft is used so much and Linux (or I should say the 2343253 different OS variants based on Linux) is still a niche. If he had an IT staff, then it would be good advice, but his business is likely better served by him doing what his business does versus pulling his hair out dealing with Linux issues.

That being said, longblock's backup advice is very good.
 
Thank you for the reply,

Regarding solution 1, is that to "merge" it into 1 big disk or to get better performance? Keep in mind I'm new at this, so what you write is quite hard to understand.

2) "Could also be the array syncing if you haven't allowed it to complete that process." Could this initial syncing take 4h+? There's nothing besides the Windows install on the disks, and it's still working next to me as I type on this other PC??

3) I'll look into these options, and thank you for the UPS advice. 128KB was the highest available option, but is it normally possible to tweak the array from apps other than the Intel DOS one?

Solution 1 is primarily for some redundancy with some of the benefits (and drawbacks) of high STR performance. It's mostly a balance between full RAID5 and full RAID0. I haven't dabbled too much with it; I went RAID6 on my 8-drive array, and 160MB/s writes was enough for me. It does add a level of complexity, however.

As far as 2 goes, it really depends on the hardware. I've seen them take as little as an hour or two, all the way up to over a day. My software RAID6 takes about 3 hours to sync 8 x 320GB drives, for example; on another machine it took over a day. If you keep rebooting the machine before it completes, it may be starting all over.

I can't speak for that Intel controller you have; 128KB may be the highest you can go with that specific one.
 
By that you mean fast and free, right? I tried making a data storage server just for home use and spent a few hundred just so I wouldn't have to deal with it.

Just because you can't figure it out doesn't mean it's hard... But, then, I suppose just because I can figure it out doesn't mean it's easy :p

Taking several hours to sync is not out of the norm - the 8*320 array my brother built took a day or so.

Before you put this machine into operation, I'd test it to make sure it's working like you expect. Unplug a disk and make sure the files are still accessible, then plug it back in and see if it rebuilds.

I strongly suggest you don't run your OS from the RAID 5 array. Buy a separate boot disk and put the OS and swapfile on that.
 
I would run RAID 5+0, striped, and back up.
Put the OS on a SATA drive on the Intel, and have the RAID array set up for storage.
Hope you have good cooling and lots of memory to push that data!
 
I would run RAID 5+0, striped, and back up.
Put the OS on a SATA drive on the Intel, and have the RAID array set up for storage.
Hope you have good cooling and lots of memory to push that data!


Could you please be specific? I mean, explain it like it was for a 4-year-old :)

I have 8 x 750GB disks and I need as much storage with parity as possible, and as written in the earlier posts, it isn't advised to have the OS on the RAID.

So what do I do exactly?

1-2 arrays? And logical drives and RAID 5? And blah blah... I'm just very confused by this.
 
You could buy a separate boot disk; that would simplify things a good deal.

Building two 4x750 arrays to bypass the 2TB limit seems like your best bet at this point. It'll limit you to 4TB instead of 4.5TB, but that's likely the best you'll get.

Are you doing processing that takes large files and produces large files? If so, you might want to keep one array as the "source" for processing, and make the other the "destination" array. Then you'll get better transfer rates, since the disks won't have to seek as much.
 
What you should probably do is go out and buy yourself 2 more hard disks on the cheap, like 1st-gen 36GB Raptors. Connect them both to the motherboard and set them up as RAID 1. Then install XP or Linux on that array. If you don't know, RAID 1 basically makes a byte-for-byte copy on a second hard disk, so that if one goes down, there's no interruption to service.

Now, with all of the other 750GB disks, you should make 2 RAID 5 arrays. If you don't know, RAID 5 distributes the parity information across all of the disks, so that if one disk dies, you still have your data. Of course, you'll also end up losing ~1.5TB of space, but it would be worth it for redundancy/performance.

After this, you'd want to use RAID 0 to merge the two RAID 5 arrays into one. This can be done easily in Windows.

Alternatively, you could just use Linux and do one large software RAID 5 array across all of the 750GB drives. You'd only lose 750GB that way.
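Purely for comparison, here's a quick back-of-envelope sketch (plain Python, using marketing gigabytes and ignoring formatting overhead) of the usable space for the layouts being tossed around in this thread:

# Usable capacity for 8 x 750GB drives under the layouts discussed above.
disk_tb = 0.75
layouts = {
    "single 8-drive RAID5 (e.g. Linux software RAID5)": (8 - 1) * disk_tb,
    "two 4-drive RAID5s (spanned or striped/RAID50)":   2 * (4 - 1) * disk_tb,
    "8-drive RAID6":                                    (8 - 2) * disk_tb,
    "7-drive RAID5 + 1 hot spare":                      (7 - 1) * disk_tb,
}
for name, tb in layouts.items():
    print(f"{name}: {tb:.2f} TB usable")
# Note: the controller's 2TB-per-logical-drive cap is what trims the
# two-array option down to about 4TB in practice.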
 
Can that board run RAID 6? It's basically RAID 5 with 2 parity drives; lose 2, and the data is still there.
But again, that's a solution for redundancy and uptime, not for backup.
 
I would also recommend using one of the 750GBs as a hotspare. Sure you will lose the space, but in the event that a drive dies, it will start to auto rebuild to the spare drive immediately. If you are working with large files, and/or multiple people are accessing the array, you run the risk of corrupting the data if you continue to use the array in a degraded state.

By having the hotspare (a global hotspare if you decide to run multiple arrays), the risk above is minimized.
 
I have tried to create two RAID arrays, and it resulted in 4 logical drives, since Intel's RAID setup app said it found extra space on Array 1 (Logical 1) that it had to use before it could create the 2nd array. Strange, but OK; now I have 2 x 2TB and 2 x 50GB logical drives.

I have bought a separate boot disk to hold the OS as described in previous posts.

And I merged the two 2TB logical drives in Windows via "My Computer" > Manage, then used the Spanned option and selected the two 2TB dynamic disks. Was this the right way to do it? It shows as V: with approx. 4TB (nice!!) in WinXP now.

I haven't done anything with the 2 x 50GB extra-space dynamic drives.


I have attached an image of the first tests in WinXP.



1) It's very slow when I move a file to the RAID volume, both from the boot disk and via LAN (same speed), but when I copy a file from the RAID volume to the boot disk, it works at normal speed.

2) If you have followed this thread, then you know that this is my 2nd attempt. On the first one, where I used all 8 disks for the RAID array and also stored the OS on it, I noticed heavy activity on the disks for 30 hours (syncing). This time there's no activity at all (besides when I tried to copy files over).

I hope you can help me

Thank you very much for helping me out with this project! :eek:
 
And I merged the two 2TB logical drives in Windows via "My Computer" > Manage, then used the Spanned option and selected the two 2TB dynamic disks. Was this the right way to do it? It shows as V: with approx. 4TB (nice!!) in WinXP now.

1) It's very slow when I move a file to the RAID volume, both from the boot disk and via LAN (same speed), but when I copy a file from the RAID volume to the boot disk, it works at normal speed.

2) If you have followed this thread, then you know that this is my 2nd attempt. On the first one, where I used all 8 disks for the RAID array and also stored the OS on it, I noticed heavy activity on the disks for 30 hours (syncing). This time there's no activity at all (besides when I tried to copy files over).

I hope you can help me

Thank you very much for helping me out with this project! :eek:

Yeah, that would be the way to do a RAID50 with hardware + software. As far as the speed goes, we'd need some more raw numbers; you'd have to get some read/write figures to really see if there's a performance bottleneck. Something like IOMeter would be good. For reference, I get about 260MB/s reads and 160MB/s writes on my 8x320 RAID6 array. Do note though that writes WILL be slower than reads on a RAID5 array; it's just the nature of the beast.
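If getting IOMeter set up is a hassle, a very crude first pass is something like the sketch below (Python; it assumes V: is the spanned volume from your earlier post and the test file name is made up, so change the path to taste). The read figure will be flattered by OS caching, so treat both numbers as ballpark only.

# Crude sequential throughput check: write then re-read a 1GB test file.
import os
import time

path = r"V:\raidtest.bin"            # hypothetical path on the RAID volume
block = b"\0" * (4 * 1024 * 1024)    # 4MB blocks
total = 1024 * 1024 * 1024           # 1GB

start = time.time()
with open(path, "wb") as f:
    for _ in range(total // len(block)):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())             # force data out of the OS write cache
print(f"write: {total / (time.time() - start) / 1e6:.0f} MB/s")

start = time.time()
with open(path, "rb") as f:
    while f.read(len(block)):
        pass
print(f"read:  {total / (time.time() - start) / 1e6:.0f} MB/s")

os.remove(path)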

As far as 2 goes, did it take a while to create the RAID5 arrays when you made them? You may have done the sync part right there. I'm guessing you picked a background sync option the last time you created it.
 
Yeah, that would be the way to do a RAID50 with hardware + software. As far as the speed goes, we'd need some more raw numbers; you'd have to get some read/write figures to really see if there's a performance bottleneck. Something like IOMeter would be good. For reference, I get about 260MB/s reads and 160MB/s writes on my 8x320 RAID6 array. Do note though that writes WILL be slower than reads on a RAID5 array; it's just the nature of the beast.

As far as 2 goes, did it take a while to create the RAID5 arrays when you made them? You may have done the sync part right there. I'm guessing you picked a background sync option the last time you created it.

I believe the volume spanning in his setup is more like JBOD, so it is not RAID 50.

OP, you can use Windows DFS (distributed file system) to show all the shares under one root folder that the user can mount.

Your setup is quite expensive, as you can only use 2/3 of your starting storage space.
Losing 2TB, not including the spare drive, is rather a high cost.
 
Your setup is quite expensive, as you can only use 2/3 of your starting storage space.

I agree; you would be better served by going with a card that supports larger-than-2TB arrays (RR2320). Think of it this way, you either:
A. Lose a $300 drive's worth of space, or
B. Spend $250 on an RR2320 for PCIe or a 2220 for PCI-X, utilize that $300 HDD, and have a single volume.

Either way, the numbers you are putting up SUCK.
 
This is the problem when it comes to building servers. It's the one thing I'd rather buy than build myself, to avoid such problems.
 
This is the problem when it comes to building servers. It's the one thing I'd rather buy than build myself, to avoid such problems.

I disagree. A year ago I knew little about RAID, the differences between host-based/software/hardware RAID, cluster sizes (OK, I knew WHAT they were, just not the impact on performance), Linux, XFS, ext3, and a whole manner of other server-related protocols/hardware/software. Now I am very confident in my knowledge of all of these things.

If I had not built my own fileserver, I would probably be running some lower-performance solution at 2x the price.
 
I agree; you would be better served by going with a card that supports larger-than-2TB arrays (RR2320). Think of it this way, you either:
A. Lose a $300 drive's worth of space, or
B. Spend $250 on an RR2320 for PCIe or a 2220 for PCI-X, utilize that $300 HDD, and have a single volume.

Either way, the numbers you are putting up SUCK.

I say he should sell one hard drive (or keep it as a spare) and the card, and buy a better card.
He would be ahead with a 4.5TB array and a better card for the future.
 
I believe the volume spanning in his setup is more like JBOD, so it is not RAID 50.

OP, you can use Windows DFS (distributed file system) to show all the shares under one root folder that the user can mount.

Your setup is quite expensive, as you can only use 2/3 of your starting storage space.
Losing 2TB, not including the spare drive, is rather a high cost.

Whoops, I was thinking of striping, which is what I think he wanted. You definitely don't want to mess with JBOD. As an edit to the above, then, what you have is some kind of weird combination of RAID5 and JBOD. If that's what you're going for, then that's correct. If you're going for RAID50, you'll want to pick striping.
 
Whoops, I was thinking of striping, which is what I think he wanted. You definitely don't want to mess with JBOD. As an edit to the above, then, what you have is some kind of weird combination of RAID5 and JBOD. If that's what you're going for, then that's correct. If you're going for RAID50, you'll want to pick striping.


I'm pretty confused right now. I'm not sure which posts are directed at me, hehe :)

Should I change from spanning to striping? I want 1 big "drive" X: but still keep the RAID 5 parity security.
 
I'm pretty confused right now. I'm not sure which posts are directed at me, hehe :)

Should I change from spanning to striping? I want 1 big "drive" X: but still keep the RAID 5 parity security.

That one was :p. Personally, I think you're stuck between two bad situations. You can go with two RAID5s and span them. If two drives fail in one of the RAID5 arrays, you lose everything on that array; however, I believe you'll still have the data on the other array because of the JBOD overlay. I haven't messed with JBOD much; someone here would probably be able to give more info about that situation.

If you went with two RAID5s and a stripe over them, you'd almost be stuck in the same position. If one of the RAID5s loses two drives, you'd lose everything, since the data is being split across both via RAID0. However, you'd pick up a little more performance than just JBOD'ing it.

Both solutions provide one drive letter for the entire array. Using DFS as someone suggested above might provide another workaround, but I haven't messed with that ability of Windows in years. I'd mostly have to agree with what has been said above, though: a new controller that supports larger arrays (and a faster interface) would be a much better solution.
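To put some numbers on the two-drive-failure risk being discussed, here's a small counting sketch (Python; it just enumerates which pairs of failed drives would break a 4-drive RAID5 set in the two-array layout):

from itertools import combinations

drives = range(8)
raid5_sets = [set(range(0, 4)), set(range(4, 8))]   # two 4-drive RAID5 arrays

pairs = list(combinations(drives, 2))                # all possible double failures
fatal = [p for p in pairs if any(set(p) <= s for s in raid5_sets)]

print(f"{len(fatal)} of {len(pairs)} double failures kill a RAID5 set")
# -> 12 of 28 (~43%). For comparison: a single 8-drive RAID5 dies on any
# double failure (28 of 28), while an 8-drive RAID6 survives all of them.
# With the stripe (RAID50) those 12 cases lose the whole volume; with the
# span (JBOD), as noted above, the other set's data may survive.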
 