RocketRAID 3560 + (13) 1TB in the works

longblock454

Not their fastest card, but it has an Intel IOP341 (800MHz), plus (13) 1TB Seagate ES SATA drives.

Special backup server for a pile of Solaris 10 machines; the PO for the hardware was submitted today. Supermicro 933 SATA chassis and motherboard, E8400, and it will run Fedora. Probably 3-4 weeks before it's up and running.

Besides the usual performance tests (bonnie++, dd) any other requests? I should have time to cruise through some RAID 0 and 5 tests.

Requested Tests:

1. Array Init time (movax)
 
What RAID level are you planning on running? 5/6, out of 13 drives? :eek:
 
In the end it will be RAID5 plus a hot spare.

I'd recommend against that... a hot spare just means it can start rebuilding immediately: you still have a huge (probably several days with an array that size) rebuild time during which another failure could kill the RAID... :eek:

I'd go with RAID 6 if at all possible... if not, perhaps RAID 15 and a couple more drives, though if the PO has already been made there's not much you can do ;)
 
RAID6 has been a thought, just didn't like the idea of the array running degraded till I got another PO cut for a replacement drive!

For the size I need I could also get away with RAID6 + a spare. First things first though, let's pound on it with some benchmarks!
 
Yeah, but with RAID 5EE you'd be running unprotected between the time your first disk fails and the time the array finishes rebuilding, which, as mentioned above, can be the greater part of a day to a few days, leaving your data at risk if another disk were to fail. :eek:

RAID 6 at least will offer you protection from a second drive failure while you wait for a replacement for the first. Unless the idea of using n-2 disks (which is moot, since RAID 5EE is essentially n-2 drive usage anyway) or the slight impact on overall performance from the dual parity is a dealbreaker, I don't see why you'd use RAID 5EE over RAID 6, personally speaking.
 
Raid 6 degraded (minus 1 drive) > raid 5 in rebuild status.


Do raid 6!
(In other words if you are going to dedicate the resources to having a hot spare, you might as well fill it w/data and use it...)
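For what it's worth, the usable capacity works out about the same either way on (13) 1TB drives; rough math, treating each drive as a flat 1TB (illustrative only):

Code:
# RAID 5 + hot spare: 12 drives in the array, 1 drive of parity, 1 drive sitting idle
$ echo "$(( 13 - 1 - 1 )) TB usable"
11 TB usable
# RAID 6 across all 13 drives: 2 drives of parity, no idle spare
$ echo "$(( 13 - 2 )) TB usable"
11 TB usable

Same usable space, but the RAID 6 layout can eat a second failure during the rebuild window.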
 
Yeah, I'm already convinced. At first I didn't like the idea of having the array degraded till I could get a replacement, but as mentioned several times above, these fairly new RAID6 controllers really make RAID5 + hot spare irrelevant.
 
"I'd recommend against that... a hot spare just means it can start rebuilding immediately: you still have a huge (probably several days with an array that size) rebuild time during which another failure could kill the RAID..."

That cannot be true. Unless hardware cards are total crap. I mean my software raid can rebuild a 8 drive array in a couple hours. All drives are 'maxed' at ~70MB/sec, 7 of them reading and one writing. It makes no difference if I have a 4 drive array or 8 drive array, the time is about 1TB / 70MB/sec = ~3.9hours. Eventually I'd be limited by bandwidth, but worst case is if 8 drive is perfectly at limit, 16 drives would rebuild at 35MB/sec and take 8 hours for rebuild.
 
"I'd recommend against that... a hot spare just means it can start rebuilding immediately: you still have a huge (probably several days with an array that size) rebuild time during which another failure could kill the RAID..."

That cannot be true, unless hardware cards are total crap. I mean, my software RAID can rebuild an 8-drive array in a couple of hours. All drives are 'maxed' at ~70MB/sec, 7 of them reading and one writing. It makes no difference whether I have a 4-drive array or an 8-drive array; the time is about 1TB / 70MB/sec = ~3.9 hours. Eventually I'd be limited by bandwidth, but even in the worst case, if 8 drives exactly saturate it, 16 drives would rebuild at 35MB/sec and take 8 hours.

I am gonna test this as well.
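Same back-of-the-envelope math in one line, if anyone wants to plug in their own numbers (1TB rewritten onto the replacement drive at a sustained ~70MB/sec, assuming the other drives can feed it that fast):

Code:
# hours to rewrite 1TB (~1,000,000 MB) at 70 MB/sec
$ echo "scale=1; 1000000 / 70 / 3600" | bc
3.9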
 
Array creation: RAID 6 took 12 hours. That was half done in the foreground, then a reboot, and half in the background. Don't ask why!!

The array is formatted ext3, with a 64K block size and 512-byte sectors.
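For reference, the usual recipe for lining ext3 up with a 64K stripe looks something like the following; the device name and mount point below are just placeholders, not necessarily what I used:

Code:
# 64K stripe / 4K filesystem blocks = stride of 16 (assumes the array shows up as /dev/sdb)
mkfs.ext3 -b 4096 -E stride=16 /dev/sdb1
mount /dev/sdb1 /backup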

Bonnie++

Code:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
superserver   8096M 83385  96 442763  82 152271  30 90859  97 527670  51 423.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
superserver,8096M,83385,96,442763,82,152271,30,90859,97,527670,51,423.1,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
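
The exact bonnie++ flags aren't shown above; output in that format generally comes from an invocation along these lines, with -s set to roughly 2x RAM and -d pointed at the array's mount point (placeholder below):

Code:
# 8096MB test size, 16 (x1024) files for the create tests, run as root
bonnie++ -d /backup -s 8096 -n 16 -u root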

A few quick dd benches:

Code:
[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1M count=1K; sync; date
Tue Jun 23 08:05:09 EDT 2009
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.84414 s, 582 MB/s
Tue Jun 23 08:05:13 EDT 2009
[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1M count=10K; sync; date
Tue Jun 23 08:05:29 EDT 2009
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 26.8134 s, 400 MB/s
Tue Jun 23 08:05:58 EDT 2009
[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1M count=2K; sync; date
Tue Jun 23 08:06:06 EDT 2009
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 4.50309 s, 477 MB/s
Tue Jun 23 08:06:14 EDT 2009
[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1k count=1M; sync; date
Tue Jun 23 08:06:43 EDT 2009
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 4.60902 s, 233 MB/s
Tue Jun 23 08:06:49 EDT 2009
[root@superserver backup]#

Performance is good IMO; I bet their 1.2GHz card really screams! If I can squeeze it in I'll do some RAID0 and maybe some ext4 tests.
 
OK, got a couple of RAID0 and ext4 runs in:

Code:
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
superserver   8096M 93106  98 631239  73 166612  30 92086  96 451519  43 705.9   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
superserver,8096M,93106,98,631239,73,166612,30,92086,96,451519,43,705.9,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++


[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1M count=1K; sync; date
Tue Jun 23 10:38:00 EDT 2009
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.22964 s, 873 MB/s
Tue Jun 23 10:38:02 EDT 2009
[root@superserver backup]# date; dd if=/dev/zero of=test.file bs=1K count=1M; sync; date
Tue Jun 23 10:38:13 EDT 2009
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 4.36233 s, 246 MB/s
Tue Jun 23 10:38:18 EDT 2009
 
RAID 6 init, done completely in the foreground, took 5.5 hours.

Running Fedora 11, the card was recognized by the kernel out of the box and no manual driver install was required. The same should be true of any distro running kernel 2.6.25 or greater.
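If anyone wants to confirm what picked the card up, it should be the stock in-kernel hptiop module; a quick sanity check looks like this (exact output will vary by system):

Code:
# verify the kernel's own HighPoint IOP driver is loaded and bound to the card
lsmod | grep hptiop
dmesg | grep -i hptiop
lspci | grep -i highpoint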
 
FYI, you can also use time to do the same thing... it'll give you times to thousandths of a second and is a little clearer than comparing timestamps.

Code:
$ time ( sudo dd if=/dev/zero of=bigfile bs=1M count=1k ; sync )
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.63079 s, 658 MB/s

real 0m5.729s
user 0m0.000s
sys 0m1.760s
$
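Another option with a reasonably recent GNU dd is to let dd do the flush itself via conv=fdatasync, so the MB/s figure it prints already includes the time to get the data onto the disks:

Code:
$ dd if=/dev/zero of=bigfile bs=1M count=1k conv=fdatasync

That avoids having to account for the separate sync when reading the timing.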
 
Quick update: I've been asked off-forum a number of times for a status update on this card/setup.

This setup has been live in production for 5 months now (zero reboots, 100% uptime) without a hiccup. Performance remains good, with no card or driver issues; so far, perfect!
 
cool, good to hear

RAID 6 is definitely the way to go today over RAID 5.

I am looking at my RAID 5 array with 1 hot spare and trying to fit a new card and new drives into my budget so I can offload my data and rebuild as a RAID 6 array on a PCIe bus.
 
Great to find some information about these cards; there are so few reviews of mid/high-end RAID controllers.

I've been thinking about getting a 3540 for one of my older servers. It's the same card except for the number of ports, right?

Ah, I see the 3540 has less cache.
 