LSI RAID Controller help, 2308 chip on-board ASRock Extreme 11 motherboard is slow!

Try the IT firmware and a soft RAID in Windows; that'll match the test done with the M4s, since that's how it seems they did their testing.

Ya, I was thinking about trying the IT firmware. Don't really know what that does. There is not much on the net about that mode. Just wondering: if I flash to the IT mode firmware, do I need a new BIOS too, and can I switch back to IR RAID if needed? Don't want to "brick" the board, if you will. :D
 
IT mode takes away all the RAID functionality and just makes it an HBA; you're then relying on whatever OS you're running to do any and all RAID functionality. You "should" be able to flash back and forth. I think under the downloads for the 9207 and 9217, the last option downloads both the IT and IR firmware along with the tools. The 9217 is labeled IR and the 9207 is labeled IT.
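If you want to check which firmware (IR or IT) is actually on the chip before and after flashing, the sas2flash utility bundled in those downloads can list it. A minimal sketch, assuming the DOS build of the tool (the executable name differs between the DOS, EFI and Linux versions):

sas2flsh -listall (lists each LSI controller it finds along with its firmware version)
sas2flsh -c 0 -list (more detail for controller 0, including the firmware product ID, which shows IR vs IT, and the SAS address)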
 
Why use JBOD when they advertise RAID 0?

Found this on JBOD at http://www.msexchange.org/articles_...lly-is-jbod-how-might-used-exchange-2010.html
Disadvantages of JBOD

No hardware increase in drive performance
There is an argument that JBOD can actually affect overall performance where multiple drives are in play, as it is more difficult for the drives to be used sequentially.

No redundancy
This is a major limitation of JBOD – if you lose the disk (in a single spindle JBOD) or one of the disks (using multiple drives) – you are heading back to your backups. If you have no backups then the data is gone!

Just on the basis of those two disadvantages, you might be asking the question – why consider a JBOD implementation in an Enterprise Exchange environment at all?

Well prior to Exchange 2010 many would agree with you wholeheartedly - however in Exchange 2010 the product team have managed (in certain configurations) to make JBOD a cost effective storage option.

RAID0 as a test under mdadm, to see what the actual hardware is capable of. I thought that was what the OP wanted to see...?
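For anyone wondering what that test amounts to, it's only a couple of commands. A minimal sketch, assuming a Linux live environment where the eight SSDs enumerate as /dev/sdb through /dev/sdi (adjust device names and chunk size to your setup):

mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=64 /dev/sd[b-i]
cat /proc/mdstat # confirm the array assembled
hdparm -t /dev/md0 # quick-and-dirty sequential read check
mdadm --stop /dev/md0 # tear the array down when finished

That takes the LSI firmware's RAID stack out of the equation and shows roughly what the drives plus the controller's raw bandwidth can do.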
 
Here another user reports speed problems with the Extreme 11 LSI RAID:

Using 8x 1TB mechanical drives on an Adaptec RAID 10:

[Benchmark screenshot]


And on the Extreme 11 LSI:

[Benchmark screenshot]


That's like a 70% performance drop. There is definitely something wrong with this LSI chip on the Extreme 11. :mad:
 
I just tested with IOMeter using the same settings as in that Hexus video from Computex, in which ASRock was getting 3500 MB/sec on 1MB seq read. I can only get 2100 MB/s with the same settings in IOMeter. Not sure how they got speeds that fast.
 
I do not know if this is the case on this particular board (I have one, but haven't loaded up the SAS ports with SSDs), but ASRock in the past has been guilty of overloading the connections of some of their add-on controllers, and of sticking some of the controllers on a secondary PHY chip, which is less efficient than lanes that come directly off the CPU or PCH. For example, sticking 4 USB3 ports, a Marvell eSATA controller and a x1 PCIe slot on just an internal x1 branch. In this particular case, the SAS chip is more a checklist item where they expect most people will just use them as SATA ports. Don't expect the stars out of this particular LSI controller for performance.
 
3500MB/sec? Did you slip a decimal point somewhere?

No, watch the meter with IOMeter up at 3:40. That Extreme 11 is getting 3500 MB/sec in seq read.

I do not know if this is the case on this particular board (I have one, but haven't loaded up the SAS ports with SSDs), but ASRock in the past has been guilty of overloading the connections of some of their add-on controllers, and of sticking some of the controllers on a secondary PHY chip, which is less efficient than lanes that come directly off the CPU or PCH. For example, sticking 4 USB3 ports, a Marvell eSATA controller and a x1 PCIe slot on just an internal x1 branch. In this particular case, the SAS chip is more a checklist item where they expect most people will just use them as SATA ports. Don't expect the stars out of this particular LSI controller for performance.

I hear ya, but that wouldn't explain the speed they were getting in that video at Computex nor their 3.8 GB/s speed claim.
 
Try the IT firmware and a soft RAID in Windows; that'll match the test done with the M4s, since that's how it seems they did their testing.



ERROR: Cannot Flash IT Firmware over IR Firmware!
 
I have eight 128 GB OCZ Vertex 4 SSDs (firmware 1.5) set up in RAID 0.

This is the very poor speed I am getting:

Question for you: Did you bother to test the performance of a single drive connected to the Intel motherboard SATA3 port before making assumptions about what 8 of those drives should be able to achieve with controller-based RAID0? Based on the benchmarks I see on the net, the 128GB Vertex 4 is going to do sequential reads at around 300MB/s. 300MB/s x 8 = 2400MB/s. So the benchmark you consider "very poor speed" in the OP is about what I'd expect. Keep in mind RAID0 doesn't scale linearly: more drives in an array = more latency and overhead for the controller. Also, comparing your results to what ASRock showed at a convention is pointless without first understanding their test configuration -- based on their numbers they absolutely did Windows-based software RAID0, which is kinda cheating and misleading, since it's not showing the performance of the controller's RAID stack.

Ya, I was thinking about trying the IT firmware. Don't really know what that does. There is not much on the net about that mode.

There are a thousand threads on IT mode and crossflashing on this forum; it comes up every single day. But first ask yourself if you really want to run Windows-based software RAID0 rather than controller-based RAID0, otherwise you'll just be flashing back to IR anyway.
 
Question for you: Did you bother to test the performance of a single drive connected to the Intel motherboard SATA3 port before making assumptions about what 8 of those drives should be able to achieve with controller-based RAID0? Based on the benchmarks I see on the net, the 128GB Vertex 4 is going to do sequential reads at around 300MB/s. 300MB/s x 8 = 2400MB/s. So the benchmark you consider "very poor speed" in the OP is about what I'd expect. Keep in mind RAID0 doesn't scale linearly: more drives in an array = more latency and overhead for the controller. Also, comparing your results to what ASRock showed at a convention is pointless without first understanding their test configuration -- based on their numbers they absolutely did Windows-based software RAID0, which is kinda cheating and misleading, since it's not showing the performance of the controller's RAID stack.



There are a thousand threads on IT mode and crossflashing on this forum; it comes up every single day. But first ask yourself if you really want to run Windows-based software RAID0 rather than controller-based RAID0, otherwise you'll just be flashing back to IR anyway.

I'm not sure where you read a Vertex 4 128GB only doing 300 MB/sec. I don't think anyone would buy that drive, least of all me.

http://www.xbitlabs.com/articles/storage/display/ocz-vertex-4-128gb-256gb_3.html#sect2

[Screenshot: cdm-1.png]


And yes, I have tested a single drive and it does right around 500 MB/sec.
 
Got done talking with LSI tech support. According to them the 2308 chip is one of their newest chips and it is no slouch. Without external RAM it can do 600,000 IOs/second and should not be running nearly this slow. They included this link:

http://www.lsi.com/downloads/Public/SAS ICs/SAS2308 product brief.pdf

Quote from LSI tech: "I know Vertex and LSI had some issues in earlier stages and I do believe the engineers from both LSI and Vertex have worked on performance as best as they can."

That doesn't sound too encouraging... I may have to order a couple of M4's to test them out and see if it is indeed the LSI RAID chip not working properly, or some conflict between the Vertex 4 and the LSI chip not playing nice as originally suspected.
 
Like I said above, there can be a vast difference between the theoretical performance of a controller and the capabilities of that same chip based on the implementation chosen by a third party.
 
Got done talking with LSI tech support. According to them the 2308 chip is one of their newest chips and it is no slouch. Without external RAM it can do 600,000 IOs/second and should not be running nearly this slow. They included this link:

http://www.lsi.com/downloads/Public/SAS ICs/SAS2308 product brief.pdf

Quote from LSI tech: "I know Vertex and LSI had some issues in earlier stages and I do believe the engineers from both LSI and Vertex have worked on performance as best as they can."

That doesn't sound too encouraging... I may have to order a couple of M4's to test them out and see if it is indeed the LSI RAID chip not working properly, or some conflict between the Vertex 4 and the LSI chip not playing nice as originally suspected.

If the question is "is the LSI controller chip defective?", then no; you've probably just got a learning curve ahead of you in understanding the differences between white papers and theoretical maximums and actual real-world performance. Same as people complaining that their 1Gbps NIC isn't giving them 1Gbps throughput. It's called overhead, and it's not discussed in white papers as it tends to vary depending on what you're doing or how you use the device. Just like no two benchmark tools are 100% alike or test 100% the same thing, no two RAID implementations are alike; there's overhead, latency and inefficiency, and there are variables and modifiers that need to be experimented with and tweaked to fit a given usage scenario - stripe size, queue depth, read/write caching, read-ahead, etc.

You might also look and see if FastPath is offered for that controller, since it's an SSD optimization on supported LSI controllers. But for pure throughput, and to bypass the RAID stack, you'll get a little closer to theoretical performance with host/software-based RAID0 as mentioned. If I were you I'd create a thread at xtreme-systems, where there are a lot more users per capita into benchmarking SSDs on RAID controllers, including the guy that actually wrote Anvil Utilities, and Computurd, who's pretty up on the LSI stuff.
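If you do start tweaking stripe size and queue depth, it helps to have something scriptable to sweep them with. fio isn't what ASRock or the OP used (they used IOMeter and AS-SSD), but as a rough sketch, assuming the array is mounted as E: on Windows and using a throwaway test file:

fio --name=seqread --filename=E\:\fio.tmp --size=8G --rw=read --bs=1M --direct=1 --ioengine=windowsaio --iodepth=32 --runtime=60 --time_based

Re-run it with --iodepth=1/4/16/64, and with --bs=4k --rw=randread, to see where the controller's RAID stack drops off compared to the software stripe.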
 
Dude, that isn't possible. 3500MB/sec? That's like 28Gb/sec. Not. Possible. Unless it's caching in RAM or something?
 
I do not know if this is the case on this particular board (I have one, but haven't loaded up the SAS ports with SSDs), but ASRock in the past has been guilty of overloading the connections of some of their add-on controllers, and of sticking some of the controllers on a secondary PHY chip, which is less efficient than lanes that come directly off the CPU or PCH. For example, sticking 4 USB3 ports, a Marvell eSATA controller and a x1 PCIe slot on just an internal x1 branch. In this particular case, the SAS chip is more a checklist item where they expect most people will just use them as SATA ports. Don't expect the stars out of this particular LSI controller for performance.

I disagree somewhat; the SAS chip was always supposed to be there with the X79 chipset, it's not just an idea ASRock had. As for the implementation, I don't know how it's done, but I'm wondering if the problem doesn't come from his 4 graphics cards sucking up too many lanes. It's not supposed to happen with the chips they added to make 4x16 PCIe, but I don't know, it wouldn't surprise me.
 
I disagree somewhat; the SAS chip was always supposed to be there with the X79 chipset, it's not just an idea ASRock had. As for the implementation, I don't know how it's done, but I'm wondering if the problem doesn't come from his 4 graphics cards sucking up too many lanes. It's not supposed to happen with the chips they added to make 4x16 PCIe, but I don't know, it wouldn't surprise me.

Yes, SAS was originally supposed to be part of the X79 chipset (and integrated into the PCH), but relatively early on it was found to be unreliable due to phantom SAS devices appearing and disappearing and random stability issues jumping from port to port. The silicon for SAS is still in the PCH, just disabled in almost all implementations (except ECS, which has a hacked BIOS floating around that did enable it on their X79R board). That said, this particular implementation doesn't use the PCH; it uses an additional LSI controller shoehorned in. As I mentioned above, this could be a lane contention issue. With an IB processor, the X79 platform offers 40 lanes from the CPU and another 8 from the PCH. If you have 32 for the 4 PCIe vids and another 8 for the 5th PCIe slot, that leaves 8 for everything else, and depending on how that is allocated among all the other controllers (SAS, USB3, audio, dual LAN, two different Marvell SATA chipsets, IR port, COM port, FireWire port, PS2 ports, etc.), that could put a crimp in some of that functionality.
 
I do not know if this is the case on this particular board (I have one, but haven't loaded up the SAS ports with SSDs), but ASRock in the past has been guilty of overloading the connections of some of their add-on controllers, and of sticking some of the controllers on a secondary PHY chip, which is less efficient than lanes that come directly off the CPU or PCH. For example, sticking 4 USB3 ports, a Marvell eSATA controller and a x1 PCIe slot on just an internal x1 branch. In this particular case, the SAS chip is more a checklist item where they expect most people will just use them as SATA ports. Don't expect the stars out of this particular LSI controller for performance.

This is what I think. I truly believe you're not getting enough lanes somehow (that was my original post, and I stand by it now that someone agrees with me).

Also, to the above question: FastPath is out, that is only for MegaRAID; this is just an HBA.
 
Some of this stuff is pretty basic. I know of overhead, latencies, all of that jazz.

Firstly, a 128GB Vertex 4 can pull down 500 MB/sec seq read on its own. I am only getting 2100 MB/sec with eight of them in RAID 0. That is roughly a 50% loss in speed. That must be some overhead!

Eight drives in RAID zero on the LSI chip are getting lower 4k-64Thrd scores than a single 128 GB Vertex 4. That makes absolutely zero sense and is definitely indicative of an issue with the LSI chip or its implementation.

FastPath is not offered on this controller. 3500 MB/sec seq read is completely possible. It's right there in the ASRock Computex Extreme 11 video showing IOMeter using eight drives. That is far, far higher than any number I get. They achieved that somehow, and I'd like to know how. Seq read is seq read; it's not very complicated.

The graphics cards connected to the two x16 CPU links have nothing to do with this. The LSI chip has its own dedicated x8 PCI-E 3.0 link to the CPU. A JBOD test was done. Seq read/write and 4k read/write all stayed the same. The only thing that increased was 4k-64Thrd using the JBOD/Win 7 stripe config.

The X79 PCH is not connected to the CPU by PCI-E 3.0 like the GPU slots are; it is connected via DMI 2.0.

[Image: X79_blockdiagram.jpg]


All ASRock did was take an x8 PCI-E 3.0 link from the GPU lane setup and connect it to the LSI controller. That leaves two x16 links to the CPU, one for each PLX chip. There should be plenty of bandwidth; an x8 PCI-E 3.0 link is good for roughly 7.9 GB/s raw (~985 MB/s per lane).

Anyways, with write speeds of 2600 MB/sec, the bandwidth is already quite a bit above the 2100 MB/sec of read. So clearly there is no bandwidth issue limiting seq read; there are other factors at play. And I am not satisfied that I should just accept this low performance for one reason or another, because it doesn't add up. There is a flaw with the config and I intend to find it. I am leaning towards some issue with the LSI controller not playing right with the Vertex 4s, as ND40 previously mentioned.
 
6x LSI Hardware RAID and bootable:

[Screenshot: 6xLSIWriteCacheOff.png]



6x JBOD Win 7 Stripe volume:

[Screenshot: 6xV4JBODWin7Stripe.png]
 
welp, definitely not the drives

or the controller even

LSI will definitely want to hear about this one, maybe their raid0 is screwed up somehow. they need to fix that 4k64
 
So you think the hardware RAID option should be just as fast as the JBOD software RAID? I was wondering why the 4k-64Thrd is basically the same as a single drive's, or slower, in hardware RAID with the LSI and doesn't increase.
 
So I put Win 7 on a WD 10K Raptor on the Intel X79 port so that I can freely change the 8x Vertex 4 LSI RAID setup.

This is what I get when using the LSI hardware RAID with the boot-up config utility (not selected as a boot device and no Windows installed):

[Screenshot: 8xLSIRAIDWriteCacheOff.png]


Performance is pretty much as poor as ever.



Now for what I get when the drives are separate volumes and loaded into Win 7 as JBOD and a software stripe:

[Screenshot: 8xLSIJBODWin7WriteCacheOff.png]


So obviously there has to be something wrong with the LSI hardware RAID, as especially those 4K numbers at 64 queue depth are just horrible (less speed than a single drive).

Thoughts on what can be wrong with the LSI hardware RAID? Both tests are obviously with the same LSI chip, drives and firmware, so not sure why the large difference.
 
Vega, did you try to measure speed by doing 2 SSDs in RAID 0 on the SATA 3 Intel ports and then 2 on the LSI, separately? This way we can see for sure if it's an LSI chip problem. I'm new to this RAID stuff but trying to learn. I still didn't get my SSDs yet, because I'm not sure if this is a Vertex 4 issue or not.
 
Ya, earlier in the thread I tested 2x, 4x, 6x on the LSI controller and 2x on the Intel (all that you can really do at 6 Gb/s speed).
 
What does "TLDR" version mean?

If we don't use the hardware RAID controller, which controller should we use? The Windows one?
 
You disable hardware RAID and let Windows stripe all the volumes together. Ends up being quite a bit faster. The downside: it's no longer a bootable volume that you can put your OS on.
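For anyone who hasn't done it before, this is roughly what the Windows stripe setup looks like from diskpart (Disk Management's "New Striped Volume" wizard does the same thing). A sketch only; the disk numbers and drive letter are examples, assuming the eight SSDs show up as disks 1-8 with nothing on them:

diskpart
list disk
select disk 1
convert dynamic
(repeat select disk / convert dynamic for disks 2 through 8)
create volume stripe disk=1,2,3,4,5,6,7,8
assign letter=E
format fs=ntfs quick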
 
Ok, so now we need 10 disks instead of 8. I wish they would have explained in that Computex video how they achieved those scores.

Another question I have is: are those 8 disks recognized as 1 large disk with the Win stripe? So if you have 8 disks at 128 GB, is it 128 GB x 8 = ~1 TB drive, or no?

So the Vertex 4s are OK then, no problems with the disks, correct?

Sorry for asking all these newbie questions. I have no idea how to set this up, but I'm gonna have to pull it off somehow.

EDIT: I'm learning :) http://technet.microsoft.com/en-us/magazine/ff382722.aspx
 
So, are you convinced it's an entry-level chip after all?

I would be interested nonetheless to see a comparison of the hardware RAID versus Windows RAID with more real-use benchmarks.
 
So, are you convinced it's an entry-level chip after all?

I would be interested nonetheless to see a comparison of the hardware RAID versus Windows RAID with more real-use benchmarks.

It's a shame they even go through the effort of putting this on the mobo. May as well just buy a lower-tier motherboard and get an actual MegaRAID card.
 
It performs comparably to the 9265-8i; it's just that the 9265-8i has FastPath, which is why the 4k-64Thrd numbers are higher.

http://www.ocztechnologyforum.com/f...03448-8-x-Vertex-4-128-GB-on-LSI-9265-i8-raid

That's a good point. The 9265-8i ($900 RAID card) scores with eight of the exact same drives/FW as mine:

[Screenshot: 8xvertex4-128GB-fastpatch-4.jpg]


And mine:

[Screenshot: 8xLSIJBODWin7ITASSSD.png]



Granted, his is hardware RAID with FastPath, I think, and mine is software RAID without it. Seems to do pretty well when not in hardware RAID mode.


It's a shame they even go through the effort of putting this on the mobo. May as well just buy a lower-tier motherboard and get an actual MegaRAID card.

But then I wouldn't be able to run my 4-way GPU setup. I'm trying to have the best of both worlds. :)
 
Vega, your graph shows only a 119.24 GB drive; where are the other 7 disks? I thought the Win 7 stripe combines them to look like 1 disk, like the graph above yours. I'm totally confused here. So if you want to download files or programs, do you have to specify which disk out of the 8 SSDs you want them to go to?

I'm on the LSI site, but I can't find the IT firmware to download. Can you post the link where you downloaded this specific file from?

Are you using the Intel RAID for booting or did you just install 1 SSD?
 
Heya, AS-SSD will show (in my case) eight of the same SSDs, all with the same drive letter. Select any of them and you test the whole array. Don't worry, Windows sees the whole striped volume in "My Computer" and you get the full combined volume size of all the disks together.

http://www.lsi.com/support/Pages/Do...Host Bus Adapters&productname=LSI SAS 9217-8i

That link has the firmware for both the IR and IT versions and the flash software. It also has the .pdf instructions, but all you need to do is follow the 6Gb/s section here:

http://kb.lsi.com/KnowledgebaseArticle16266.aspx

Heed the warning of step #5 and make sure everything is set up correctly and the system is stable before you attempt this. You could brick the LSI chip if the computer crashes, loses power, or you erase the firmware and the new flash doesn't take. Mine worked fine though, and I have a system UPS just in case. ;)
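For reference, the procedure in that KB article boils down to something like the sequence below. Treat this as a rough sketch only; the firmware and boot ROM file names here are placeholders for whatever is actually in the 9207/9217 package, so double-check every step against the PDF before running anything:

sas2flsh -listall (note the controller's SAS address before you start)
sas2flsh -o -e 6 (erase the flash - this is the point of no return from step #5)
sas2flsh -o -f 9207-8i-IT.bin (write the IT firmware image)
sas2flsh -o -b mptsas2.rom (optional: flash the boot ROM if you still want the config utility at POST)
sas2flsh -o -sasadd 500xxxxxxxxxxxxx (restore the SAS address you noted at the beginning)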
 
Granted, his is hardware RAID with FastPath, I think, and mine is software RAID without it. Seems to do pretty well when not in hardware RAID mode.

You can still have a hardware RAID setup with the 2308, just different hardware doing it; slightly slower, yes, but better than a 2-drive setup on the Intel controller.
 