HP SAS Expander Owner's Thread

I've installed WHS on my system for testing and did not see any such messages....

What motherboard do you have in your system?
 
Try disconnecting the primary link...perhaps speeds will quadruple. ;)

Lol.

I'm going to be testing with a single link here when I'm back in town. Once I get it performing at the speeds that it should (got filebench working, btw!!), I will see if I can get the second link in there somehow...
 
Well, I'm less than impressed with the HP P411 I have purchased. The management software for it is absolutely awful, and on the first boot of my system with the P411, the SAS expander, and 3 of my drives to test with (all with data on them), the P411 kindly decided to create a RAID 1 array out of two of the 500GB drives without any intervention from me.

I'm looking at replacing the P411 with something that can pass drives direct to the OS without having to create an array or put each individual drive into a RAID 0 array. Should I get the ARECA ARC-1300-4X or is there a better alternative for a similar price? If I get an answer in the next couple of hours I could order it and get it delivered tomorrow.

Edit: Does the ARC-1300-4X support online array roaming (and would this work with disk pass-through?) This would allow me to switch the drives on only when I need them.

Edit 2: I suppose my above question about array roaming doesn't really matter as I won't be using RAID; it should just work using the hot-swap functionality. Forgive my ramblings :)

Regards,
Degsy.
 
Last edited:
I wrote to the HP cciss devs; they replied:
>
> Hi cciss devs,
>
> I'm using cciss for a Smart Array P212, but it requires me to create a logical
> drive to access the disks. The only way I can think of to still use Linux
> software RAID 5 is to create a 1-drive RAID 0 logical drive per disk, then present
> the logical drives to mdadm. But this will erase the data on the disks.
>
> Is there a way to bypass the logical disk layer and access the physical disks?
> That way I could use mdadm to manage the existing array without backing up and
> restoring the data, and it would be possible to move the disks to another controller.
>
> Thanks,

>

No. It's a RAID controller, not a plain SAS HBA.
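
If you do end up living with the 1-drive RAID 0 workaround described in the quoted mail, it can at least be scripted. Here's a rough sketch using HP's hpacucli tool; the controller slot and physical drive IDs are placeholders (check "hpacucli ctrl all show config" for yours), and remember that creating logical drives destroys any existing data:

#!/usr/bin/env python
# Sketch: wrap one 1-drive RAID 0 logical drive around each physical disk
# on a Smart Array (cciss) controller, so the OS sees one block device per
# disk and mdadm can take over from there.
# Slot and drive IDs below are placeholders.
import subprocess

SLOT = "0"                                 # controller slot (placeholder)
DRIVES = ["1I:1:1", "1I:1:2", "1I:1:3"]    # physical drive IDs (placeholders)

for d in DRIVES:
    # WARNING: destroys existing data on the drive.
    subprocess.check_call([
        "hpacucli", "ctrl", "slot=" + SLOT,
        "create", "type=ld", "drives=" + d, "raid=0",
    ])

# The resulting /dev/cciss/c0d* devices can then be handed to mdadm, e.g.:
#   mdadm --create /dev/md0 --level=5 --raid-devices=3 \
#         /dev/cciss/c0d1 /dev/cciss/c0d2 /dev/cciss/c0d3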
 
SuperMicro AOC-USAS-L8i - when I had both ports from the controller connected to the expander, the speeds were terribly slow... unhooked the second link... doubled the speeds... I've only had the box going for a few hours now; I'll be doing some more in-depth testing with it over the next week or so.

No real reason to dual-link that Supermicro with the HP Expander, you're not going to saturate a single link with that LSI chip anyway. Not to mention very few if any third parties have tested or support dual linking with expanders, let alone the HP expander. Go ahead and test it but don't waste too much time, I don't think the implementation on the SM card knows how to deal with dual linking.

The only cards I'm aware of that can properly dual link are HP's own PMC-chipped raid cards, but I'm trying to push Areca to support it on current and future cards.
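
If you want the napkin math on why a single wide link is plenty for spinners, here's a rough sketch; it assumes 3Gb/s SAS lanes, 8b/10b encoding, and ~100MB/s sequential per 7200rpm drive (all round numbers, not measurements):

# Back-of-the-envelope single-link bandwidth check (all figures are rough
# assumptions, not measurements).
lanes = 4                    # one SFF-8087/8088 wide port = 4 lanes
lane_gbps = 3.0              # SAS 1.x signaling rate
encoding = 8.0 / 10.0        # 8b/10b line-encoding overhead
link_MBps = lanes * lane_gbps * encoding * 1000 / 8   # ~1200 MB/s

drive_MBps = 100             # generous sequential rate for a 7200rpm disk
print("single wide link: ~%d MB/s" % link_MBps)
print("drives to saturate it: ~%d" % (link_MBps // drive_MBps))

So you'd need roughly a dozen drives all streaming sequentially at once before a single link even becomes the bottleneck, and that's before the LSI chip's own limits come into play.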
 
Well, I'm less than impressed with the HP P411 I have purchased. The management software for it is absolutely awful, and on the first boot of my system with the P411, the SAS expander, and 3 of my drives to test with (all with data on them), the P411 kindly decided to create a RAID 1 array out of two of the 500GB drives without any intervention from me.

I'm looking at replacing the P411 with something that can pass drives direct to the OS without having to create an array or put each individual drive into a RAID 0 array. Should I get the ARECA ARC-1300-4X or is there a better alternative for a similar price? If I get an answer in the next couple of hours I could order it and get it delivered tomorrow.
<snip>

Maybe I should've yelled a louder warning in the OP that HP's RAID cards aren't ideal for home use: too many shortcomings and too few options.

You haven't said which O/S you want to run and whether you want a striped array or not; those are the first questions in determining which HBA card to buy. If you ONLY want drives passed through to the O/S and don't plan on any striped arrays, then just get a non-RAID HBA like one of the Supermicro cards listed in the OP, or the Areca ARC-1300-4x.

EDIT: Open-box ARC-1300-4x on newegg right now: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151060R
 
Last edited:
No real reason to dual-link that Supermicro with the HP Expander, you're not going to saturate a single link with that LSI chip anyway. Not to mention very few if any third parties have tested or support dual linking with expanders, let alone the HP expander. Go ahead and test it but don't waste too much time, I don't think the implementation on the SM card knows how to deal with dual linking.

The only cards I'm aware of that can properly dual link are HP's own PMC-chipped raid cards, but I'm trying to push Areca to support it on current and future cards.

Yeah, I am not going to dwell on it too long, maybe a couple hours of fiddling, but I am sure the low speeds I was seeing with both links connected were due to some collision issue with the card.
 
Maybe I should've yelled a louder warning in the OP that HP's RAID cards aren't ideal for home use: too many shortcomings and too few options.

You haven't said which O/S you want to run and whether you want a striped array or not; those are the first questions in determining which HBA card to buy. If you ONLY want drives passed through to the O/S and don't plan on any striped arrays, then just get a non-RAID HBA like one of the Supermicro cards listed in the OP, or the Areca ARC-1300-4x.

EDIT: Open-box ARC-1300-4x on newegg right now: http://www.newegg.com/Product/Product.aspx?Item=N82E16816151060R

Odditory, that's okay. I thought I was getting a bargain on the card (6Gb/s SAS, arrays, the ability to update the expander, etc.), but I hadn't really decided what I wanted from a card before I purchased it.

At least I know now for certain that an HP card isn't right for me. I like the look of the Areca cards and, from what I've been reading in this thread, the level of support that they give. My plan is to pass disks directly to the OS and use something like disParity to sort out the parity.

I'll hang onto the card for now in case there's a firmware update for the expander, though I suspect the updates won't be as frequent now. My offer still stands: if anyone in the UK needs their expander updated, I'll be happy to do it.

Regards,
Degsy.
 
degsy, why don't you get an HP SC44Ge PCIe SAS and SATA storage controller from eBay? They do appear occasionally in the UK at a pretty decent price. I have done just that... of course, I haven't tested it with the HP Expander yet as I haven't bought one.

I am going to use the HP SAS controller with an additional 4 disks first, which will give me time to get an HP Expander eventually.

The HP SC44Ge PCIe SAS and SATA storage controller is a rebadged LSI SAS3442E-R, though it probably uses a different firmware (not sure whether it can be flashed with the LSI firmware or not).

Someone on this thread mentioned that their LSI SAS3442E-R is working well with the HP Expander (not sure what mobo, though).

The LSI SAS3442E-R is pretty expensive in the UK; see here:
http://www.span.com/product_info.php?products_id=6745

The HP SC44Ge PCIe SAS controller that I have purchased looks like an exact copy of the LSI on the above web site.

Edit:
You can also find the Areca ARC-1300-4X in the UK; see here:
http://www.scan.co.uk/Products/Areca-ARC-1300-4X-PCIe-4port-SAS-HBA-with-External-Connector
 
Last edited:
With a stock of 1 and a warranty of 180 days, it seems like you may be getting an old-stock item there, maybe a yellow card.
 
Hey Guys,
I currently have an LSI SATA/SAS 9211-4i 6Gb/s and was looking into getting the HP Expander. Does anyone know if this card will work with my LSI? Thanks.

It works with the 9211-8i; I see no reason why the 9211-4i shouldn't work as well.
 
Look up forum member SynergyDustin. Great service and the best price when I bought mine.

It's true, we cannot export them directly. We're sold out of the expanders as of this afternoon. When we get our next order confirmation I will update folks here. Hopefully our cost stays consistent.

PM me to place an order and get to the top of the list if you need one.
 
No real reason to dual-link that Supermicro with the HP Expander, you're not going to saturate a single link with that LSI chip anyway. Not to mention very few if any third parties have tested or support dual linking with expanders, let alone the HP expander. Go ahead and test it but don't waste too much time, I don't think the implementation on the SM card knows how to deal with dual linking.

The only cards I'm aware of that can properly dual link are HP's own PMC-chipped raid cards, but I'm trying to push Areca to support it on current and future cards.

What kind of card would I need to run 10 2TB drives in RAID?
 
PERC 6/i: FAILURE

"Attached Enclosure doesn't support in controller's Direct mapping mode.
Please contact your system support.
System halted due to unsupported configuration."

Direct quote from the PERC 6/i BIOS init. F/W v. 1.0 (HP SAS EXPANDER)
F/W v. 6.1.1-0047 (PERC)

Won't boot past this screen. Chalking the two PERC 5 & 6 "internal" cards up as failures, even though the 6/i should support 16 drives on its 8 internal ports... possibly only in a special Dell system or enclosure? Unknown.

PERC 6/E for sure does work though!

Please update the first post!

Thanks!
 
Anyone know if I can use the Areca ARC-1212 with this HP expander just fine? Looking for a sub-$300 RAID adapter to go with it, and I've always wanted to try Areca.
 
PERC 6/i: FAILURE

"Attached Enclosure doesn't support in controller's Direct mapping mode.
Please contact your system support.
System halted due to unsupported configuration."

Direct quote from the PERC 6/i BIOS init. F/W v. 1.0 (HP SAS EXPANDER)
F/W v. 6.1.1-0047 (PERC)

Won't boot past this screen. Chalking the two PERC 5 & 6 "internal" cards up as failures, even though the 6/i should support 16 drives on its 8 internal ports... possibly only in a special Dell system or enclosure? Unknown.

PERC 6/E for sure does work though!

Please update the first post!

Thanks!



I would be willing to bet they would work if you flashed them with an LSI BIOS.
I did that with a PERC 5 when I had it, because Dell quit updating it.
 
Last edited:
Areca 1212 will not work. It says right on the product page that SAS expanders aren't supported.
 
Can you recommend a card in that price range that I should be looking at for use with this expander?
 
I would be willing to bet they would work if you flashed them with an LSI BIOS.
I did that with a PERC 5 when I had it, because Dell quit updating it.

I can try flashing a PERC 5/i I have lying around for this. Do you have a link to a compatible LSI firmware?
 
We're now taking orders on incoming HP expanders, and drop shipping direct from HP when possible as well. $235 shipped ground in the continental US.

Thanks!

Dustin
 
I'm going to go ahead and admit to not reading all 36 pages of posts.

However, I bet my question hasn't been asked yet.

2 years ago we purchased (at my suggestion) an EMC NS20 with a full tray of HP drives and a full tray of 1TB low-performance drives. This has been a great thing for us, as it allowed us to reduce our physical server sprawl from about 50 machines down to 5 beefy ESX-powered servers. We have also completely replaced several file servers by using CIFS on the Celerra.

However, we've realized we don't use _any_ of the Fibre Channel connectivity. In fact, it's 100% VMware over NFS and CIFS. We've considered using FC for our tape backup, but that's hardly enough of a "must have" to justify the SAN aspect.

This ran us a little more than $100,000.

Now we've just realized that our 3-year support is due April next year and it's going to be in the neighborhood of $20,000, which seems outrageous considering we could purchase a pretty tricked out one of these for about that much:

http://www.sgi.com/products/storage/servers/iss3500.html

So, as a kind of trial run at replacing the functionality of our EMC SAN with something like the SGI system (ok, and partly because I want to play), I'm about to suggest we set up a 36-drive testbed based on the equipment y'all are discussing here. I figure I'll run something like OpenFiler and try to put some spindles behind the NFS/CIFS storage, and we might just be surprised at the performance (though I understand it's not going to have the bulletproofness of the EMC SAN; my point is to testbed performance/storage density).
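
If the testbed happens, the first thing I'd run against the NFS mount from a client is a dumb sequential-write test, something like the sketch below (the mount path and file size are placeholders, and it's obviously no substitute for IOMeter or filebench):

#!/usr/bin/env python
# Crude sequential-write throughput check against an NFS mount.
# PATH and SIZE_MB are placeholders; the file size should exceed client RAM
# so the page cache doesn't flatter the numbers.
import os, time

PATH = "/mnt/testbed/bigfile.bin"    # placeholder NFS mount path
SIZE_MB = 4096                       # write 4 GiB total
CHUNK = b"\0" * (1 << 20)            # 1 MiB per write

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())             # make sure it actually hit the server
elapsed = time.time() - start
print("sequential write: %.0f MB/s" % (SIZE_MB / elapsed))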

Any thoughts?
 
WarlordBB:
We are in the same situation here at work, but sadly we must invest in enterprise-class devices because of warranties and support (bleh, I say!). Part of the write-up I did to attempt to justify more storage was to part out a 40TB (raw) storage system using the Norco, Areca, and HP SAS expander setup. While the Norco configuration was 1/10 the cost, the "if this fails, then what" risk was too high. The best part is, if you research the server hardware, OS + software, and hard drives that go into the enterprise systems, it's no different than something we can buy off the shelf. Five years ago, EqualLogic used desktop-class hard drives. They upgraded to the enterprise "Ultrastar" drives, and now they can sell 7TB of storage for $70K? I know R&D and software development make up 90% of the cost of enterprise storage systems, but the hardware is still the same stuff we can buy from Newegg or eBay.
Long story short: if you can talk your business into buying a "self-supporting array," then using off-the-shelf hardware for a Norco build is a great idea. A lot of the storage conferences I attend are full of vendors that offer their software as a product, but they charge $50K for the software while the hardware it runs on is a $2-3K build of crap. In most cases, a Win2k8 R2 box serving CIFS/NFS on great hardware will gain you more performance than some of these thin Linux builds with bulky software. The best part about going to the conferences is finding that the sales weasels know nothing about the hardware but can talk up a storm about the software that runs on it. "Oh, your 2010 release is running on a single Core 2 Duo, and instead of hardware RAID you are using JBOD with software RAID? And you want $50K for your entry appliance? LAUGH!"
 
Well, the SGI ISS3500 device I mentioned is considered Enterprise class, I think, though it does (kinda) fall into that category you mention of paying $$$ just for the sake of the name. It's still a LOT cheaper than what we have now, and they have lower entry-level support plans that make it reasonable.

Ultimately, like you, I think we'll have to opt for an Enterprise solution, but that's to be expected when you are an 8-figure business.

However, my "testbed" was meant to prove to myself that I can match the _performance_ and storage density of our current solution (plus I want work to buy it so I can play and then offer to buy it from them on the cheap afterwards :)).

So I was hoping someone here would be able to say something like, "you're not going to be able to match the performance of an EMC NS20 device or even an SGI ISS3500 because...".

Or maybe provide some insight such as having to use SAS drives to achieve that level of performance.

Basically, I don't mind buying 36 drives plus the small investment in something like an Areca 1680LP + 2x HP SAS Expanders + a Norco-based system to test this out, but I don't want to sink $20,000 into an SGI box just to find out that it's not going to cut it.

Especially since the homegrown storage server would be something I would definitely buy back from work (at an employee discount of course :)) so there's not a lot of risk to the company to prove something like this out.

But if the peeps here already know I'll be disappointed in the results... I'd just as soon save myself the time/effort.
 
Warlord - if you don't have the option of going to a solution like the one discussed here, there are still other options you may consider that run about 1/3 what EMC charges.

If you haven't already, you might consider solutions from NetApp or Isilon (still far more expensive than the "build your own" version, but they do come with the "warm blanket" support contract that enterprises so enjoy). We have a rather serious EMC bias where I work as well, but those two vendors are making inroads based on similar performance and far far lower costs.
 
myrison: I appreciate the advice. Actually, for the most part, my work will do what I suggest, just as they did with the EMC NS20. In fact, the owner actually suggested a "roll-your-own" solution himself back when he first heard the price tag for the EMC.

I'll be honest, I'm really hoping that no one here comes up with an "it's not going to work" answer because I'd love to try it and it would be awesome if the testbed wound up being the solution instead of just proving that something like the ISS3500 would be necessary.
 
Assuming the lack of a forthcoming "it's not going to work" answer here, I know we all look forward to hearing back the results of your testing! :D
 
Well, one thing that would help me decide is if anyone who knew about these kinds of things could give me pointers on what advantages the ISS3500 would have over a homegrown build by someone who really knew what they were doing. From what I understand of the ISS3500, the only advantage is the fact that the OS lives on built-in flash and it runs SGI's (supposedly) "unmatched" XFS file system.

If you had a homegrown server, built and sitting there with 36 drives in it waiting for an OS, what would you use? OpenFiler? With VMware over NFS and CIFS as your main priorities, what file system would you use? I hear ZFS is all the rage with the cool kids these days.

IOW, does someone with some experience with the homegrown variety have some suggestions on "best in class" setups?

For example, while certainly an interesting project, I'm sure no one would suggest I do what this guy did:

http://www.servethehome.com/category/the-big-whs-30-drive-whs/

and use Windows Server 2008 R2 as the Host. I assume I'd want to have whatever file server OS I'm using installed on the bare metal.

Ok, I think I've made the mental loop from "thinking about" to "planning to do". If history serves, the next step (one of my favorites) is very close: "ordering parts". :)
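
Since "ordering parts" is close, here's the rough usable-capacity math I'm staring at for a 36-drive build (the 2TB drive size and the specific layouts are just assumptions for illustration, not a recommendation):

# Rough usable-capacity comparison for a 36-drive build.
# Drive size and layouts below are assumptions for illustration only.
DRIVES = 36
TB = 2.0                                    # assumed drive size in TB

layouts = {
    "RAID 10 (18 mirrored pairs)":  DRIVES / 2 * TB,
    "6 x RAID-Z2 (6-drive vdevs)":  6 * (6 - 2) * TB,
    "4 x RAID-Z2 (9-drive vdevs)":  4 * (9 - 2) * TB,
    "RAID 6 + 2 hot spares (hw)":   (DRIVES - 2 - 2) * TB,
}
for name, usable in sorted(layouts.items(), key=lambda kv: -kv[1]):
    print("%-30s %5.0f TB usable" % (name, usable))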
 
For example, while certainly an interesting project, I'm sure no one would suggest I do what this guy did:

http://www.servethehome.com/category/the-big-whs-30-drive-whs/

and use Windows Server 2008 R2 as the Host. I assume I'd want to have whatever file server OS I'm using installed on the bare metal.

That's what I do as well.
It's no different than a bare-metal install as far as the disks are concerned. With Hyper-V you can give the VMs direct access.
 
"it's not going to work"

To hopefully help clear up some information about enterprise storage: the only advice that ever works is "demo it." Any company that stands by their product will let you demo the box for 30 days. Data Domain let us have two shelves for a few months, and EqualLogic lets us demo a shelf or two for a few months. We saved $300,000+ by getting a demo from Exagrid and learning that their product would not work in our infrastructure the way they advertised. (Inside joke, but Exagrid will not dedup more than one copy of notepad.exe, which is very fail.)

So to WarLordBB: I would get a demo of the product in and run some tests on it. Move a few VM LUNs to it and grind the crap out of it. Throw up a few test VMs running IOMeter, then pull a hard disk out of their array and see if it can rebuild on the fly or whether it degrades your servers to the point of impacting business.
The biggest struggle with a homegrown storage array is finding a full set of parts that are "endorsed" by VMware or your software vendors. If you are stuck with an enterprise solution, get something that is fully supported by your software vendors or you will be out of luck when a support call is made.

Most importantly, enterprise solutions have support that you must pay for and renew every year. Put that to the test as well: demo the box, call their support about something you broke, and make them fix it. EqualLogic won us over because they were able to fix a big issue (on the demo machine) within an hour. Normally demos get less priority with support, but they worked it hard. Their support also sent out a replacement drive from a local warehouse within three hours; the suspect disk hadn't even failed, they just felt it should be replaced because their software showed the drive wasn't keeping up.

I hope this helps with the business choice. As said above, if you do get to build one, please post your findings and include pictures!

And about the "it's not going to work": if we had purchased the Exagrid storage, it would have failed horribly. There's no 30-day return policy on something that expensive.
 
nicholasfarmer: thanks for your thoughts; that's good advice for anyone pursuing a major IT purchase. We did some of what you mention before buying into the EMC NS20 we have now. We researched and got quotes on about 8 competing platforms, and demoed boxes from Dell and a few others I've forgotten. Dell wanted to send us an EqualLogic demo too, but by then we knew we were going with VMware over NFS, which is what we still do today, and everything works _fantastic_ on all 50+ VMs spread over our 5 ESX hosts.

As far as VMware support goes, like I said, we're using VMware over NFS exclusively. I don't anticipate any compatibility issues.

As far as pictures... I'm a fan of the "pictures or it didn't happen" club so you can count on it.
 
So does anyone know how well these work with the new Intel SASUC8i cards under Solaris?

I don't know how the SASUC8i cards work in Solaris, but they're based on the same LSI chipset as the LSI SAS3442E-R I'm using, so the prognosis is probably good. Mine works perfectly; others have reported mixed results.
 