About Areca ARC-1680ix

What is the "Disk Write Cache Mode" set to? This is the write cache on the individual HDDs. If it is Auto and you have a BBU, then the Areca turns the caches off. If you want rebuild speed, you need to set it to Enabled and reboot. But then you risk losing the data in the HDDs' write caches if there is a power outage.
 
Do you mean "you don't have a BBU"?
 
Nope, I meant what I wrote. I guess you are getting confused between the Areca controller cache and the individual HDD caches. The Areca battery cannot preserve the individual HDD caches in the event of a power outage, so the Areca controller, with the Auto setting, takes the safe route and turns the individual HDD caches off if you have a BBU. If you do not have a BBU, the Areca assumes you don't mind living on the edge, and it leaves the individual HDD caches on. But I prefer to set it manually to Disabled rather than Auto, except when I am first initializing a RAID 6, in which case I don't care about data loss, so I set it to Enabled until the initialization is done.
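If it helps, here is that decision logic as a little sketch (illustrative Python only, my reading of the behavior, not Areca's actual firmware logic):

Code:
def effective_hdd_write_cache(mode: str, has_bbu: bool) -> bool:
    """My reading of 'Disk Write Cache Mode' -- illustrative, not firmware code.
    Returns True if the individual HDD write caches end up enabled."""
    if mode == "Enabled":
        return True        # fast, but cached writes die with the power
    if mode == "Disabled":
        return False       # safe: nothing sits in the drive caches
    # "Auto": the BBU only protects the controller cache, not the drive
    # caches, so with a BBU present the controller plays it safe and
    # turns the drive caches off
    return not has_bbu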
 
You are right, my mistake. I was confusing the two caches.
 
So folks say OCE with the Arecas is fast, but how fast? I just got a 1680ix-12 and am expanding a 7x2TB RAID-6 array to an 8x2TB RAID-6, and the migration seems to be going at roughly 1% per hour, so we're talking about 4 days to finish. Is that normal?
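Here's the back-of-the-envelope math behind that estimate (trivial Python, assuming the rate stays linear):

Code:
# Rough ETA for the migration, assuming the ~1%/hour rate holds
percent_per_hour = 1.0
total_hours = 100 / percent_per_hour    # 100 hours
print(total_hours / 24)                 # ~4.2 days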

For your reference, I did a test OCE from 19 x Hitachi 2TB in RAID 6 and added one drive for 20 total. It completed in about 25 hours on an Areca 1680. You definitely at least doubled your OCE time by changing stripe size, since that doubled the workload: the card actually performs two separate operations in sequence. As for why else it might've been slower than usual, you'd have to post a screenshot of your system config page to give us more clues.
 
Thanks, Odditory! Could you (or someone) explain what the "Modify Volume Set Attributes?" page that pops up after you hit Submit on Expand Raidset is for? I assume it gives you an opportunity to change the RAID level and/or stripe size, and has no relation to the actual volume set expansion, which must be performed afterward.

I'm re-initializing the array (with the same parameters), and after copying data from my backups I will start the expansion again tomorrow morning. If the rate isn't any faster than before I'll post my config.
 
Well, it's still slow. About 10% done in just under 6 hours. Unless the progress is non-linear and it suddenly speeds up at some point, I'm still looking at 2+ days for expanding a 7x2TB RAID-6 by another 2TB disk.

Rather than bother with screenshots, I've posted a static mirror of my 1680ix's web interface. I'd appreciate if folks could take a look and let me know if there's something I'm missing.
 
I do not see anything wrong with your settings. And the drives themselves do not appear to be showing a lot of errors. You have the latest (1.48) firmware. All fine.

Code:
2010-03-11 11:57:47   ARRAY1             Expand RaidSet
2010-03-11 11:57:47   ARC-1680-VOL#000   Start Migrating
2010-03-11 07:52:10   ARC-1680-VOL#000   Complete Init      007:16:31

I noticed that you completed a 7 hour init before you started your expansion. Was that a RAID 6 init? If so, 7 hours sounds about right for the size of your RAID 6.
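As a sanity check on that figure (Python sketch; assumes a foreground init touches the full surface of all seven drives):

Code:
# 7 x 2TB drives, init completed in 007:16:31
drives, size_tb = 7, 2
secs = 7*3600 + 16*60 + 31                  # 26,191 seconds
mb_per_sec = drives * size_tb * 1e12 / secs / 1e6
print(mb_per_sec)                           # ~534 MB/s aggregate
print(mb_per_sec / drives)                  # ~76 MB/s per drive -- plausible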

I do not know of anything you can do to speed up the migration, unless your theory about the stripe size selection is correct.

By the way, great job showing us your configuration. What program did you use to create the static mirror pages?

EDIT: One other wild guess. I've never enabled the HDD power management features on my Arecas. I cannot think of a logical reason why they would have an effect here, since the drives should be busy. But if it were me, just for the hell of it, I'd try disabling the power management to see if it makes a difference.
 
Well, as for the slow expansion: I tried disabling power management and even swapping motherboards... but it is what it is.

Right now, my 1680ix-12 is NOT seeing any drive connected to the onboard expander (SFF-8088 via HP expander is fine). I've used the serial port connection to verify that the onboard expander *does* have physical 3.0 Gbps links with no errors in its log.

The drives I've tried are a Hitachi 2TB, an Intel SSD, and even a WD RE3 500GB. I also played with the SAS Mux setting, but it hasn't had any effect.
 
Because "ix" 1680 cards have an onboard expander, and it relies on "SES2" to be Enabled for proper operation, and because the HP expander requires SES2 to be Disabled for proper operation that effectively knocks out the internal ports on the Areca card. Until Areca gets an HP expander to test and figures out the SES2 issue so it can be re-enabled with an HP expander attached, you're stuck with either using only the HP expander ports or only the internal SFF-8087 cards on the Areca card.

If I were you I'd forget about the internal ports on the Areca card for now and just use the ports on the expander. I'd also recommend emailing Areca support; a few of us have already registered the complaint about proper operation with the HP expander, and if they hear about it from more people it might put more fire under them: support <<AT>> areca.com.tw
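The conflict in a nutshell, as a quick sketch (my own summary in Python, not anything out of Areca's docs):

Code:
# My summary of the SES2 conflict -- not from Areca documentation
def usable_ports(ses2_enabled: bool) -> str:
    if ses2_enabled:
        return "internal (onboard-expander) ports work; HP expander misbehaves"
    return "HP expander ports work; internal ports are dead"
# Either way one set of ports is lost until Areca ships a firmware fix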
 
Ah, OK, thanks. I am sticking with the expander, I just wanted to figure out why the internal ports were not working. I will email them.
 
I'm pretty confident they'll develop a fix, since it really sounds like a communication/sync issue rather than a physical one, as evidenced by the fact that when you enable SES2 the Areca can usually see 1 or 2 drives appear as present on the HP expander.
 
I am plugging one of my ARC-1680ix cards into an ATX Chenbro CSSV-107AB server tower case with a Tyan S2866 motherboard and an AMD Opteron dual-core processor, purely for testing purposes. Both the Western Digital 2TB and Seagate 1TB drive arrays failed completely. I will be using Windows 2008 x64 for all my testing and will report back with the latest firmware etc. I am going to go through what Areca support suggested and test the cables etc. to try and narrow down the problems.
 
^^^^ Sorry, I may have missed it, but can you provide some context?
 
I hate to bring up old threads, especially when they're as long as this one, but Odditory's comments come as close to my current problem as I can find.

I've got an Areca 1680ix-12 card running v1.48 and an HP SAS Expander running 2.02, I believe. I can't use the external ports at all with this 2M Tekram cable that was supposed to work w/ the 1680 series card. If I connect the 8 Samsung 500GB HDs to the Areca directly, they work great. If I connect an internal SFF-8087 cable from the Areca to the expander, and then the HDs to the expander, it works at about half speed. But when I try to have the HDs on the Areca w/ the external SFF-8088 connected to the expander, or the HDs connected to the expander and the expander connected to the Areca via SFF-8088, I get no love. It just keeps failing to load into memory. I've played w/ SES2 support both enabled and disabled too.

Has anyone got this combination working w/ an external SFF-8088 cable? I tried logging into the 1680's console port but just get junk out.

I'd just like to get native 1680 speeds w/ the expander versus settling for 300MB/s max (rough math on that number below).
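Incidentally, that 300MB/s ceiling smells like a single lane. Back-of-the-envelope link budget (Python; assumes 3Gbps SAS-1 lanes with 8b/10b encoding):

Code:
# SAS-1 link budget: 3Gbps per lane, 8b/10b encoding => 80% payload
lane_mb_s = 3.0 * 0.8 / 8 * 1000     # 300 MB/s per lane
wide_port = 4 * lane_mb_s            # 1200 MB/s for a full x4 link
print(lane_mb_s, wide_port)
# A hard 300MB/s cap is exactly one lane's worth -- as if only 1 of
# the 4 links in the wide port were actually negotiating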

Because "ix" 1680 cards have an onboard expander, and it relies on "SES2" to be Enabled for proper operation, and because the HP expander requires SES2 to be Disabled for proper operation that effectively knocks out the internal ports on the Areca card. Until Areca gets an HP expander to test and figures out the SES2 issue so it can be re-enabled with an HP expander attached, you're stuck with either using only the HP expander ports or only the internal SFF-8087 cards on the Areca card.

If I were you I'd forget about the internal ports on the Areca card for now and just use the ports on the expander. I'd also recommend emailing Areca support, a few of us have already registered the complaint about proper operation with the HP expander, and if they hear about it from more people it might put more fire under them: support <<AT>> areca.com.tw
 
In case it wasn't mentioned all that time ago, the biggest problem I was having was related to the Seagate drives. Once I removed all the Seagate 1TB ES drives and recreated the array, it's pretty stable (well, apart from the damn Western Digital RE4 drives dropping way too often for my liking).

Speed is OK, but Windows 2008 R2 is giving me cache problems which I think are impacting performance somewhat. Every synthetic benchmark shows insane speeds: in RAID 5 with 20 x 2TB WD RE4 drives, CrystalDiskMark 2.2 shows over 1GB/s read/write sequential and 100MB/s in 4K random, HD Tune Pro shows around 500+ MB/s read, etc. For some reason writes are a fair bit faster than reads, and I'm not sure why. I'm still yet to figure out why the server puts all RAM into the cache state (could be a problem with the RAID array or SQL or something) after a few hours of use, but it isn't severely affecting performance.

Currently using the 1.48 firmware, as 1.49 gave a lot more drive errors, so be warned.
 