Project: Galaxy 5.0

Which x64 version of Windows Server 2003 are you gonna use?

And since the search is now gone I can't search for any RAID controller for sale, so does anyone have a 4-8 port PCI-e RAID controller for sale? PM me.
 
Which x64 version of Windows Server 2003 are you gonna use?

And since the search is now gone I can't search for any RAID controller for sale, so does anyone have a 4-8 port PCI-e RAID controller for sale? PM me.

The user PC299 has an ARC-1231ML for sale.
 
I'm going for whatever x64 variant that I can find :) Enterprise would be nice, but the features it has wouldn't really benefit me.

Ya exactly, as cool as the Enterprise features are, it's not worth the extra $$$ for a home server...

I am switching to Standard x64 as soon as I have the 1280ML; I didn't want to have tons of downtime on Sunday while reloading the server :)
 
The user PC299 has an ARC-1231ML for sale.

I was wondering where that PM came from out of the blue... thanks Ockie. I already regged over at 2CPU and sold it within 15 minutes though.
 
There are more diffs between Std and Ent than just clustering, such as limitations in max SMP instances and addressable memory. Truth be told, however, the more important factor is running x64 rather than 32-bit. If he ends up running 32-bit Standard he'd be maxed at 4GB of RAM, and he *has* 32GB, and this motherboard supports up to 128GB. I imagine he could therefore get away with x64 R2 Standard, but I'd go for the gold: x64 R2 Enterprise.

http://www.microsoft.com/technet/windowsserver/evaluate/features/compare.mspx

This is true. Hell, Std covers most of what Enterprise does, though. The only big thing Enterprise supports that Std doesn't, and that I'd think he would use, is the virtualized OS instances, which is a pretty cool thing.

Ya exactly, as cool as the Enterprise features are, it's not worth the extra $$$ for a home server...

I am switching to Standard x64 as soon as I have the 1280ML; I didn't want to have tons of downtime on Sunday while reloading the server :)

Yea, I'm running Standard x64 R2 on my home server (full OEM copy). Haven't brought the machine into production yet. Running SBS 2K3 Premium right now on another box, as I needed to test the new BlackBerry software on it.

In the OP's case, x64 Standard is the way to go unless he happens to find someone with a promotional copy of Enterprise that they're willing to give him for next to nothing.
 
I'm going for whatever x64 variant that I can find :) Enterprise would be nice, but the features it has wouldn't really benefit me.

I've looked it up in my 2K3 course book, and Odditory is right: with 32GB of RAM you will need the Enterprise version of 2K3. 2K3 Standard maxes out at 4GB and does not even do 64-bit; R2 does, but still at 4GB of RAM. Enterprise maxes out at 32GB in 32-bit and 64GB in 64-bit (Datacenter: 32-bit 64GB / 64-bit 128GB).

So search for Enterprise or Datacenter 64-bit to take full advantage of the hardware, aside from the other features it has.
 
I've looked it up in my 2K3 course book, and Odditory is right: with 32GB of RAM you will need the Enterprise version of 2K3. 2K3 Standard maxes out at 4GB and does not even do 64-bit; R2 does, but still at 4GB of RAM. Enterprise maxes out at 32GB in 32-bit and 64GB in 64-bit (Datacenter: 32-bit 64GB / 64-bit 128GB).

So search for Enterprise or Datacenter 64-bit to take full advantage of the hardware, aside from the other features it has.

Not true at all. Server 2003 x64 Standard will fit his needs just fine... how old is your "course book"? Perhaps it predates x64 (and the 64-bit it's talking about is Itanium).
 
I've looked it up in my 2K3 course book, and Odditory is right: with 32GB of RAM you will need the Enterprise version of 2K3. 2K3 Standard maxes out at 4GB and does not even do 64-bit; R2 does, but still at 4GB of RAM. Enterprise maxes out at 32GB in 32-bit and 64GB in 64-bit (Datacenter: 32-bit 64GB / 64-bit 128GB).

So search for Enterprise or Datacenter 64-bit to take full advantage of the hardware, aside from the other features it has.

Your book is wrong or you are looking in the wrong place. Windows Server Standard 64-bit R2 supports 32 gigs of RAM.

Also, they made a 64-bit version of Windows Server Standard before R2 came out. And Enterprise R2 supports 2TB of RAM in the 64-bit version.
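
To put that in concrete terms for a 32GB box, here's a rough Python sketch -- just an illustration, with the limits (in GB) taken from the figures quoted in this thread rather than from an official Microsoft table:

# Rough sketch: can a given Server 2003 edition address the installed RAM?
# The per-edition limits below (in GB) are the figures quoted in this thread;
# treat them as approximate, not an official compatibility table.
LIMITS_GB = {
    "Standard 32-bit": 4,
    "Standard x64 R2": 32,
    "Enterprise x64 R2": 2048,  # 2TB
}

def can_address(edition: str, installed_gb: int) -> bool:
    """True if the edition can make use of all the installed RAM."""
    return installed_gb <= LIMITS_GB[edition]

installed = 32  # what the OP has in the box
for edition, limit in LIMITS_GB.items():
    verdict = "OK" if can_address(edition, installed) else "maxed out"
    print(f"{edition} (limit {limit}GB): {verdict}")

Which is the whole argument in a nutshell: 32-bit Standard can't see the 32GB, but x64 Standard can.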
 
Your book is wrong or you are looking in the wrong place. Windows Server Standard 64-bit R2 supports 32 gigs of RAM.

Also, they made a 64-bit version of Windows Server Standard before R2 came out. And Enterprise R2 supports 2TB of RAM in the 64-bit version.

swatbat is correct :)
 
pclausen from AVSForum wants to ask a question, but [H] is not allowing new signups....

The pinout for the I2C connector on my 1170 controller is as follows:

1 +5V
2 GND
3 LCD Module Interrupt
4 Fault/Activity Interrupt
5 LCD Module Serial Data
6 Fault/Activity clock
7 Fault/Activity Serial Data
8 LCD Module Clock

The SAS backplane contains 6 I2C connectors that each have the following pinout:

1 Data
2 GND
3 Clock
4 No Connection

So am I correct in assuming the following mapping between the two?

ARC-1170 --------------------- SAS Backplane
7 - Fault/Activity Serial Data -> 1 - Data
2 - GND ---------------------> 2 - GND
6 - Fault/Activity Clock ------> 3 - Clock

This leaves pin 4 "Fault/Activity Interrupt" from the 1170 controller not connected to anything, which I assume is the way to go.

Since I don't see a way to set any backplane jumpers to automatically daisychain the I2C signal to the other 5 connectors, I assume I just wire all 6 of them in parallel?

p.s. It's a shame that registration has been disabled over on the hardforum, or I would have asked these questions there.
 
Your book is wrong or you are looking in the wrong place. Windows Server Standard 64-bit R2 supports 32 gigs of RAM.

Also, they made a 64-bit version of Windows Server Standard before R2 came out. And Enterprise R2 supports 2TB of RAM in the 64-bit version.

Sorry, I was looking in the wrong place on the internet for the R2 version (the book does not cover the R2 version).
 
Ok, I was finally able to register. I appreciate gjvrieze copying my post from avsforum over here.

I should be getting my 846TQ-R900 tomorrow. The mobo I'll be using is an Asus P5WDG2-WS, which does not support SES-2. So in order for all the drive I/O and failure LEDs to function, I have to use I2C (or, worst case, resort to running a large ribbon cable for all 48 LEDs from my controller).

My controller is the Areca ARC-1170 24-port SATA-II RAID6 controller (very similar to the 1280ML, only PCI-X and 24 individual SATA-II connectors). It was the only 24 port game in town when I picked it up back in 2006.

So looking at the 846TQ backplane .pdf, it would appear that there are no jumpers to daisychain all 6 connectors together, so I wanted confirmation that I just wire all 6 in parallel in order to control all 48 drive LEDs via I2C.

I was also wondering if the 846TQ-R900 includes 6 I2C connectors as they look very non-standard.

Thanks!
 
Ok, I was finally able to register. I appreciate gjvrieze copying my post from avsforum over here.

I should be getting my 846TQ-R900 tomorrow. The mobo I'll be using is an Asus P5WDG2-WS, which does not support SES-2. So in order for all the drive I/O and failure LEDs to function, I have to use I2C (or, worst case, resort to running a large ribbon cable for all 48 LEDs from my controller).

My controller is the Areca ARC-1170 24-port SATA-II RAID6 controller (very similar to the 1280ML, only PCI-X and 24 individual SATA-II connectors). It was the only 24 port game in town when I picked it up back in 2006.

So looking at the 846TQ backplane .pdf, it would appear that there are no jumpers to daisychain all 6 connectors together, so I wanted confirmation that I just wire all 6 in parallel in order to control all 48 drive LEDs via I2C.

I was also wondering if the 846TQ-R900 includes 6 I2C connectors as they look very non-standard.

Thanks!

Your post is confusing - what exactly are you asking? You seem to be asking questions in the form of statements. You should still get fault and activity LEDs simply by having each SATA cable plugged in between the Areca and the backplane. When I first set up my system and only had 16 drives occupying the first 16 bays, I plugged some other hard drives into the top bays and connected them via SATA connector to the motherboard's SATA ports, and I was still getting LED activity on the bays for those drives without any sideband connector. You may want to just wait until you get all your parts in and actually test it before you worry too much about what might not work.

Your last statement: are you asking if the case comes with I2C cables? You said 'connectors'. If you mean cables, no, it does not.
 
Updates:

So the toys came in (everything other than the minsas to sff sas cable). I expect this cable to be in tomorrow.

dsc06436ks3.jpg


IPMI Cable has commeth:
dsc06437sj1.jpg


IPMI is now plenty long
dsc06442fh8.jpg


The CSE-M14T hot swap came; this is one awesome unit.
dsc06438pv5.jpg


dsc06439pt2.jpg


Cute :)
dsc06440ez0.jpg


One helluva potent fan and an SFF SAS connection for a clean finish
dsc06441yc3.jpg


Optional position 1: great position, but brackets need to be made and airflow is limited in this area
dsc06443xb5.jpg


Optional position 2: my favorite, but brackets need to be made, and each time the cards need to be accessed it could be a problem... not to mention tension against the cards.
dsc06444qs8.jpg


Optional position 3: this is the best position. No brackets, tons of airflow in this area, and I only lose two slots (not PCI-e slots, phew)... IPMI takes one up anyway where I placed it.
dsc06445yq6.jpg


dsc06446hv2.jpg


dsc06447nz2.jpg


Clean finish with minimal chassis modifications. Should work fine for rackmounting and can easily be removed. This was not the IDEAL spot I dreamed about, but it's better than nothing and still allows me to mount my 3.5" internal for backups. I really wish this case accommodated a 5.25" bay... that would be nice.
dsc06448ch3.jpg


dsc06449jp5.jpg


dsc06450pd5.jpg
 
That should work out pretty well. Looks good.

I've noticed that the space server chassis makers save (4U vs 5U) is usually space you'd find pretty valuable. Your 5.25" slot, for instance. Had this been a 5U, you'd have it.
 
Damn nice setup, but if you don't mind me asking, where do you get your supermicro supplies from?
 
Looking very nice there Ockie, you must be getting really excited to finally have it done!!

Do you have a rack for it to go in, right out of the gate?? Maybe the same as Neptune, if you still even have Neptune...???
 
I buy my supermicro stuff wherever I can find it :)

newegg.com
ingrammicro.com
wiredzone.com


I don't have a rack as of this moment. I have one at work if that counts! lol. Neptune was mounted in a two-post rack, basically using a cantilever shelf... I would never dare put this thing in a two-post.
 
That should work out pretty well. Looks good.

I've noticed that the space server chassis makers save (4U vs 5U) is usually space you'd find pretty valuable. Your 5.25" slot, for instance. Had this been a 5U, you'd have it.

To hell with that! 5U = blech. It's all about the fun in achieving "max density" in this 4U ... I dig it.

I'm sure when SuperMicro designed this case, with >24< hotswap bays, they didn't anticipate someone needing an ADDITIONAL bay for even more hard drives.

Ockie: Your placement of the SAS bay is exactly where I imagined I would put it when you first started talking about it. Not just for airflow from that third fan, but because that would be the least used part of the case given the PCI-X. Good choice.
 
FYI - I'm pleased that my Areca 1280ML hasn't let me down in nearly a month of stress-testing, including several array expansions, 5 or 6 purposeful drive failures, etc. I've been running a 20-drive RAID5 array, and just added 4 more drives last night - all 24 bays are now populated with 1TB Hitachis. My initial plan was to start with 16 drives and add one drive at a time whenever I ran out of space, so that I could take advantage of cheaper drive costs later on, but seeing drive bays sitting empty has a way of nagging at you to populate them!

Before I expand the array to all 24 drives, I'm at a crossroads: this is my last chance to change RAID levels (i.e. from RAID5 to RAID6) since, by nature of this Areca controller, one can only change RAID levels during an array expansion. Once the array is expanded to 24 drives, that's it - no going back and changing RAID levels ever again (short of backing up all that data to somewhere else and recreating the array). The issue of backup becomes a real bitch at this level of capacity, short of duplicating expense with even more drives.

So: Do I get greedy and run 23TB of capacity @ RAID5? Or do I "keep it christian" (giggle) and run a more sane and rational 22TB @ RAID6? Is that extra 1TB of space with RAID5 worth having to chew my nails for 8-9 hours during a rebuild, hoping another drive doesn't fail within that window?

Decisions, decisions.... :) Third option is to procrastinate and expand to 23 drives for now, then sit on the fence about expanding to the 24th drive. If Areca supported array spanning across multiple cards, this wouldn't be an issue.
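
To spell out the capacity math behind those two numbers (a rough sketch, ignoring formatting overhead): with n equal-size drives, RAID5 keeps n-1 drives' worth of space and RAID6 keeps n-2.

# Usable capacity of the 24-bay array under each RAID level -- a rough
# sketch that ignores filesystem/formatting overhead.
DRIVES = 24
DRIVE_TB = 1  # 1TB Hitachis

def usable_tb(level, drives=DRIVES, size_tb=DRIVE_TB):
    """RAID5 loses one drive to parity, RAID6 loses two."""
    parity = {"raid5": 1, "raid6": 2}[level]
    return (drives - parity) * size_tb

print("RAID5:", usable_tb("raid5"), "TB")  # 23 TB
print("RAID6:", usable_tb("raid6"), "TB")  # 22 TB

So the whole decision comes down to one drive's worth of space versus a second drive of fault tolerance.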
 
So: Do I get greedy and run 23TB @ RAID5 and chew my nails for 8-9 hours if a drive ever fails, realizing that a second drive failing within that window means I lose everything? Or do I "keep it christian" (giggle) and run a more rational, more conservative 22TB @ RAID6?


What's 1TB in a 20+TB setup? Play it safe.
 
I would definitely play it safe. If you really need another terabyte, spend 240 more bucks :p
 
Play it safe and go with RAID6. Especially if some of your drives are from the same batch. If there is a bad batch that got through and 2 fail at once then you have problems.
 
FYI - I'm pleased that my Areca 1280ML hasn't let me down in nearly a month of stress-testing, including several array expansions, 5 or 6 purposeful drive failures, etc. I've been running a 20-drive RAID5 array, and just added 4 more drives last night - all 24 bays are now populated with 1TB Hitachis. My initial plan was to start with 16 drives and add one drive at a time whenever I ran out of space, so that I could take advantage of cheaper drive costs later on, but seeing drive bays sitting empty has a way of nagging at you to populate them!

Before I expand the array to all 24 drives, I'm at a crossroads: this is my last chance to change RAID levels (i.e. from RAID5 to RAID6) since, by nature of this Areca controller, one can only change RAID levels during an array expansion. Once the array is expanded to 24 drives, that's it - no going back and changing RAID levels ever again (short of backing up all that data to somewhere else and recreating the array). The issue of backup becomes a real bitch at this level of capacity, short of duplicating expense with even more drives.

So: Do I get greedy and run 23TB of capacity @ RAID5? Or do I "keep it christian" (giggle) and run a more sane and rational 22TB @ RAID6? Is that extra 1TB of space with RAID5 worth having to chew my nails for 8-9 hours during a rebuild, hoping another drive doesn't fail within that window?

Decisions, decisions.... :) Third option is to procrastinate and expand to 23 drives for now, then sit on the fence about expanding to the 24th drive. If Areca supported array spanning across multiple cards, this wouldn't be an issue.

I would go with RAID6, and I am going to RAID6 when I get my Areca 1280ML with 20 750GB Seagate drives. It is worth the little loss. Good God, with these arrays at 15-24TB, if you lost one, the little overhead of the extra drive would look silly in hindsight!!!!

BTW, odditory, any chance you will post benchmarks of the RAID array?? I have yet to see real-world benchmarks of the 1280ML; I know Areca says it will do 700-900MB/s in RAID 5/6......
 
I buy my supermicro stuff wherever I can find it :)

newegg.com
ingrammicro.com
wiredzone.com


I don't have a rack as of this moment. I have one at work if that counts! lol. Neptune was mounted in a two-post rack, basically using a cantilever shelf... I would never dare put this thing in a two-post.
Good to hear. Look at HP's cabinets. They're kickass!
 
<snip>BTW, odditory, any chance you will post benchmarks of the RAID array?? I have yet to see real-world benchmarks of the 1280ML; I know Areca says it will do 700-900MB/s in RAID 5/6......

Sure, I'll post some benchies soon.. just haven't posted too much of my own build's info since I didn't want to tread too heavily with this being someone else's thread.

Areca is not lying when they say 700-900MB/s... I'm typically over the 1000MB/s mark, regardless of cache size (I was getting basically the same throughput with the stock 256MB memory as I do now with the 2GB module). Where this throughput really shines is when copying gigabytes (or even terabytes) worth of data from the array back onto itself (i.e. copy a file from D:\FILE to D:\Copy of file) - supa-fast.

Diskeeper Enterprise 2008 is also doing a great job at keeping this large single volume defragmented. Not that I would likely ever notice fragmentation with this controller, but nice to be able to type "defrag D: -a" and see 0% fragmentation. A bit perfectionistic, I know.
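
In the meantime, if anyone wants to reproduce rough numbers themselves, a quick-and-dirty sequential write test is easy to script. This is just a sketch, not a proper benchmark tool: the D:\ test path and sizes are placeholders, and controller/OS caching will skew the result unless the test file is much larger than the cache.

import os
import time

TEST_FILE = r"D:\throughput_test.bin"  # placeholder path on the array volume
BLOCK = 64 * 1024 * 1024               # write in 64MB chunks
TOTAL = 8 * 1024**3                    # 8GB total, well past a 2GB controller cache

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    f.flush()
    os.fsync(f.fileno())  # make sure the data isn't just sitting in the OS write cache
elapsed = time.time() - start

print(f"sequential write: {TOTAL / elapsed / 1024**2:.0f} MB/s")
os.remove(TEST_FILE)

Tools like HD Tach or IOMeter will give a more complete picture, but this is enough for a sanity check.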
 
Sure, I'll post some benchies soon.. just haven't posted too much of my own build's info since I didn't want to tread too heavily with this being someone else's thread.

Areca is not lying when they say 700-900MB/s... I'm typically over the 1000MB/s mark, regardless of cache size (I was getting basically the same throughput with the stock 256MB memory as I do now with the 2GB module). Where this throughput really shines is when copying gigabytes (or even terabytes) worth of data from the array back onto itself (i.e. copy a file from D:\FILE to D:\Copy of file) - supa-fast.

Oh my God, that is heavenly. I cannot wait to get my controller. I need 10GbE, as Ockie keeps saying, and even then the overhead of the networking might slow it down....

I will have to see if I can get my hands on a legit Diskeeper Enterprise license!!!

No one ever said which 2GB module you and Ockie found to work...???
 
Your post is confusing - what exactly are you asking? You seem to be asking questions in the form of statements. You should still get fault and activity LEDs simply by having each SATA cable plugged in between the Areca and the backplane.
Sorry about the confusing questions, odditory. I didn't realize the backplane had the smarts to handle the LEDs just from the SATA cables. That is great, and I was able to confirm it, as my unit showed up today and is up and running!

Not to take away from Ockie's great thread, but here are a couple of pics of mine with an Asus mobo. Down the road I might do it right and get the SuperMicro mobo like the one you guys have so I can use that shroud.

846mobo2.jpg


846mobo3.jpg


846final.jpg


I ordered up the internal HD bracket since I didn't realize at first that I needed it for the boot drive. I also need to order up variable length SATA cables to make it look a lot cleaner before installing the remaining 16 cables and begin busting out those 1TB drives to slowly displace my current 8 x 320GB RAID6 array. I sure do love that "expand on the fly" feature of the Areca controllers!

So is the verdict that those WD "Green Power" drives won't work in a RAID but only in a JBOD due to the 5400-7200 RPM thing?
 
I didn't realize the backplane had the smarts to handle the LEDs just from the SATA cables.
I think I spoke too soon. When I go into the Areca web GUI and do an "Identify Selected Drive" under "Physical Drive", no LED is lit on the front. So it looks like I need to get I2C connected after all. Given that no cables with the proper connector for the I2C ports were supplied with the chassis, that will prove to be quite a challenge.

Odditory, are you able to get the red LEDs to light up when you follow the above procedure? If not, I don't think we'll ever see the red LED illuminate on the shuttle of a failed drive.

Here's some additional info on what is missed by not connecting I2C between the controller and backplane:

--- cut ---
If you do not connect the I2C interface, then the LEDs will still work, but then they are "activity" based. Yes, the backplane can detect a failed drive and the LED will flash for that, making it easy to identify, HOWEVER, the I2C interface is important, because that's a direct signal from the HBA. If the adapter detects "bit rot" or a failed cluster, it can signal the backplane, but only using I2C. The backplane just by itself will not be able to spot such errors. Kind of like an early warning.
--- cut ---
 
Sad panda here. My Adaptec 3405 isn't showing any of the SAS drives. I reseated everything and replaced the power extension cable. The fan is spinning, and I think the disks are spinning and getting power... but no detection by the controller. I'm thinking it may be the cable that is FUBAR; if that is the case, I am going to be a sad panda. The cable is actually a mini-SAS to SFF SAS cable, and I am not sure of the brand... not an Adaptec... now I regret it.


Anyone got any ideas? I've never had this issue before, but then again, I've never worked with this controller before. Everything else seems to check out fine.
 
Anyone with suggestions? I'm going to guess it's the cable. I need to confirm ASAP so I can perhaps look at a Saturday delivery or something.
 
You could try pulling one of the cables off the Areca to see if it's the controller.

Other than that, I dunno...
 
You could try pulling one of the cables off the Areca to see if it's the controller.

Other than that, I dunno...

The Areca is a SATA controller; these are SAS drives. You can use SATA drives on a SAS controller if you have the right cables, but you can't use a SAS drive on a SATA controller.

I guess since no one else has any input, I'm going to order the cable with ground shipping; I don't want to spend $85 on Saturday delivery if it's not the cable. Yay for another week without the server :(
 
The Areca is a SATA controller; these are SAS drives. You can use SATA drives on a SAS controller if you have the right cables, but you can't use a SAS drive on a SATA controller.

I guess since no one else has any input, I'm going to order the cable with ground shipping; I don't want to spend $85 on Saturday delivery if it's not the cable. Yay for another week without the server :(

The Areca doesn't have the same mini-SAS connector the Adaptec has? That's what it looks like from the pictures, at least...
 