Which x64 version of Windows Server 2003 are you gonna use?
And since the search is now gone, I can't search the for-sale posts for a RAID controller. Does anyone have a 4-8 port PCI-e RAID controller for sale? PM me.
I'm going for whatever x64 variant I can find. Enterprise would be nice, but its features wouldn't really benefit me.
The user PC299 has an ARC-1231ML for sale.
There are more differences between Std and Ent than just clustering, such as limits on the max number of SMP instances and addressable memory. Truth be told, however, the more important factor is running x64 rather than 32-bit. If he ends up running 32-bit Standard he'd be maxed out at 4GB of RAM, and he *has* 32GB, and this motherboard supports up to 128GB. I imagine he could therefore get away with x64 R2 Standard, but I'd go for the gold: x64 R2 Enterprise.
http://www.microsoft.com/technet/windowsserver/evaluate/features/compare.mspx
Ya exactly, as cool as the Enterprise features are, they're not worth the extra $$$ for a home server...
I'm switching to Standard x64 as soon as I have the 1280ML; I didn't want tons of downtime on Sunday while reloading the server.
I've looked it up in my 2K3 course book, and Odditory is right: with 32GB of RAM you'll need the Enterprise version of 2K3. 2K3 Standard maxes out at 4GB and doesn't even do 64-bit; R2 does, but still at 4GB of RAM. Enterprise maxes out at 32GB in 32-bit and 64GB in 64-bit (Datacenter: 32-bit 64GB / 64-bit 128GB).
So look for Enterprise or Datacenter 64-bit to take full advantage of the hardware, aside from the other features it has.
Your book is wrong, or you're looking in the wrong place: Windows Server Standard x64 R2 supports 32GB of RAM.
They also made a 64-bit version of Server Standard before R2 came out, and Enterprise R2 supports 2TB of RAM in the 64-bit version.
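To keep the edition debate straight, here's a small sketch encoding the RAM ceilings quoted in this thread. The numbers are the ones the posters cite (they vary by service pack, so verify against Microsoft's comparison page linked earlier); the lookup itself is just illustrative.

```python
# RAM ceilings (in GB) as quoted in the thread for Windows Server 2003 R2.
# Illustrative only -- exact limits vary by service pack; check Microsoft's
# official comparison page for authoritative figures.
RAM_LIMITS_GB = {
    ("Standard", "x86"): 4,       # 32-bit Standard maxes out at 4 GB
    ("Standard", "x64"): 32,      # x64 R2 Standard handles 32 GB
    ("Enterprise", "x64"): 2048,  # 2 TB in the 64-bit version, per the post above
}

def edition_fits(edition, arch, installed_gb):
    """Return True if this edition/architecture can address all installed RAM."""
    return installed_gb <= RAM_LIMITS_GB[(edition, arch)]

print(edition_fits("Standard", "x64", 32))  # the 32 GB build fits in x64 R2 Standard
print(edition_fits("Standard", "x86", 32))  # but not in 32-bit Standard
```

So for a 32GB box, x64 R2 Standard is the cheapest edition that addresses everything.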
The pinout for the I2C connector on my 1170 controller is as follows:
1 +5V
2 GND
3 LCD Module Interrupt
4 Fault/Activity Interrupt
5 LCD Module Serial Data
6 Fault/Activity clock
7 Fault/Activity Serial Data
8 LCD Module Clock
The SAS backplane contains 6 I2C connectors that each have the following pinout:
1 Data
2 GND
3 Clock
4 No Connection
So am I correct in assuming the following mapping between the two?
ARC-1170 ----------------------- SAS Backplane
7 - Fault/Activity Serial Data -> 1 - Data
2 - GND ----------------------> 2 - GND
6 - Fault/Activity Clock ------> 3 - Clock
This leaves pin 4 "Fault/Activity Interrupt" from the 1170 controller not connected to anything, which I assume is the way to go.
Since I don't see a way to set any backplane jumpers to automatically daisychain the I2C signal to the other 5 connectors, I assume I just wire all 6 of them in parallel?
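For anyone checking my work, the proposed wiring can be written out programmatically. The pin names come from the two pinouts above; the mapping itself is my assumption, not a vendor-confirmed one.

```python
# Pin names from the two pinouts listed above.
ARC_1170_PINS = {
    1: "+5V", 2: "GND", 3: "LCD Module Interrupt",
    4: "Fault/Activity Interrupt", 5: "LCD Module Serial Data",
    6: "Fault/Activity Clock", 7: "Fault/Activity Serial Data",
    8: "LCD Module Clock",
}
BACKPLANE_PINS = {1: "Data", 2: "GND", 3: "Clock", 4: "No Connection"}

# The *assumed* controller-pin -> backplane-pin mapping being asked about.
PROPOSED_MAP = {7: 1, 2: 2, 6: 3}

for ctrl, bp in PROPOSED_MAP.items():
    print(f"{ctrl} {ARC_1170_PINS[ctrl]:<28} -> {bp} {BACKPLANE_PINS[bp]}")

# Controller pins left unconnected under this mapping
# (includes pin 4, Fault/Activity Interrupt, as noted above):
unused = sorted(set(ARC_1170_PINS) - set(PROPOSED_MAP))
print("Unconnected controller pins:", unused)
```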
p.s. It's a shame that registration has been disabled over on the hardforum, or I would have asked these questions there.
Ok, I was finally able to register. I appreciate gjvrieze copying my post from avsforum over here.
I should be getting my 846TQ-R900 tomorrow. The mobo I'll be using is an Asus P5WDG2-WS, which does not support SES-2. So in order for all the drive I/O and failure LEDs to function, I have to use I2C (or, worst case, resort to running a large ribbon cable for all 48 LEDs from my controller).
My controller is the Areca ARC-1170 24-port SATA-II RAID6 controller (very similar to the 1280ML, only PCI-X and 24 individual SATA-II connectors). It was the only 24 port game in town when I picked it up back in 2006.
So looking at the 846TQ backplane .pdf, it would appear that there are no jumpers to daisychain all 6 connectors together, so I wanted confirmation that I just wire all 6 in parallel in order to control all 48 drive LEDs via I2C.
I was also wondering about the 846TQ-R900's 6 I2C connectors, as they look very non-standard.
Thanks!
That should work out pretty well. Looks good.
I've noticed that the space server chassis makers save (4U vs. 5U) is usually pretty valuable space to lose. Your 5.25" slot, for instance: had this been a 5U, you'd have it.
FYI - I'm pleased that my Areca 1280ML hasn't let me down in nearly a month of stress-testing, including several array expansions, 5 or 6 deliberate drive failures, etc. I've been running a 20-drive RAID5 array and just added 4 more drives last night; all 24 bays are now populated with 1TB Hitachis. My initial plan was to start with 16 drives and add a drive whenever I ran out of space, so that I could take advantage of cheaper drive prices later on, but seeing drive bays sit empty has a way of nagging at you to populate them!
Before I expand the array to all 24 drives, I'm at a crossroads: this is my last chance to change raid levels (ie from Raid5 to Raid6) since, by nature of this Areca controller, one can only change raid levels during an array expansion. Once the array is expanded to 24 drives, that's it - no going back and changing raid levels ever again (short of backing up all that data to somewhere else and recreating the array). The issue of backup becomes a real bitch at this level of capacity, short of duplicating expense with even more drives.
So: Do I get greedy and run 23Tb of capacity @ Raid5? Or do I "keep it christian" (giggle) and run a more sane and rational 22Tb @ Raid6? Is that extra 1Tb of space with Raid5 worth having to chew my nails for 8-9 hours during a rebuild hoping another drive doesn't fail within that window?
Decisions, decisions.... Third option is to procrastinate and expand to 23 drives for now, then sit on the fence about expanding to the 24th drive. If Areca supported array spanning across multiple cards, this wouldn't be an issue.
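For what it's worth, the tradeoff above can be put into rough numbers. The capacity math is exact; the rebuild-risk figure is a back-of-envelope estimate using a made-up 3% annual failure rate per drive and assuming independent failures, so treat it as illustrative only (real-world risk is higher, since rebuild stress and correlated drive batches aren't modeled).

```python
import math

DRIVES = 24
DRIVE_TB = 1.0

def usable_tb(drives, parity):
    """Usable capacity with n parity drives' worth of overhead (RAID5: 1, RAID6: 2)."""
    return (drives - parity) * DRIVE_TB

print(usable_tb(DRIVES, 1))  # RAID5: 23.0 TB
print(usable_tb(DRIVES, 2))  # RAID6: 22.0 TB

# Rough odds of a second drive dying during a ~9-hour RAID5 rebuild,
# assuming an illustrative 3% annual failure rate and independent failures.
AFR = 0.03
rebuild_hours = 9
hourly_rate = AFR / (365 * 24)
surviving = DRIVES - 1
p_second_failure = 1 - math.exp(-surviving * hourly_rate * rebuild_hours)
print(f"P(second failure during rebuild) ~ {p_second_failure:.4%}")
```

Under these (optimistic) assumptions the window is small, but a second failure means losing everything, which is the whole argument for giving up that 1TB to RAID6.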
Good to hear. Look at HP's cabinets, they're kickass!
I buy my Supermicro stuff wherever I can find it:
newegg.com
ingrammicro.com
wiredzone.com
I don't have a rack at the moment. I have one at work, if that counts! lol. Neptune was mounted in a two-post rack, basically using a cantilever shelf... I would never dare put this thing in a two-post.
<snip>BTW, odditory, any chance you will post benchmarks of the raid array?? I have yet to see real world benchmarks of the 1280ML, I know Areca says it will do 700-900MBps in raid 5/6......
Sure, I'll post some benchies soon.. just haven't posted too much of my own build's info since I didn't want to tread too heavily with this being someone else's thread.
Areca is not lying when they say 700-900MB/s... I'm typically over the 1000MB/s mark, regardless of cache size (I was getting basically the same throughput with the stock 256MB memory as I do now with the 2GB module). Where this throughput really shines is when copying gigabytes (or even terabytes) worth of data from the array back onto itself (ie copy a file from D:\FILE to D:\Copy of file) - supa-fast.
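To put that throughput in perspective, here's a trivial calculation of what it means for copy times, using the ~1000MB/s figure above (sequential-read rate; a copy-onto-itself would split that between reads and writes, so real times would be longer).

```python
def copy_time_seconds(data_gb, throughput_mb_s):
    """Time to stream data_gb gigabytes at a sustained throughput_mb_s MB/s."""
    return data_gb * 1024 / throughput_mb_s

# Streaming ~1 TB at the ~1000 MB/s observed above, in minutes:
print(copy_time_seconds(1024, 1000) / 60)
```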
Your post is confusing - what exactly are you asking? You seem to be asking questions in the form of statements. You should still get fault and activity LEDs simply by having each SATA cable plugged in between the Areca and the backplane.
Sorry about the confusing questions, odditory. I didn't realize the backplane had the smarts to handle the LEDs just from the SATA cables. That is great, and I was able to confirm it, as my unit showed up today and is up and running!
I didn't realize the backplane had the smarts to handle the LEDs just from the SATA cables.
I think I spoke too soon. When I go into the Areca web GUI and do an "Identify Selected Drive" under "Physical Drive", no LED lights up on the front. So it looks like I need to get I2C connected after all. Since no cables with the proper connector for the I2C ports were supplied with the chassis, that will prove to be quite a challenge.
Got a question for anyone on Supermicro servers.
Do they all (at least the high-end models) have LEDs on the HDD bays? (As seen in this picture: http://www.cstone.net/~dk/846final.jpg)
Yes, they all do.
Good to hear. Thanks, Okie.
You could try pulling one of the cables off the Areca to see if it's the controller.
Other than that, I dunno...
The Areca is a SATA controller, and these are SAS drives. You can use SATA drives on a SAS controller if you have the right cables, but you can't use a SAS drive on a SATA controller.
I guess since no one else has any input, I'm going to order the cable with ground shipping; I don't want to spend $85 on Saturday delivery if it's not the cable. Yay for another week without the server.