Project: Plutonium 3

Sweet, I see you have a Q6600, how is it? I'm gonna get a C2D and 4GB RAM and run VMs, then upgrade to a Q6600 and 8GB RAM later.
 
RAM is the most important thing for VMs once you get into the Core 2 series architecture... VM performance scales with core count past 2, i.e., you can run roughly twice as many VMs on 4 cores as on 2....

I found that on a quad with only 4GB of RAM, I hit the RAM limit at 4-8 VMs; I have yet to stress the new machine. :)
 
I set up my 2 Broadcom NICs on the Asus P5M2 SAS mobo in teaming mode ("link aggregation"). Kind of weird, as the NIC that the team creates only shows 1Gbps. Can someone tell me if this is correct, and will I get the combined 2x 1Gbps speed that I am hoping for?? It actually only shows 1 NIC as being active at any given time; if I pull one cable, it will auto fail over to the other and that one becomes active, with no data loss (which is pretty sweet).

I could also use a good LAN stress/speed test; if someone has good apps, please tell me, I am all ears!!
 
First of all, I'm assuming that you're hooking your team up to a managed switch with link aggregation set up. Without a managed switch you'll never see the benefits of teamed NICs. I've personally had a two-port link aggregate transfer over 300MBps, which is well over the spec for two gigabit links (250MBps), weird.
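(For reference, the math behind that spec number: 1Gbps works out to 125MBps, so two gigabit links top out at 2 x 125MBps = 250MBps in theory, which is why 300MBps reads as odd.)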

Here's a pretty common tool for measuring bandwidth. I haven't gotten a chance to play with it myself, but I saw our CCIE consultant testing the new Aironets with it. I'm not sure if this will show any performance increase over regular gigabit, as link aggregation only helps when there are multiple connections.

http://sourceforge.net/projects/iperf
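If it helps, here's roughly how it gets run (just a sketch, assuming the classic iperf 2.x command line; the IP is a placeholder for whatever your server box is):

    iperf -s                           # on one machine, the "server"
    iperf -c 192.168.1.10 -t 30        # on the other machine, a 30 second test against it
    iperf -c 192.168.1.10 -t 30 -P 4   # same test with 4 parallel streams, since aggregation only helps across multiple flows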
 
Your switch needs to support the function.
 
I have an 8-port Netgear, so that explains that. I am planning on grabbing a managed Dell switch when Dell has them on sale again :)

But at least it is configured and still moving data, with auto failover, so I will not have to reconfigure the server when I swap out the switch!!

The Dell switch I am after supports IEEE 802.3ad, so I am good to go :)
 
In addition to making sure that it's managed, you also need to make sure that it supports LACP, 802.3ad, teaming, bonding, or whatever they happen to call link aggregation.
 
I checked the Dell specs after typing the prior message, I edited it with this:

"The Dell switch I am after supports IEEE802.3ad, so I am good to go"
 
My 1280ML just showed up. I am en route to the server room with camera in hand; should have more to post in the afternoon :)

Good news/bad news,

Good news: The card with the 2GB of RAM works great :)

Bad news: The card does not fit in Plutonium 3 very well; the connectors on the right side run into one of the backplanes, and I cannot get a cable to one port on that backplane... The good news is that I still have all of the 24 ports needed for the Areca, and 5 ports left for the onboard SAS controller.....

I think in the future this case is going to become the backup, and I am going to move to those rack-mount Norco ML 12-drive cases......
 
So the 90-degree cables won't let it fit? Because you can always buy those cables separately.
 
On the side with the 4 ports, I used 2 normal and 2 90-degree cables; one hits the Athena backplane really hard and blocks one SATA port. With the right 90-degree SATA cable, I may be able to make it work, but it is not needed right now...

The good thing I learned is that the SAS cables from my onboard Asus mobo look to be the same as the Areca cables; I need to try one to find out. It would be great to be 100% ML cables!!
 
It's called MiniSAS :) Multilane has a different type of connector, but you can get MiniSAS-to-multilane cables. To be 100% multilane, your backplanes would also need the high-density ports.

The header on your board is a MiniSAS header, so it will work with the Areca cables.
 
It uses the same 8087 connector on the mobo that the Areca uses... Yes, I guess having "high-density ports" would be great, but I meant no more separate SATA cable for each drive hookup on the backplanes....

The problem with the MiniSAS cables that came with the mobo is that they have the power connector with the data, and the backplanes need just the data......
 
Here are some pictures. Remember, this is temporary: the 2 Supermicro 8-port cards are coming out (I am selling them; if you are interested, drop me a PM) and I am only using the Areca and the onboard SAS!! Gotta run the Supermicro cards till I get the data off my software array....

Picture001-5.jpg

Picture003-5.jpg

Picture004-4.jpg
 
Be glad you didn't have Supermicro hot swaps, they are much deeper :)
 
Almost done!!!

Things done:

New Asus P5M2 SAS motherboard with a quad core and 8GB of ECC RAM.

10 new 750GB drives, and a new RAID card: an Areca 1280ML.

Going to RAID 6; still not finished, as I have to use OCE (online capacity expansion) to add the 10 old 750GBs into the new array. I am expecting to spend 1 week doing this....

Dual 1Gbps NICs in a team with auto failover, not "bonding" the bandwidth yet, as my switch is a POS..... Soon to be replaced with a Dell managed switch!

Switched from Windows Server 2k3 Standard to Windows Server 2k3 Enterprise R2 x64!

Still to do: add the Areca Battery Backup Module!!


Without further ado, here are pictures of the almost-finished machine!!

Here is a benchmark of 10 Seagate 750GB 7200.10s on the 1280ML while initializing (the XOR calc was set to 80% priority, and the speeds are still pretty good):
arecabuildingraid6with10drives80.jpg


Here is a benchmark with RAID 6 on 10 drives, finished, running in normal mode:
arecaraid6with10drives.jpg


And here is the most current, with 10 drives in RAID 6 upgrading via OCE to 11 drives (the XOR calc was set to 80% priority, and the speeds are still pretty good):
arecamirgratingraid6with10drivesto1.jpg


And here are some almost-finished pictures:
Picture001-6.jpg

Picture002-6.jpg

Picture003-6.jpg

Picture005-3.jpg

Picture006-3.jpg

Picture007-3.jpg

Picture009-3.jpg
 
Sweet deal: I got my BBM for the 1280ML on eBay for $86 shipped, and it came this afternoon! Had I not checked the tracking, it would still be sitting in 2" of snow on the front steps!! It came very quickly; I am glad I bought it there instead of Newegg, saved $75 too!!!

I can shut down (for the last time for a while, I hope!!) and install it, and then it is done, less 4 more drives, which can easily be OCE'd in!!
 
If you want a 3Com switch for much cheaper (full gig with all the features you desire, and blazing fast), let me know. I have four of them; I am phasing them out in favor of another vendor (they are hard to get, since I need quite a few more than four).
 
Here are 11 7200.10s/11s in RAID 6 on the 1280ML after the rebuild finished... This is the 8MB test with the Storport driver.....

raid6healthywith11drivesstorport8MB.jpg


And here is the same configuration but with HDTach 32MB and the Storport driver. I honestly think the SCSIPort driver was faster, but I have another OCE going, so it will be over 24hrs before I can run any tests in normal mode....
raid6healthywith11drivesstorport32M.jpg
 
Well, in the spirit of wanting a good review and not being able to find one: I said I was going to post a ton of info on the Areca 1280ML, so here is a screenshot of the web config while expanding the array from 11 drives to 13....
I think once this OCE finishes, I am going to do 1 drive at a time, just as another test; I think it is going to come out faster doing one drive at a time...
It took 24hrs to go from 10 > 11 drives....
It looks to take 60-62hrs to go from 11 > 13 drives...

webconfig.jpg
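Rough per-drive math on that hunch, using the numbers above:

    10 > 11 (1 drive):   24hrs total    = 24hrs per drive
    11 > 13 (2 drives):  60-62hrs total = ~30hrs per drive

So two back-to-back single-drive OCEs would be ~48hrs versus 60-62hrs in one pass, if the 24hr figure holds.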
 
Awesome, and awesome case. Are those Supermicro bays? 2 SATA ports each? :)

Actually they are from Athena (sold by other companies too) and they hold 5 SATA/SAS drives each, and I have an Areca 1280ML, which has multilane cables, so that is why the cabling looks as good as it does!!! I have 4 drives in each backplane hooked to the Areca, and the remaining drive is going to the mobo; so 24 drives on the Areca, and 6 on the mobo.....
 
Yakyb from AVSforum said:
Definitely subscribing to this very nice build thread.

This is what I'm slowly edging towards (at a slow and steady rate).

How much did the server set you back (all included: case, HDDs, controllers, mobo, CPU, etc. etc.)?

Can't wait for the 1TB drives to drop below £100.

Good question; I had not yet totaled it. Well, here goes:

Case and backplanes (used, from Ockie on [h]ardforum): $825
Asus P5M2 SAS motherboard (new, Newegg): $350
Intel Core 2 Quad Q6600 (new, ZZF): $250
8GB of Kingston 2GB ECC modules (new, Newegg): $200
PC Power & Cooling 750W power supply (new, PCP&C): $210
Areca 1280ML 24-port RAID card (new, Newegg): $1092
2GB ECC Kingston RAM for the 1280ML (new, Kingston): $62
Areca 6120 battery backup module (new, eBay): $86
(2) Seagate 80GB 7200.9s (for the OS) (new): 2 x $40 = $80
(24) Seagate 7200.10s, 11s, and ESs: 24 x $200 = $4800
Belkin 1500VA UPS: $150

Grand total: $8105

I think that I am fairly close for the most part; some parts I have had for a long time, hence the higher-than-current pricing....
I think with misc stuff I am at roughly $10k... not bad for 18TB!!!
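For anyone checking the capacity math (RAID 6 spends two drives' worth of space on parity, so usable space is (N - 2) x drive size):

    raw:  24 x 750GB = 18TB
    e.g. all 24 drives in one RAID 6 array:  (24 - 2) x 750GB = 16.5TB usable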
 
Here is a benchmark of my RAID 6 array with 13 drives once the OCE finished this morning; this is HDTach in 32MB mode:
raid6with13normalmode32MB.jpg
 
For that price you could have gotten a desk. :p

That looks pretty good for RAID-6. :)
 
Did not want any readers to think that I forgot about my project; I have just been waiting on OCEs.... I did buy a 24-port web-managed 3Com switch from Ockie on [h] and got it mounted and trunked to the server today:
Picture002-7.jpg
 
I need to add this, as I forgot to do it before: I know RAID 6 is slower than RAID 5, but those are some amazing numbers. I didn't think you could go that fast with it. Then again, I've never done a ton of research into the matter either.
 
Well, I have been running OCEs till I got 22 drives into my RAID 6 array (leaving 2 drives as hot spares).

Here are the results along the way; the HDTach screenshots are in 32MB test mode....

15 drives:
15drivesinr632MBnormalstatus8Pri.jpg


16 drives:
16drivesinr632MBnormalstatus80Pri.jpg


17 drives:
17drivesinr632MBnormalstatus80Pri.jpg


18 drives:
18drivesinr632MBnormalstatus80Pri.jpg


20 drives:
20drivesinr632MBnormalstatus80Pri.jpg


21 drives:
21drivesinr632MBnormalstatus80Pri.jpg


22 drives:
22drivesinr632MBnormalstatus80Pri.jpg


22 drives in 8MB test mode:
22drivesinr68MBnormalstatus80Pri.jpg


22 drives under HDTune:
22drivesinr632MBnormalstatus80PriHD.jpg
 
What kind of issues did you have with the 1280ML and the fit of the 3051B SATA bays? I'm not quite sure I understood what you wrote above.

This is the exact same setup I'm thinking about, but worried that the ML will block the ports due to its length, so more photos and/or explanation would be appreciated =)
 
The 1280ML comes right up to the middle-left drive backplane, and with right-angle-connector ML cables, I was able to easily get 4 out of 5 drives of that backplane hooked up. However, the drive bay to the right was too tight; if I had the proper right-angle single SATA cable/adapter, I could get it to work 100%.... I guess to my mind, 29 hot-swap drives is more than I will use; I am planning to stay between 24-26 drives. Any more and power usage gets to be a lot, and at 24 drives in RAID, I will just upgrade when the array gets full :)
 
Okay, thanks... guess I'll do some preemptive SATA cable shopping =)

One more question, noise level of the backplanes - since you have a server room I doubt you care, but... are they _LOUD_?
 
With the backplane fan speed set on low, my server is very quiet, and even on high it is not bad at all. The loudest thing in my server room at the moment is that 3Com switch, which totally drowns out any other noises....
 
Yeah, those little 3coms are screamers :)

Galaxy drowns mine out no problem, lol.
 