Project: Galaxy 3.0

FedEx shows delivery on Thursday (tomorrow), so the update will come then :)
 
Ockie said:
That's why I hate enthusiast PSUs; their actual output is so damn low. I'm expecting this 1KW to give me nothing less than 1KW of output. It's interesting because when you deal with server-grade PSUs they don't talk about peak power; peak power is just like a perk or something :D

That 680 of yours did have some great numbers too... but I guess it's just like riced out cars... all the stickers, all the show... no go :p
As others have said, it is a true 1kW PSU. It should have an extremely long life; I don't see anything on the horizon coming near to making it obsolete. Talking about server PSUs and their true ratings is one of the reasons I recommended Zippy. They are super-solid PSUs designed for a server market where stability is #1.

Tim
 
Leaving in 15 mins... en route to FedEx to go get my baby!
 
yay! she came


A really bad picture that doesn't do it justice :(


case pics!







Alright, time for a PSU review:

This PSU is big, powerful, and looks sexy (deep black).

Well, there is a downfall that disappointed me A LOT. First off, this is the first high-end PSU I've seen with so FEW connectors; I had to use splitters to make everything work out! For such a high-grade power supply, it blew my mind that they only included 3 runs of Molex connectors (two of them have 3 Molex ends, and one has two Molex ends and a floppy connector). And as a surprise, which blows my mind considering that most people using a PSU like this would obviously be using a backplane of some sort... they've included 6 freakin' SATA connectors! WTF... I've yet to see a high-end workstation or server run power straight to the drives; usually you have either A: a bay cooling unit, or B: a hot-swap unit.

Anyways, to say the least, I was disappointed that I actually had to use Y-splitters and that my XConnect had more plugs than this one does. It does have two PCI-E cables for video cards, which is nice, but they should have made those more dual-purpose and modular like OCZ did with their PowerStream PSUs... that would let me draw more power without having to use so many splitters.

One other very annoying thing is that their cables pull out of their plugs really easily, enough to prevent your machine from starting. So I had to use a 24-pin and an 8-pin PSU extender to prevent the cables from pulling out under their own weight! Considering the enormous cost and the supposed quality of this PSU, they should have at least made it more rugged for server environments.

Now, there is one really nice thing that they did do, and that's sleeving the cables... it's top notch. However, they used very thick shrink wrap on the ends near the Molex connectors, making them very hard to bend (you can see in the picture that one refuses to bend).


Now for the nice things:

Looks good
Very powerful
Quiet


So right now I'm contemplating ordering some cabling and Molex ends... and redoing the entire PSU's wiring to my liking. My only gripe about that is that it would destroy the resale value of this PSU if I do wish to remove it later down the road. I would obviously leave the SLI ends and the SATA ends because I have them neatly hidden away.


Overall, it's a 50/50 situation; the price, combined with the lack of extra ends and cables that pull out really easily, pulled it down for me.

It's a good PSU; I wish it were cheaper to make up for the other problems. I never had this problem with a cheap Ultra or a PowerStream... hell, I've stretched Codegen PSUs without their cables pulling out on me.
 
Nice-looking rig when all pieced together :cool:

Given what you said about the PSU, and since IIRC PCP&C really values customer service, I would personally send it back and ask for another. There is no reason that you should get loose connectors and short cables for $500; it is just wrong :mad: Compare this to buying a car. There's, let's say, a Volvo (built strong as hell) and a Porsche, and you are merely looking for a really strong, solid car. Would you be pleased if half the electrical connections fell out on the Porsche? Or how about paying so much extra but not really getting anything more than design and a hood ornament? I'm starting to get restless and sidetracked, so I'll stop here before I go on and on.

Tim
 
Don't know why you would have loose connectors, I've owned 2 PCP&C units and never had anything like that.
As far as the number of connectors, I'm surprised you didn't check the diagram of the harness on their website.
I noticed they didn't sleeve the very ends of some of the wires - that's kinda disappointing since they did it on my 510W.
 
^I think that's because he used extensions and splitters?

Anyway, I would contact PC P&C and talk to them about it, I'm sure they can try to solve those problems. At that price I would expect no less.
 
EnderW said:
Don't know why you would have loose connectors, I've owned 2 PCP&C units and never had anything like that.
As far as the number of connectors, I'm surprised you didn't check the diagram of the harness on their website.
I noticed they didn't sleeve the very ends of some of the wires - that's kinda disappointing since they did it on my 510W.


Pull down on your 24-pin ATX cable and you will see some of the pins pull out just enough to prevent the machine from starting.

I didn't think I'd need to check the diagram for the wiring because when you pay $550 you expect an army of Molex connectors. Hell, my 550W PSUs have almost enough for every drive. :eek: Anyways, I was contemplating doing a custom wiring job and hacking off the PCP&C cables to clean it up a bit more.
 
trust_no1 said:
^I think that's because he used extensions and splitters?

Anyway, I would contact PC P&C and talk to them about it, I'm sure they can try to solve those problems. At that price I would expect no less.


I had to use extensions to reduce the pressure on the ATX cables.

I thought their sleeving job was fine; I didn't have too much of a gripe about that, I just wish it were a bit more workable in terms of flexibility :)



Anyways guys, I'll come up with something to make it sexeh :cool:


Also, I'm going to start on the little file server now, I just got the board and procs in for that :D
 
Ockie said:
Pull down on your 24-pin ATX cable and you will see some of the pins pull out just enough to prevent the machine from starting.
Mine seems pretty solid. :confused:
 
mashie said:
It is an enthusiast PSU designed and aimed for highly overclocked SLI/Crossfire systems, not for servers. Most people with need for 1000W+ in a server should use a 2+1 redundant PSU anyways.


My Xconnect had more ends.....

My Antec has more ends....

My PCMCIS has more ends...


See where I'm going here?
 
mashie said:
Still I don't consider any of them anything but enthusiast grade hardware. And as such I fully understand why they went for 6 SATA plugs in the PCP&C.

Right but,

What I'm trying to say is that it wouldn't have hurt them to add more connectors. For $550 you'd expect the "extra mile," so to speak.


Anyways, I'm very happy with the way it turned out, so I'm only complaining about minor things that are worth noting. It's running strong and it's fast; that's all I wanted :D



My other server's case will be in, and then I can start with the pictures and the next server project. This one is more of a media server geared towards storing media content for my network DVD players :) It's a dual-core machine with a lot of goodies. This machine's capacity will be fairly high, but all I'm wanting out of it is 1.5TB.
 
So that 1kW beast puts out 1000 watts in normal usage and 1200 watts peak.
What is the draw from your wall socket?

Also, how does it feel to own the power draw of 10 X 100 watt light bulbs inside a computer?
 
Rewire and resleeve it yourself. Get some SATA connectors and wire and make everything the exact length that you need. Another choice would be to measure up what connectors you need at what lengths and get PCP&C to make one with exactly what you need for a little extra money.
 
Majin said:
So that 1kW beast puts out 1000 watts in normal usage and 1200 watts peak.
What is the draw from your wall socket?

Also, how does it feel to own the power draw of 10 X 100 watt light bulbs inside a computer?



On the previous machine it was 7.2 amps of draw from the wall; this one should be a tad higher :D

Feels great! :D
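For anyone curious, the wall-draw math is simple enough to sketch. The ~120V line voltage below is my assumption (US mains), not something stated in the thread, and amps × volts ignores power factor, so it's really VA:

```python
# Back-of-envelope wall draw: amps at the socket times line voltage.
# Assumes ~120V US mains (assumption) and ignores power factor.
MAINS_VOLTS = 120  # assumed, not measured

def wall_watts(amps, volts=MAINS_VOLTS):
    """Approximate power drawn from the wall, in watts."""
    return amps * volts

print(wall_watts(7.2))  # 864.0 -> about 864W at the wall for the quoted 7.2A
```

That's also why a 1kW rating doesn't mean 1kW of draw: the supply only pulls what the components ask for, plus conversion losses.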
 
Majin said:
So that 1kW beast puts out 1000 watts in normal usage and 1200 watts peak.
What is the draw from your wall socket?

Also, how does it feel to own the power draw of 10 X 100 watt light bulbs inside a computer?

You know it doesn't draw 1kW unless the components actually draw that much, right? You don't just plug it in and it automatically draws 1000 watts lol :p
 
Devistater said:
I'm so tempted to try and get a 150GB Raptor. The performance on those is nothing short of amazing, a huge leap from the 74GB series, which was itself a leap from the 36GB series.

The thought of 21 raptors in RAID5 (for about 3.1TB of space) is blowing my mind. But with that sort of speed, the PCI bus just wouldn't be able to handle it. Even a PCIe 1x slot wouldn't be enough, that is only 150MB/s... One raptor can peak at 88MB/s (not even considering cache), and 21 of them in RAID... Even PCIe 16x might not be enough, if they even make PCIe 16x RAID controllers. You'd need to use PCI-X 2.0, which does 3.4GB/s, enough for even 21 raptors.

It probably wouldn't matter anyhow. In order to get that many drives on a single RAID card, you'd need to use port replicators, which means 4 or 5 Raptors per SATA channel (also 150MB/s).

I guess you'd have to use software RAID 5 if you wanted them all in one array, and a motherboard with multiple PCI-X slots.
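Putting rough numbers on the reasoning above, using only the per-drive figure already quoted (88MB/s) plus nominal theoretical bus bandwidths; real-world throughput would be lower across the board:

```python
# Aggregate sequential throughput of a hypothetical 21-Raptor array versus
# nominal bus bandwidths. All figures are theoretical peaks in MB/s.
drives = 21
per_drive_mb_s = 88                   # quoted Raptor peak, cache not counted
aggregate = drives * per_drive_mb_s   # 1848 MB/s total

bus_bandwidth_mb_s = {
    "PCI 32-bit/33MHz": 133,
    "PCIe 1.0 x1": 250,            # per direction
    "PCIe 1.0 x8": 2000,           # 8 lanes * 250 MB/s
    "SATA 1.5Gb/s channel": 150,   # why port multipliers become the bottleneck
}

for bus, bw in bus_bandwidth_mb_s.items():
    verdict = "keeps up" if bw >= aggregate else "bottleneck"
    print(f"{bus}: {bw} MB/s -> {verdict} against {aggregate} MB/s of disks")
```

On these nominal numbers, an eight-lane PCIe slot would just about cover it, which lines up with the 2GB/s figure given later in the thread.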
 
Ockie,
I have heard nothing but good things about PC P & C. I would call them up and see if you can trade your standard 1kW PSU for a 'custom-wired' one that has all the connectors and cable lengths that you need.
 
Guspaz said:
...a PCIe 1x slot wouldn't be enough, that is only 150MB/s... You'd need to use PCI-X 2.0, which does 3.4GB/s, enough for even 21 raptors.

You sure about that, Guspaz? I am not about to check bus specs just for this, but 150MB/s sounds too little. Could it be 150 up and 150 down for a 'marketing' total of 300? :confused:
 
Guspaz said:
The thought of 21 raptors in RAID5 (for about 3.1TB of space) is blowing my mind. But with that sort of speed, the PCI bus just wouldn't be able to handle it. Even a PCIe 1x slot wouldn't be enough, that is only 150MB/s... One raptor can peak at 88MB/s (not even considering cache), and 21 of them in RAID... Even PCIe 16x might not be enough, if they even make PCIe 16x RAID controllers. You'd need to use PCI-X 2.0, which does 3.4GB/s, enough for even 21 raptors.
Where'd you come up with this? He has 20 500GB drives, not 20 Raptors, but whatever, let's roll with it. Areca makes a 24-port PCI Express x8 card. x1 is 250MB/s both ways (theoretically), so x8 is 2GB/s. PCI-X goes to 266MHz/64-bit at max IIRC, which is about 16Gbit/s, or 2GB/s. There's a reason that new buses come out: they're faster ;) x16 is twice as fast, of course.

Now, as to transfer rates on such a setup: you might get close to 2GB/s (88MB/s * 20 = 1760MB/s) reading from disk, but writes that are less than a stripe wide are just going to be painful - you go to a 2r/2w scenario.
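The "2r/2w scenario" mentioned above is the classic RAID-5 read-modify-write penalty. A minimal sketch of the accounting (a simplified model that ignores controller caching and write coalescing):

```python
# RAID-5 write cost accounting (simplified; ignores caching/coalescing).
# A small (sub-stripe) write must read the old data block and old parity,
# then write new data and new parity: 2 reads + 2 writes per logical write.
def small_write_ios(logical_writes):
    """Physical I/Os needed for sub-stripe writes on RAID 5."""
    return logical_writes * 4

# A full-stripe write computes parity from the new data alone:
# one write per data disk plus one parity write, no reads at all.
def full_stripe_ios(data_disks):
    """Physical writes for one full-stripe write."""
    return data_disks + 1

print(small_write_ios(100))  # 400 physical I/Os for 100 small writes
print(full_stripe_ios(20))   # 21 physical writes for a 20+1-disk stripe
```

That 4x multiplier is why small random writes hurt so much more than big sequential ones on parity RAID.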

 
He wasn't saying that he had 21 Raptors; he was just fantasizing about the speed that you would get from that.
 
CmaN3 said:
He wasnt saying that he had 21 raptors, he was just fantasizing about the speed that you would get from that.


What Speed?
Speed doesn't go up by adding more, 10k rpm is 10k rpm.
All you get is more storage.
 
unhappy_mage said:
Where'd you come up with this? He has 20 500GB drives, not 20 Raptors, but whatever, let's roll with it. Areca makes a 24-port PCI Express x8 card. x1 is 250MB/s both ways (theoretically), so x8 is 2GB/s. PCI-X goes to 266MHz/64-bit at max IIRC, which is about 16Gbit/s, or 2GB/s. There's a reason that new buses come out: they're faster ;) x16 is twice as fast, of course.

Now, as to transfer rates on such a setup: you might get close to 2GB/s (88MB/s * 20 = 1760MB/s) reading from disk, but writes that are less than a stripe wide are just going to be painful - you go to a 2r/2w scenario.




Correct, and if you use more capable controllers, they will let you create one array spanning multiple physical controllers for one massive partition :eek:
 
Ozone77 said:
Ockie,
I have heard nothing but good things about PC P & C. I would call them up and see if you can trade your standard 1kW PSU for a 'custom-wired' one that has all the connectors and cable lengths that you need.



I guess I'm just over-expecting :) However, I'm not too worried about it; the initial sticker shock and the surprise were kind of a downer, but I'm good to go! :p


Oh and let me post pictures of the new little file server :D
 
Alrighty, this could potentially be another Galaxy, but it's probably something more like a Galaxy 1.8.

Basically, this is a pretty straightforward machine, nothing extra fancy other than a dual-core CPU.


Cooler Master Centurion 5
Antec 400W PSU
WD 10k RPM SATA Raptor 36.4GB (System drive)
Syba 2 port SATA Controller
Syba 4 port SATA Controller
Sony 16X DL DVDRW
Intel D945PSN Motherboard with Gigabit, SATA2, etc etc
XFX 6200TC PCIE
Intel P4 D820 dual-core, 2.8GHz per core
Retail heatsink
Two I-Star 3-in-2 SATA hot-swap units (still need to be ordered)
3 x 320GB drives
2 x 250GB drives
2 x 256MB PC2-5400 DDR2 memory


This machine is just waiting for its memory, and then she will be done. She has 1496.4GB of storage, just a hairline under 1.5TB. It's already full as we speak, so more drives will have to be ordered (I need the two hot-swap units first before I can add any more drives).
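The 1496.4GB figure checks out if you just total the drive list (decimal gigabytes, as the drive makers count them):

```python
# Summing the drive list above, in manufacturer (decimal) GB.
drives_gb = [36.4] + 3 * [320] + 2 * [250]   # Raptor system drive + data drives
total_gb = round(sum(drives_gb), 1)          # round to dodge float noise
print(total_gb)  # 1496.4 -> a hairline under 1.5TB, as stated
```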

It's my media server for my network DVD players, so I can be lazy and watch a movie from any room in the house without having to carry it with me :)


Without further ado, here is a crappy picture :)




Reason for building this one specifically? I was bored and had some parts/drives lying around that I wanted to use.
 
Cool, I just built a computer with a Centurion 5; mine's the blue one. It's a really nice case.
 
Well, MiniGalaxy is done. I had some initial problems (running a Syba 2-port and a 4-port SATA controller together gave issues, since they used different drivers but the system saw them as being the same). Anywhoo, it runs perfectly now and it's doing great... it's out of space, but that's how it always is :)
 
Galaxy 3.0 seems like an awesome setup, but I was wondering about the setup in the picture of Neptune. What rack cabinet is it? (Looks a lot like a Chieftec 4U or something similar.) And also, the rack cabinet below Neptune, is it a SCSI fiber backplane?

Also, what are those drive cabinets on Neptune called? I was kind of looking at a similar setup, but instead of Supermicro's drive cages, I was looking into the Chenbro SK-335; same thing, just a different design.
 
kpolberg said:
Galaxy 3.0 seems like an awesome setup, but I was wondering about the setup in the picture of Neptune. What rack cabinet is it? (Looks a lot like a Chieftec 4U or something similar.) And also, the rack cabinet below Neptune, is it a SCSI fiber backplane?

Also, what are those drive cabinets on Neptune called? I was kind of looking at a similar setup, but instead of Supermicro's drive cages, I was looking into the Chenbro SK-335; same thing, just a different design.


Neptune uses a Norco case with a SATA backplane that's included. I think it's a D600S, IIRC.

The drive subsystem below it is a Norco DS-610 IDE FireWire storage system. It lets you use IDE drives via FireWire... great for just straight-up bulk storage.
 
How does the Stacker 801 compare to the original, in your opinion? Did you tear apart Galaxy 2.x to build 3.0, or did you leave it together?

If you pulled it apart, why did you decide to get the 801 instead of just updating the PSUs and running duals in the original?

So what's your total storage capacity at your place?
 








Well there you have it. Galaxy 3.0, MiniGalaxy, and Galaxy 1.0
 
GLSauron said:
How does the Stacker 801 compare to the original, in your opinion? Did you tear apart Galaxy 2.x to build 3.0, or did you leave it together?

If you pulled it apart, why did you decide to get the 801 instead of just updating the PSUs and running duals in the original?

So what's your total storage capacity at your place?


The 810 is great. Its build is just so much improved: stronger, with a steel top as opposed to aluminum. That's a big deal to me because, with the weight, the drive towers punch into the soft aluminum, which makes it look a bit weaker than it really is. Anyways, the 810 is IMO superior to the 101.

I have so much storage I don't even know :eek: I'll have to count it some day; I just kept adding drives and machines whenever I needed more. I'm all good with machines now since I've got some room to expand :D
 
Ockie said:
The 810 is great. Its build is just so much improved: stronger, with a steel top as opposed to aluminum. That's a big deal to me because, with the weight, the drive towers punch into the soft aluminum, which makes it look a bit weaker than it really is. Anyways, the 810 is IMO superior to the 101.

I have so much storage I don't even know :eek: I'll have to count it some day; I just kept adding drives and machines whenever I needed more. I'm all good with machines now since I've got some room to expand :D

Interesting. I don't know about a Stacker 101, but my T01 has a steel top and bottom also, and you can mount the PSU at the top or the bottom... so it was superior to the 810 for me, since I went with liquid cooling.
 