Project: Plutonium 3

I thought about going with WHS, but I don't like the idea of losing half my storage capacity. I have an HP server right now with 3x 1TB drives and a 500GB drive, but I am going to 8TB of drives in RAID 5, so I'll have ~7TB of capacity.
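For anyone checking that capacity math: RAID 5 gives up one drive's worth of space to parity, so 8x 1TB comes out to ~7TB usable. A quick sketch (the 8-drive count is my reading of the ~7TB figure; real arrays lose a bit more to metadata):

```python
# Rough usable-capacity math for common RAID levels (a sketch).
# Assumes equal-size drives; filesystem/metadata overhead ignored.

def usable_tb(num_drives, drive_tb, level):
    """Approximate usable capacity in TB for a given RAID level."""
    if level == "raid0":
        return num_drives * drive_tb        # striping, no redundancy
    if level == "raid1":
        return num_drives * drive_tb / 2    # mirroring halves capacity
    if level == "raid5":
        return (num_drives - 1) * drive_tb  # one drive's worth of parity
    if level == "raid6":
        return (num_drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(level)

print(usable_tb(8, 1, "raid5"))  # 8x 1TB in RAID 5 -> 7 TB usable
print(usable_tb(8, 1, "raid1"))  # mirroring the same drives -> 4 TB
```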
 

You can choose which files/folders you want redundancy on, so in many ways it's actually better.

However, if you want all 8TB protected from drive failures, then WHS isn't the answer if you want maximum usable capacity (i.e., not RAID 1).
 
Yeah, right now I want the maximum space while having redundancy. I want all of it protected, so WHS's solution is not the right one for me. I am keeping my HP server to back up my other computers, though; it should work out nicely.
 
My understanding is that even if a drive dies, you can read the rest without issue (minus the data that was on the aforementioned drive), replace it with a new drive, and you are back in business.... Bear in mind that I do not trust RAID at all anymore; I will always have a complete backup.....
 

So you already have an alternative method of full backup? If that's the case, then WHS is for you.
 

Well, I would be fine with this if the data I am storing were easily replaceable. Eventually I want to get to ripping Blu-ray movies to my storage server to stream to my TVs, so I could just re-rip a movie; although a pain, it wouldn't be hard to replace. I have never tried RAID before, so hopefully this doesn't turn into a disaster. :p


I would just use my WHS box to back up all my computers and use the restore feature. This project is just pure storage, relying solely on RAID protection. Eventually, like after I graduate, I would like to get a rack with two storage servers mirroring each other, with WHS or its successor running the show, as RAID cards aren't cheap.
 
If there was a storm that surged the whole server... wouldn't that cost a lot of money to replace?
 

Homeowners insurance "should" cover that kind of thing... I worry way more about the data; I can put a price on the server, but I cannot put one on the data....
 
And another thing... looking at your sig... your Sage server... the 4 ATI 550Pros: how much did each one cost?
 
nitrobass24, thanks for the answer; it helps clear up some confusion I had.
I'm thinking since I already have a complete backup for the most important stuff, and the semi-important stuff is on a RAID setup, WHS is for me. I mean, if the Storage King himself uses it, it must be good. :D
Maybe it is time in the near future for me to set up a work log ;)


(snip).... Bear in mind that I do not trust RAID at all anymore; I will always have a complete backup.....

gjvrieze, why do you say that you do not trust RAID at all anymore?
 

Well, my best friend just lost his 4-drive RAID 5 Linux software array; one disk failed, it went to rebuild, and the array died... With a good controller like the 1280ML it is not so scary, especially if you run RAID 6 instead of 5... I have a full backup of all my data on simple drives.....
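That rebuild death is a known RAID 5 failure mode, and the odds are worse than intuition suggests. A back-of-the-envelope sketch, assuming a typical consumer unrecoverable-read-error (URE) rating of one per 1e14 bits (a spec-sheet assumption, not the actual drives in that array):

```python
import math

# Rebuilding a 4x 1TB RAID 5 means reading every bit of the 3 surviving
# drives; a single unrecoverable read error (URE) during that read can
# kill the rebuild. Consumer drives are often rated ~1 URE per 1e14 bits.

URE_RATE = 1e-14             # errors per bit read (assumed consumer rating)
bits_to_read = 3 * 1e12 * 8  # 3 surviving 1TB drives, in bits

# P(at least one URE) = 1 - (1 - rate)^bits, computed stably
p_failure = -math.expm1(bits_to_read * math.log1p(-URE_RATE))
print(f"chance of hitting a URE during rebuild: ~{p_failure:.0%}")  # ~21%
```

RAID 6 survives that hit because the second parity set can reconstruct past a URE, which is the "not so scary" part above.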
 
For me, RAID is dead too, at least hardware RAID. Solaris + ZFS is the only RAID-like solution I believe in. Everything else is cron jobs and manual backups.
 

I had a failed expansion on a 1280ML before... this was after many, many successful expansions.

$hit happens :) But yes, with the latest drive compatibility issues on RAID controllers, things are not fun.
 
Homeowners insurance "should" cover that kind of thing... I worry way more about the data; I can put a price on the server, but I cannot put one on the data....

I'd think it's wishful thinking that insurance would cover this level of hardware.
 
[LYL]Homer said:
I'd think it's wishful thinking that insurance would cover this level of hardware.

It's not. I know my insurance covers $10,000 worth of computer hardware, and I had an additional $10,000 taken out just in case... they wouldn't let me make it higher, but the premium is nice and low; it was only a few bucks more.
 
I need to check on that one of these days, since I still live at home with my parents... I should have my dad check for me and see what the coverage looks like....
 
And another thing...looking at your sig....your sage server...the 4 ATi 550Pro's how much does it cost for each one?

I bought 3 on gjvrieze's suggestion. Check eBay for OEM cards; I got all 3 for $41 shipped (so under $14 each).
 
thanks for the answer gjvrieze

EDIT:
I'm sorry if I'm asking too many questions, but I have a question about the network configuration.
I will be transferring lots of stuff over the network and working with large files. What will I need, hardware-wise, to optimize my network? I do not want to spend super crazy amounts of money on it, lol. If I get a mobo with onboard gigabit NICs, will that be enough, or should I get a dedicated PCI-E or PCI-X card instead? Will I benefit from teaming 2 ports if the switch supports it?
thanks in advance for any answers
 
I would say: what do you define "super crazy amounts of money" as? From what I'm told, consumer onboard NICs generally will "connect" at gigabit but don't *really* get up to gigabit speeds, though server motherboards tend to do better in this area. If you're really looking for gigabit speeds, you may want to consider getting a dedicated card.

Even if you team two ports and your switch supports it, are you going to be moving *that* much data? For instance, pretend you have two teamed ports on the server, but only one client pulling at gigabit speed... well, the teaming isn't really helping you out there. See my point?
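A sketch of why teaming rarely helps a single client: aggregation schemes (e.g. LACP) hash each flow onto one member link, so one client-to-server transfer rides one gigabit link no matter how many links are teamed. The hash below is illustrative, not any real switch's algorithm:

```python
# Illustrative per-flow link selection, as link aggregation does it.
# Real switches hash MAC/IP/port tuples; Python's hash() stands in here.

def pick_link(src: str, dst: str, num_links: int) -> int:
    """Map a flow (src, dst) deterministically onto one member link."""
    return hash((src, dst)) % num_links

# Two teamed links, but a single client flow always lands on the same
# link, so that one transfer caps out at one link's speed.
link = pick_link("client-a", "server", 2)
print(f"flow pinned to link {link}; the other link idles for this transfer")
```

Teaming pays off with many simultaneous clients, since different flows hash to different links.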
 

Yep. Also keep in mind a good managed gig switch sets you back well over $200; a decent unmanaged gig switch (no teaming) sets you back less than $100.

Teaming also requires two teaming-compatible NICs, thus even more cost. So my suggestion would be to go with a good server NIC (or even a Killer NIC if you prefer the gaming branding) and get a good gigabit switch; you will get pretty close to achieving true gigabit network speeds.
 

I get 99% of gigabit just using the single connection from my 3Com switch to the primary NIC of Plutonium 4. When copying a few things at once, it will stay peaked for however long the data transfer lasts....

Ockie answered the second part of your question very well...
 

...you're also running a server motherboard ;)
 
The reason some motherboards, particularly desktop ones, get slower speeds on the onboard NICs is that the NIC hangs off the PCI bus... but on nicer models and server boards it typically uses the PCI-E bus, which has higher bandwidth.
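Rough numbers behind the bus point (classic spec figures; and, as others note, the NIC chipset matters too):

```python
# Why a NIC on the shared PCI bus can struggle at gigabit: classic
# 32-bit/33MHz PCI tops out around 133 MB/s *shared* by every device
# on the bus, while even a single PCIe 1.0 lane gives ~250 MB/s each
# way to the NIC alone.

GIGABIT_MBPS = 1000 / 8    # 1 Gb/s = 125 MB/s payload ceiling
PCI_32_33_MBPS = 133       # shared across all PCI devices
PCIE_1_X1_MBPS = 250       # per lane, per direction, dedicated

print(f"gigabit needs  ~{GIGABIT_MBPS:.0f} MB/s sustained")
print(f"PCI bus offers ~{PCI_32_33_MBPS} MB/s shared (barely enough)")
print(f"PCIe x1 offers ~{PCIE_1_X1_MBPS} MB/s dedicated")
```

The PCI figure barely clears the gigabit requirement even before disks and other devices take their share of the bus.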
 

Well, I describe "super crazy amounts of money" as $2000+ for a switch and $500 for a NIC.
As far as how much data I move around on my current setup: I cap TV and movies (6-10GB per file) as a hobby, and I also stream those files to my HTPC. I also do video editing and shoot raw images (sometimes 40MB per image). After a shoot, when you start editing those files, moving them around, and backing them up, it would be nice to upgrade, because my situation now is kinda pitiful, imo.


Well, I was looking at the HP ProCurve line of switches. Those seem to be good in the $200-300 range; if there is a better switch out there, please let me know :) I was also looking at the Intel NICs, in PCI-E and PCI-X flavors.


How can you tell what bus the onboard NICs use?


thanks again guys for your answers, very helpful!
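To put those 6-10GB captures in perspective, best-case wire-speed transfer times (a sketch; protocol overhead and disk speed make real transfers slower):

```python
# Best-case transfer time: size in gigabytes -> seconds at a given
# link rate in megabits per second. Wire speed only; no overhead.

def transfer_seconds(gigabytes: float, link_mbps: float) -> float:
    return gigabytes * 8000 / link_mbps  # 1 GB = 8000 megabits

for size_gb in (6, 10):
    slow = transfer_seconds(size_gb, 100)   # Fast Ethernet
    fast = transfer_seconds(size_gb, 1000)  # gigabit
    print(f"{size_gb} GB: {slow / 60:.0f} min at 100Mb vs {fast:.0f} s at gigabit")
```

A 10GB capture drops from roughly 13 minutes at 100Mb to about 80 seconds at gigabit wire speed, which is why the switch upgrade pays off for this workload.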
 

It's usually in the documentation, or Google it.

I've never used HP switches... I like 3Com, but right now for my house I'm using a Dell PowerConnect 2716. It's a web-managed gig switch and it does everything I would want it to for my house: teaming, VLANs, mirroring, etc. Most people don't need a $2000 switch for their home; on my Dell switch the switching capacity is something like 78Gbps, and I'm not going to max that out with 8 boxes, even if they all had fast RAID arrays and 4Gb teamed NICs. Just not going to happen.
 
I use HP ProCurve switches at home. I have the cheaper 1800-24G as well as some more expensive models, but the 1800-24G should be more than enough. It's web managed and does VLANs, port mirroring, and teaming.
 
Well, I describe "super crazy amounts of money" as $2000+ for a switch and $500 for a NIC.
As far as how much data I move around on my current setup: I cap TV and movies (6-10GB per file) as a hobby, and I also stream those files to my HTPC. I also do video editing and shoot raw images (sometimes 40MB per image). After a shoot, when you start editing those files, moving them around, and backing them up, it would be nice to upgrade, because my situation now is kinda pitiful, imo.

That's as good a reason to have a gigabit switch as any. As I said before, though, unless you're doing all that at once (e.g. streaming to the HTPC *and* copying files to the server), *or* one of your boxes (such as your workstation) is also teamed, teaming may not really help you.

What're you running on now?
 
Well, I was looking at the HP ProCurve line of switches. Those seem to be good in the $200-300 range; if there is a better switch out there, please let me know :) I was also looking at the Intel NICs, in PCI-E and PCI-X flavors.

There is better out there, but HP is a pretty good brand. The others will cost you "super crazy amounts of money" ;) I'm a big fan of 3Com, Cisco, and HP products. I just bought an HP ProCurve for my new rack.


How can you tell what bus the onboard NICs use?

It's not the bus that's the issue, it's the NIC chipset. Some NICs share system resources and others have their own resources provided. For example, if you transfer with a cheap onboard NIC, you will see some CPU usage increase and perhaps only 400Mbit transfer speeds; transfer with a server-grade NIC and your CPU utilization will be minimal while you see closer to 800Mbit.

It's not a matter of PCI-E or PCI buses, because a lot of older platforms and even PCI NICs are very high performing... you can also get junk PCI-E cards, too.

My recommendation would be to look at an Intel PRO/1000 PT server NIC. Intel chipsets are typically good performers. I would stay away from Broadcom (although they do make good server-end chipsets), Realtek, or really anything else.
 
....What're you running on now?

Don't laugh, lol. I have a Linksys WRT54GS.

There is better out there, but HP is a pretty good brand. The others will cost you "super crazy amounts of money" ;) I'm a big fan of 3Com, Cisco, and HP products. I just bought an HP ProCurve for my new rack ..... My recommendation would be to look at an Intel PRO/1000 PT server NIC. Intel chipsets are typically good performers. I would stay away from Broadcom (although they do make good server-end chipsets), Realtek, or really anything else.
I think I will get an HP ProCurve then, unless I find a really good deal on a 3Com or Cisco switch. I would rather spend my money on hard drives ;)
Thanks for confirming that Intel makes good NICs. In all reality, would I need dual NICs, or would I be just fine with a single link?

thanks again for your help
 

Your router is fine (if it is a Linux model).

I have a 3Com 24-port managed switch (bought from Ockie a while back). Even with 1 port, my server is very fast; I do not even feel the need for bonding at home. It's sort of bragging rights :)
 
Hey gjvrieze, not sure if it's been posted already, but what program do you use to encode into x264, or whatever it is you use?

Also, do you cut out the ads or not?
 
Originally posted on AVS on 12-26-08....
benjamin.r on AVS said:
gjvrieze, any new updates on your build? I really enjoy this thread!
clueless_n00b on AVS said:
3 months and no reply! I wonder if he is ok? :confused:

Hey, been super busy:

(1) I decided to still run the 1280ML with the dual quad-core server. I am running 14x 1TB drives in RAID 5, and it is working out very well... I am now running SageTV on the file server, feeding the 12TB of data and TV from the HDHRs out to 5 clients/extenders. No more cable and no more analog, PERIOD!

(2) I just put up a 40ft tower with 3 antennas to feed the HDHRs:
[Photos: Picture005.jpg, Picture006.jpg]


I am grabbing two markets at the moment: La Crosse, WI, and my local market, Rochester/Austin, MN / Mason City, IA. I am hoping to grab some of the Twin Cities market after Feb 2009, but we will see... I was fighting winter getting the tower up this time of year, but I won, and the results are pretty good; I am getting channels that I could not touch on the roof with the same antennas.... I am 50ft in the air with the top-mounted antenna; the lower antenna is at about 43ft....

(3) I have always really liked PBS, and now I get three digital PBS stations, counting up to 10 PBS channels (counting subchannels) and 20 channels total. After Feb, I should be a bit higher, and some of the other markets will become easier to get....

(4) My RAID array is almost full, so I have been cleaning house; I have almost 1TB of raw TV recordings from back when I had cable, so I am editing and rendering those as quickly as I can stand. I am back to 400+GB free this morning, so my effort is paying off; no need for new hardware----YET!

(5) The parts from the 10-tuner server are going to my brother's machine, as a P4 with 1GB of RAM is not cutting it anymore. The analog tuners: I sold 2 thus far and have 2 left to sell... The 2 cable boxes went back to Charter, and I will prolly grab another HDHR soon; I am waiting for the Tech/Pro version to come out..... I am prolly going to need more antennas for the other markets, but I am going to worry about that after Feb so I can see what my results are with what I have. Plus I am sick of tower work on cold days, at least for now:)
 
Hey gjvrieze, not sure if it's been posted already, but what program do you use to encode into x264, or whatever it is you use?
AutoMKV

Also, do you cut out the ads or not?

Yes, I cut all the crap with VideoReDo.....
 
Well, I had a RAID array failure this past week. I forgot to plug the cable back into the 1280ML, so it did not alert me when the first drive of my 14-drive RAID 5 dropped; then another drive dropped and I was SOL'd.... Luckily, most of my data was backed up on the old 750GB drives from the past project, but I still lost some data that had not been synced into the backup....

New plan 1-15-09:

(20) 1TB 7200.11 drives in RAID 6, with 2 becoming hot spares, should leave me right around 15TB of usable space... I traded my boss a WD GP 1TB for a Seagate SD15, which matches my array, and I bought another drive on Newegg this past week, so I have 16 of the drives needed to get to my goal.... MOST IMPORTANT: the network cable NEVER gets unplugged from the 1280ML again, ever, period!

Part 2:
1.5-2TB drives for backup in another rackmount machine that gets backed up to once a month and is left unplugged from power/network the rest of the time....

Since I moved the Sage server onto Plutonium a few weeks ago, I am going to prolly use the hardware from the former Deca server for the backup server, it currently has 5 hot swap drive bays and I will prolly bump it up to 10 for all the backup needs....
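The ~15TB figure in the new plan roughly checks out. A quick sketch (assuming the stated 20 drives with 2 spares; actual usable space is a bit less after filesystem overhead):

```python
# 20x 1TB drives: 2 held back as hot spares, the other 18 in RAID 6,
# which spends two drives' worth of capacity on dual parity.

def raid6_usable_tb(total_drives: int, hot_spares: int, drive_tb: float = 1.0) -> float:
    active = total_drives - hot_spares
    return (active - 2) * drive_tb  # RAID 6 dual-parity overhead

tb = raid6_usable_tb(20, 2)
print(f"{tb:.0f} TB usable (decimal)")                    # 16 TB
print(f"{tb * 1e12 / 2**40:.1f} TiB as the OS shows it")  # ~14.6 TiB
```

The decimal-TB vs binary-TiB gap is most of the distance between the 16TB raw figure and the "right around 15TB" estimate.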
 

It seems you have a shitload of drives. Ever thought of getting another Norco + the old Sage server parts + old hard drives + WHS for a backup rig?
 

Yes, that may happen actually, I like the 4020 case a lot for the price....
 
I'd avoid the 1TB 7200.11 HDDs like the plague at the moment; lots of bad firmware issues.

Other than that, looking good :)
 

I would not buy them again, given the choice, but I hate the idea of selling off (16) 1TB drives [at a loss] and then turning around to buy more drives, where I am sure to get at least one or two DOAs..... The firmware bugs are not cool, but I think the issue may have gotten blown a little out of proportion; not that many people had drives with issues, just the version of firmware that was known to cause them.... Hopefully the 2TB drives will be better from the start.... (I am still a little nervous about the drives, but I do have a good backup plan.)

Other than that, looking good :)

Thanks:)
 