The [H]ard Forum Storage Showoff Thread - Post your 10TB+ systems

The most space-efficient way of storing HDDs possible:
0829081518.jpg

What drive cage is that?
 
@ cyr0n_k0r - Sweet setup. How big are the drives in the MD1000s and how big is the PE1950 alone?
 
Well, 2 of the arrays are on one PE1950, and 2 are on another.
3 of the 4 arrays have 500GB drives, but the newest one has 1TB drives.

One 1950 has 22.5TB; the other has 15TB.
 
Looks a lot like this:
$29 - 3-to-5 Mounting bracket

Yep, that's the one. Seems to be a somewhat small business that sells these, and when I was purchasing them they only had one in stock with no idea when they'd get more. I ended up getting the other two from someone on [H]. Looks like they're back in stock now. Great brackets for the money, though they are just simple metal rectangles with holes. Still, it beats $100 (minimum) hot-swap bays that are usually only 3-in-1 or 4-in-1. Of course, if a drive needs to be replaced, it's a major pain to get it out of this thing. Not only do you have to unscrew the drive - you have to take the whole bay out just to get at the drive's screws. I'm not looking forward to doing that when the inevitable drive failure occurs.
 
Is this any better? Sorry, it looked fine to me at 19x12 :p
Yes, that seems a little more manageable. When I come across very high-res photos online I just hit CTRL and "-" to bring things down to size; it takes 2 seconds and doesn't really slow me down. The resolution you have there is fantastic. Personally, what I do is host a 640x480 version of my photos and post those, but make the hyperlink go to the HD versions. That way, if someone is using a netbook or something it isn't hard to deal with, and the HD versions aren't downloaded unless clicked on, saving bandwidth for those dial-uppers still lurking out there (you know who you are!). Just my 2 cents.
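
If you want to script the thumbnail step, here's a minimal sketch using PIL/Pillow - filenames and URLs are just placeholders, not anyone's actual setup:
Code:
# Minimal sketch of the thumb-then-link approach using PIL/Pillow.
# Filenames/URLs are placeholders.
from PIL import Image

ORIGINAL = "server_full.jpg"   # hypothetical full-res photo
THUMB = "server_thumb.jpg"

im = Image.open(ORIGINAL)
im.thumbnail((640, 480))       # resizes in place, keeps aspect ratio
im.save(THUMB, quality=85)

# Then post the thumb and hyperlink it to the original, e.g. in BBCode:
# [url=http://example.com/server_full.jpg][img]http://example.com/server_thumb.jpg[/img][/url]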

Oh, and I like your 5-in-3 cage things. I wish they sold them here locally; I could have saved a bunch by getting those instead of the StarTech hot-swaps. The StarTech ones are nice, but mine rattle from time to time and the flat screws warp under the slightest pressure. I haven't posted my backup server here yet, so don't go looking for it - sort of a waste of time when it doesn't even hold a TB currently.

I own the business so it counts.

I see you own the business, but you are not using the drives yourself - they're the business's. Seems like a grey area, but I still think the server should be running on your home's private network. Otherwise, whoever owns houkouonchi's insane petabyte storage facility could technically come forward and post it as his own. I think things would just get out of hand.

Your rack, though, is very impressive! I wish I had an offsite backup business! Dell-branded SATA drives? Weird.

[Edit] Removed my reply to The Hunter's question... I misunderstood what you meant - I thought you had wasted area on the screen, when instead you wanted multiple windows up. Totally different.
 
Agreed. If your drives are being subsidized by your customers, then they are really your business' drives, rather than your own. Granted, if you are a sole proprietor, your business' assets are your own, but you get my point.
 
Not much more space when you toss out the 1TBs and replace them with 1.5TBs. :( That's why I go double or nothing :D

True, which is why I only bought 8. The 20x1TB in my main system are staying put, as I am only using about 10TB of the 18TB usable anyway... These drives are going in my 2U server, which is currently over 90% full (8x750GB in RAID5). I am upgrading it to 8x1.5TB RAID6 (so 9TB usable). The 8x750GB drives currently in my server are going to go in the system I will be using when I move to Japan on May 19th.
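
For anyone following the capacity math, a quick sketch (this assumes the 20x1TB array is also dual-parity, which matches the 18TB usable figure; marketing/decimal TB, filesystem overhead ignored):
Code:
# RAID5 loses 1 drive to parity, RAID6 loses 2.
def usable_tb(drives, size_tb, parity):
    return (drives - parity) * size_tb

print(usable_tb(8, 1.5, 2))    # 8x1.5TB RAID6 -> 9.0, the "9TB usable"
print(usable_tb(8, 0.75, 1))   # 8x750GB RAID5 -> 5.25
print(usable_tb(20, 1.0, 2))   # 20x1TB, dual parity -> 18.0 ("18TB usable")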
 
Just out of curiosity, what do you guys store in all this? For my own personal use, I don't see needing much more than 5TB, and that's with a lot of stuff, and in the next year or so, heh!
 
Been busy. If the list needs to be updated, send me a PM with a link to your post.
 
24.5TB Manufactured Total

1 x WD 500GB System drive
16 x Seagate 1.5TB

Norco 2040
ABIT IX38 Max
8GB RAM
Intel Core 2 Duo E8400 Wolfdale 3.0GHz 6MB L2 Cache
Gigabyte GeForce 6600GT Silent Pipe
Supermicro Add-on Card AOC-USAS-L8I
LSI, 8x SAS
CyberPower 1500VA UPS
OpenSolaris

Backup strategy: for storage I have 2 arrays of 8 drives each; in each array, 2 drives can fail (=RAID6).

pic1.png

http://zfstalk.com/images/hard/IMG_0172.JPG
http://zfstalk.com/images/hard/IMG_0173.JPG
http://zfstalk.com/images/hard/IMG_0174.JPG
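
In ZFS terms that layout is two raidz2 vdevs in one pool; creating such a pool looks something like this (pool and device names are purely illustrative):
Code:
# Hypothetical sketch: one pool from two 8-drive raidz2 vdevs, matching
# "2 arrays of 8 drives each; in each array, 2 drives can fail".
import subprocess

disks_a = ["c0t%dd0" % i for i in range(8)]   # illustrative device names
disks_b = ["c1t%dd0" % i for i in range(8)]

subprocess.run(["zpool", "create", "tank",
                "raidz2"] + disks_a +
               ["raidz2"] + disks_b,
               check=True)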
 
Hmm, not sure I understand what you mean by backup strategy? Your images show (by the massive difference in the amount of data stored) that the two 8TB volumes are not mirroring each other. Are you referring to RAID6 as backup? Or is this system a backup for your primary storage somewhere else?

{following not directly aimed at quoted post - more general}
It is a popular misconception that RAID 1/5/6/10 is backup, when actually it is drive redundancy. I have known people to lose a lot of data on RAID systems - not due to a drive failure but to controller/virus/user error - and then be puzzled when that data cannot be recovered. Backup should really be on a completely separate system, preferably in a different physical location. It does not really matter if it is just your pr0n collection - but if any of that data is for a client, or is data you may come to rely on at a later date, it really needs a proper backup and safe-storage solution (fire safe for external drives, or offsite - either shipped or rsynced, etc.)
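
To make that concrete, a minimal offsite-sync sketch - host and paths are made up, and rsync's -a/--delete flags are standard:
Code:
# Push the array to a second machine, ideally offsite.
# Note: --delete mirrors deletions too, so drop it (or keep dated
# snapshots) if protection against accidental deletes is the goal.
import subprocess

SRC = "/mnt/array/"                               # hypothetical source
DEST = "backup@offsite.example.com:/backups/array/"

subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)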
 
OK, I meant I have 2 arrays, each with 2-disk redundancy.
No backup strategy then.
I don't know of any viruses for OpenSolaris.
I protect against some user error with ACLs, making deletion not easy.
I trust SAS controller cards not to destroy drives when they break.

It is a backup for business files, though the working set is on a separate workstation. The rest is for a large database project whose data, if lost, can be re-acquired over a month.
 
OK, I meant I have 2 arrays, each with 2-disk redundancy.
No backup strategy then.
I don't know of any viruses for OpenSolaris.
There does not need to be one - the clients connecting to the array can have a virus and wipe/alter the data (although if the only client is OpenSolaris, then agreed).
I protect against some user error with ACLs, making deletion not easy.
LOL, it only takes the one! - although with a good policy these can be made to work.
I trust SAS controller cards not to destroy drives when they break.
I would not - having had EMC engineers scratching their heads for a weekend, they finally conceded and replaced a controller that was wiping drives in a SAN. Fortunately the VMs and data are replicated in triplicate across global datacenters, and it happened over a weekend, so no business data was lost.
It is a backup for business files, though the working set is on a separate workstation. The rest is for a large database project whose data, if lost, can be re-acquired over a month.
Hmm, at what cost? I know employees are a fixed cost, but unless it is an NPO, that month of downtime is an awfully big cost compared to the cost of proper backup - in this case it could be tape, or another array.

Anyway, getting off topic a little! - still a sweet setup!
 
The biggest flaw in data protection through replication is human stupidity.
 
Completely agree - this is where a regularly run shadow copy is your friend. It eats a bit of hard disk, but I cannot count the number of times it has resolved issues at the company I used to manage the servers for (100 users; on average 10-20 requests for 'old' versions of a file a week - with shadow copies and correct client setup they can self-manage, rather than having to recover from backups like I had to do 5 years ago).
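
Shadow copies are a Windows (VSS) feature, but on the Linux boxes in this thread you can get a similar self-service effect with hard-linked rsync snapshots - a rough sketch, paths made up:
Code:
# Dated, hard-linked rsync snapshots: unchanged files share disk via
# hard links, and users can browse /snapshots/<date> themselves.
import datetime, os, subprocess

SRC = "/srv/share/"              # hypothetical live data
SNAPDIR = "/snapshots"
LATEST = os.path.join(SNAPDIR, "latest")

target = os.path.join(SNAPDIR, datetime.date.today().isoformat())
cmd = ["rsync", "-a", SRC, target]
if os.path.islink(LATEST):
    cmd.insert(2, "--link-dest=" + LATEST)  # hard-link unchanged files
subprocess.run(cmd, check=True)

# Repoint "latest" at the snapshot just taken.
tmp = LATEST + ".tmp"
if os.path.lexists(tmp):
    os.remove(tmp)
os.symlink(target, tmp)
os.replace(tmp, LATEST)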
 
12TB

(4) CaliforniaPC 5-in-3 drive bays with Enermax UC-12EB fans
Coolermaster Stacker 810
Silverstone Olympia OP650
Gigabyte GA-M61P-S3
AMD Athlon X2 1.9GHz Brisbane core
Stock retail cooler
2GB Kingston DDR2-667
(2) Supermicro AOC-SAT2-MV8 SATA controllers
(8) 1TB WD Green WD10EADS
(8) assorted 500GB drives - some Maxtor, some WD IIRC.
(1) 80GB Seagate system drive, but I'm not counting it.
Gentoo Linux, amd64

9.6TB after formatting/RAID.

I have it directly connected to my desktop over a dedicated autonegotiated gigabit link and serving over NFSv4. Both desktop and server have dual NICs, so both are simultaneously connected to each other and to the internet. Each set of 8 drives is configured with software RAID5, and the two arrays are combined with basic LVM. The CalPC bays were purchased over the phone from California PC for $25 shipped per unit. If you don't like loud fans, make sure you have some quiet ones to swap in. Basically all inspiration was taken from Ockie. Upgrade path is to replace drives in chunks of 8 as needed.
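
For those wondering where the 9.6TB figure comes from: each RAID5 set gives up one drive to parity, and the rest is decimal "marketing" TB shrinking when counted in binary units. Rough math:
Code:
usable_tb = 7 * 1.0 + 7 * 0.5            # 10.5 TB across both RAID5 sets
usable_tib = usable_tb * 1e12 / 2**40    # -> about 9.55 binary TB
print(usable_tb, round(usable_tib, 2))   # close to the 9.6TB reported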

insidern5.png

hddmv6.png

frontkl6.png
 
What NFS client are you using? The RAID controllers look like PCI - are you maxing out the 133MB/s bus throughput by teaming the gigabit NICs, or are you only connecting at the theoretical max of 125MB/s?
 
I'm using the in-kernel v4 NFS client, nothing special. The controllers are just SATA controllers since I'm using software RAID, and they are indeed PCI-connected. I am maxing out single-NIC gigabit speeds (on reads, at least), so I am not much bothered that my PCI bus is nearly saturated. I do not team/bond the NICs unless I am doing a server-to-server mirror, but even then it's not really worth it because of the PCI bus saturation. I'm only really concerned about getting gigabit speeds, so meh.
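
For reference, the rough numbers behind that (theoretical peaks, ignoring protocol overhead):
Code:
pci = 33.33e6 * 4 / 1e6    # 32-bit/33MHz PCI: ~133 MB/s, shared bus-wide
gige = 1e9 / 8 / 1e6       # one gigabit NIC: 125 MB/s raw
print(pci, gige)           # a single saturated NIC nearly fills the bus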
 
Total storage: 194TB
Largest single storage: 58TB


Drive nodes:

#jp 25TB
Case: Codegen IPC-4U-500 + 1x SK33502
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA790GP-UD4H
CPU: Phenom II 940
RAM: 2x KVR800D2N5K2/4G
Controller Cards: Sil3114
Hard Drives: 5x 2TB + 5x 3TB
Operating System: debian

#ob 28TB
Case: Chieftec Bigtower BA-01B-B-SL-OP + 2x CoolerMaster STB-3T4-E3-GP
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA74GM-S2
CPU: 5050e
RAM: half of KVR800D2N5K2/4G
Controller Cards: 1x AOC-SAT2-MV8
Hard Drives: 14x 2TB
Operating System: gentoo

#dn 34TB
Case: XCase RM400B + 1x hdha170
PSU: Fortron 550W
Motherboard: GigaByte GA-MA74GM-S2
CPU: 4200+
RAM: half of KVR800D2N5K2/4G
Controller Cards: 2x Sil3114
Hard Drives: 8x 2TB + 6x 3TB
System drive: 8GB slow-ass USB 2.0 stick
Operating System: debian

#tt 37TB
Case: Chieftec Bigtower BA-01B-B-SL-OP + 3x CoolerMaster STB-3T4-E3-GP
PSU: Chieftec CFT-750-14CS
Motherboard: GigaByte GA-MA74GM-S2
CPU: 5050e
RAM: half of KVR800D2N5K2/4G
Controller Cards: 1x Sil3114 + 1x AOC-SAT2-MV8
Hard Drives: 17x 2TB + 1x 3TB
System drive: 8GB slow-ass USB 2.0 stick
Operating System: debian

Head nodes:
st 58TB
Case: Norco 4220
PSU: Antec 650W
Motherboard: GigaByte GA-H67M-UD2H-B3
CPU: i5-2500
RAM: KVR1333D3N9K2/8G
Controller Cards: 2x m1015
Hard Drives: 3x 2TB + 16x 3TB + 1x 4TB
System drive: SSDSA2CT040G3K5
Operating System: debian

sn 12TB
Case: Norco 2208
PSU: Zippy (P2G-6510P) 2U 510W
Motherboard: Supermicro X10SLA-F
CPU: E3-1220
RAM: 16GB
Controller Cards: 1x Sil3114
Hard Drives: 6x 2TB
System drive: Kingston 240G
Network: 2x EXPI9402PT

Code:
drives used:
2TB: HD203WI, WD20EARS, HD204UI, HDS723020BLA642, HDS722020ALA330
3TB: DT01ACA300, WD30EURS, ST3000DM001
4TB: HDS724040ALE640

changes:
replaced 1.5TB drives with 3TB
reduced the number of drive nodes due to the lower drive count
added a new head node with ESXi (move still in progress)
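
Quick check that the per-node figures add up to the headline numbers:
Code:
nodes = {"jp": 25, "ob": 28, "dn": 34, "tt": 37, "st": 58, "sn": 12}
print(sum(nodes.values()))   # 194 -> "Total storage: 194TB"
print(max(nodes.values()))   # 58  -> "Largest single storage: 58TB"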
 
I think I finally have a competitor. It's about damn time. I haven't upgraded or anything lately just because there has been no need to and no competition.

I will need to add up my systems aside from the Galaxy systems and see where we are at ;)
 
after he gets 29 x 2TB ... right Ockie? :D
basically doubling his space ... it's pretty much useless to try ... but for now ...
 
Once Seagate releases 2TB drives (they certainly are taking forever), I should be able to top everyone for most storage in a single system. I have to put those 24 x 5.25" bays to use! :D
 
Yeah, I'll stick with just filling the 20 bays in my norco... Once I've finished that I'll see about adding a second system.
 
Ooh, I think competing with Ockie will become an expensive proposition indeed. :D

after he gets 29 x 2TB ... right Ockie? :D
basically doubling his space ... it's pretty much useless to try ... but for now ...

It depends. Do you guys want me to calculate my business storage also? The SAN cabinets will really add up :D

I know I have over 50 at home, I just need to get into the mood and bust out the calculator. Right now I have ~30 extra drives with no systems to put them into... and I'm not sure if I really want to put these drives in cheap systems or if I want to get an array and bundle them. If I bundle them, I have large storage volumes that are easy to use; if I don't, I save a boatload of money, but then I have to manage several small systems that are not really worth the power consumption. Or, option B: simply stack and store those drives for when I get some other crazy idea.

Once I break down and upgrade this primary Norco, I would double my storage capacity and hit 100TB... damn, that has a ring to it. :cool:
 