New Storage Array

Not trying to be a Linux zealot here, but is there any reason why you are using Windows over Linux or a BSD for this machine?

Of course, I do not know if Linux has support for the RAID hardware you have; that could be an issue.

But for a file server Linux does a GREAT job; I have found Samba is better in some respects than a Windows SMB machine. More options for authentication is one.

Again, I do not know your full needs.
 
m1abram said:
Not trying to be a Linux zealot here, but is there any reason why you are using Windows over Linux or a BSD for this machine?

Of course, I do not know if Linux has support for the RAID hardware you have; that could be an issue.

But for a file server Linux does a GREAT job; I have found Samba is better in some respects than a Windows SMB machine. More options for authentication is one.

Again, I do not know your full needs.

Ahh man, I really wanted to use Linux on this beast.


Unfortunately Linux just isn't liking that 3ware 9500S-12.

Also, it seems both Linux and Windows have a single hard drive size limit of 2 terabytes. No matter what I have done in both OSes, I can't get either one to recognize more than 2.5 terabytes.


Tom
 
Holy shit, my man!! I've been thinking about doing something of the sort myself, but with Maxtor 300GB SATA drives :)


Good work and can't wait to see the rest of it.
 
It seems we have run into a major problem. For people who actually read the thread, you should have an idea already; for those who do not, here it is in a nutshell. You cannot currently have a storage volume greater than 2 TB. Microsoft says you can IF you do a dance and a prayer.
Linux can IF you use a new BETA file system. (If you do not actually want to keep your data, you are welcome to try this.) The new Xserves from Apple actually use this beta file system.
(If you want specifics on these items, ask Captboom; that is his department. I am the PR rep for Project Borg and had no sleep last night, so the big words hurt my head.)

The funny thing is the 3ware card will support up to 3 TB, and the server case will hold about that much. All the other hardware is ready to go or does not care. It is the software that seems to be the limiting factor.
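
For what it is worth, if I am following Captboom right, the 2 TB number is not arbitrary; it appears to fall straight out of 32-bit sector addressing (my math, so check it):

2^32 sectors x 512 bytes per sector = 2,199,023,255,552 bytes = 2 TiB

So anything that still addresses the disk with 32-bit block numbers (the classic MBR partition table, older drivers, and so on) simply cannot see past that line, no matter how big an array the card offers up.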

So this is where I ask for some help, and you all have been giving some great feedback so far.
Without pointing to a web document that says it SHOULD work or MIGHT work, we need some pointers to something we may not have tried. (Please be careful how you phrase your answers; we have been really frustrated over this, have lost lots of sleep, and have had way, way too much caffeine.) Captboom is the contact for this; please direct all questions towards him.

Now for you p0rn freaks, I will be posting some more pics later tonight.
 
I'm guessing you're going to have to carve up the drive space. :(

Even still it's pretty [H]ard to say you have two 1.5TB drive arrays no matter what anyone else says.
 
There are plenty of DFS alternatives out there. That many terabytes in a single array is a bad_idea^2 on x86 & ATA.
 
We had some goals with this project:

1.) Consolidate all of the spread-out storage into 2 high-capacity devices
Saves on power
2.) Increase our data redundancy so we didn't risk losing anything
A lot of the other storage systems were running software RAID with multiple PCI IDE
controllers ECCH
3.) Cause you all to have wood for 3 weeks looking at it



As it stands now, we had to back off from 3 terabytes down to 2 terabyte file volumes.
This leaves me with extra hard drives, which will probably sit on the shelf as hot spares.

After much investigation and a 3-hour support call with a 3ware support rep: no current operating system, with the exception of the Linux 2.6.8 kernel with Large Block Device support enabled (*read: beta), will support a single block device in excess of 2 terabytes.

Linux 2.6.8 with Large Block Device support and the XFS file system can do it, but as I said, it's still beta code.
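
For anyone who wants to poke at it, here is roughly the recipe we pieced together on the support call. Treat it as a sketch only: the device names are assumed, the exact parted syntax depends on your version, and we have not trusted real data to it yet.

# kernel must be built with Large Block Device support (CONFIG_LBD=y);
# where the config file lives depends on the distro
grep CONFIG_LBD /boot/config-2.6.8

# the 9500S presents the whole RAID 5 unit as a single disk, assumed here to be /dev/sda
parted /dev/sda
(parted) mklabel gpt          # GPT label instead of MBR, since MBR stops at 2 TB
(parted) mkpart primary 0 -1  # one partition across the whole array
(parted) quit

mkfs.xfs /dev/sda1            # needs xfsprogs installed
mount /dev/sda1 /storage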

Some private messages to me saying "go Xserve" fail to realize a few key points:

1. Xserve is beaucoup bucks
2. Xserve requires a license from Apple for users
3. Xserve is using the same large block device code from the FreeBSD tree (*read: beta)
4. Doesn't integrate well with Windows networks without a great deal of work, especially Active Directory


Just a few items

Unfortunately, Windows 2003 Server Standard, being the most compatible with the drivers and the rest of the network, became my choice. The overhead of the OS doesn't impact the system that greatly, and we have noticed virtually no lag when accessing the machine.

It's currently on 100BASE-T and will be moved to the gigabit connection when I make a new Cat6 patch cable.

The second 2 terabyte server should go live tonight or tomorrow.

Have I mentioned that these fuckers are LOUD?

I mean so loud you can hear them from the other side of the house. It's like having 30 hairdryers going right next to you. Seriously going to have to do something about that.

Captboom
 
Captboom said:
1.) Consolidate all of the spread-out storage into 2 high-capacity devices
Saves on power
2.) Increase our data redundancy so we didn't risk losing anything
A lot of the other storage systems were running software RAID with multiple PCI IDE
controllers ECCH

Most of the big-iron systems are clusters of multi-TB arrays. They've broken the ground here; you can learn from their experiences.

3.) Cause you all to have wood for 3 weeks looking at it

ATA definitely doesn't do it for me.

After much investigation and a 3-hour support call with a 3ware support rep: no current x86 operating system, with the exception of the Linux 2.6.8 kernel with Large Block Device support enabled (*read: beta), will support a single block device in excess of 2 terabytes.

Solaris 64 supports multi-TB devices & filesystems, as do a few other proprietary Unices. But since you're doing this on x86, you're going to have major issues until either Linux gets its act together or Sun pumps out these features on their x86-64 version of Solaris.
 
Captboom said:
We had some goals with this project:

1.) Consolidate all of the spread-out storage into 2 high-capacity devices
Saves on power
2.) Increase our data redundancy so we didn't risk losing anything
A lot of the other storage systems were running software RAID with multiple PCI IDE
controllers ECCH
3.) Cause you all to have wood for 3 weeks looking at it


#3... check. Taken care of long ago.... haha

Snugglebear: edited out inappropriate image. Congrats, I think this is the first time that's happened in this forum!


SelRahc: I apologise to snuggle and everyone else about that image. I had seen it around the forum before and assumed it was OK to post. I was actually logging back in to remove it, but you beat me to it.

Anyway, that was poor judgement on my part. It won't happen again.

m( _ _ )m my apologies.
 
I've seen that image/animation posted around here before...doesn't happen often but I see it enough .... :confused:
 
It's not supposed to be seen, period, as it's inappropriate content for the forum. Not every post gets read, so these things do slip by occasionally.
 
Can you use Dynamic Volumes in Server 2003 to span two 1.5 TB volumes into one 3 TB volume?

worth a try..
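
If you want to test it from the command line, it would be something like this in diskpart (from memory, so double-check the syntax; the disk numbers are assumed, and I honestly do not know whether 2003 will let the spanned volume grow past 2 TB):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume simple disk=1
DISKPART> extend disk=2

Then format the resulting volume NTFS in Disk Management.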

==>Lazn
 
How old are you, what do you do for a living, and how much did all this cost?
 
Another thought would be to mount one of the volumes into an NTFS folder if you can't span them.

Or (and this would be REAL tricky if you could do it, and data security would be a concern) use the hardware RAID to make two 1.5TB arrays, then use Windows to software-RAID 0 the two together... Like I said, REAL iffy sounding to me, but it's an option...
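
For the mount point route, it is just a folder mount in Disk Management, or roughly this in diskpart (the volume number and path are made up, adjust to taste):

DISKPART> select volume 2
DISKPART> assign mount=D:\Array2

And the iffy software-stripe idea would be "create volume stripe disk=1,2" across the two dynamic disks; test that one with data you can afford to lose.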


Asazman said:
how old are you, what do you do for a living, and how much did all this cost?

Reading the thread would help to illuminate you on those subjects. :rolleyes:
 
[H]Rabbit said:
Reading the thread would help to illuminate you on those subjects. :rolleyes:

I scanned the entire thread, you shitkicker, but I didn't see how much the entire thing cost, what his profession was, or how old he was. Asking a simple question doesn't require you to flame my ass, you stompfucking pigwhore.

Yes, I've had a bad day.
 
Asazman said:
I scanned the entire thread, you shitkicker, but I didn't see how much the entire thing cost, what his profession was, or how old he was. Asking a simple question doesn't require you to flame my ass, you stompfucking pigwhore.

Yes, I've had a bad day.
lol?
 
Asazman said:
I scanned the entire thread, you shitkicker, but I didn't see how much the entire thing cost, what his profession was, or how old he was. Asking a simple question doesn't require you to flame my ass, you stompfucking pigwhore.

Yes, I've had a bad day.


Haha, never heard that one before ("stompfucking pigwhore") :p
 
^ Indeed


Asazman said:
you shitkicker
you stompfucking pigwhore.

Yes, I've had a bad day.

And it just got worse.
That was totally uncalled for, and you know exactly how strictly the flaming rules are enforced.
Consider yourself warned & written up.
That post is filling up my email box with reports, and it wouldn't surprise me if an Admin is in here next :rolleyes:

In Data Storage no less
 
As I have been asked about the status, here it is so far.
Unimatrix01 is up and running with Windows 2003 Server, but to make it work we took out 2 drives. So we now have 10 250GB (232GB formatted) drives in RAID 5.
We are using Unimatrix02 as a test system to find something that will allow us to have storage greater than 2TB.
I would have had some pictures up the other night but the forum was down; however, I will put some up this weekend. After the storage part of this project is complete we will start working on the MOD side, although that is still a ways off.

And so everybody will stop fighting over us, here is what seems to not be known.

Captboom - UberSysAdmin for a private firm - 30
-{DM2K}- - IT Pro Superman- 30
Aftersh[]ck - IT Pro Spaceman - 25

Estimated Cost:
24x 250GB SATA Drives = $4152
2x Rackmount 3U Cases = $2000
2x CPU, Mem, MB = $1200
2x 3ware 9500S-12 SATA RAID card = $1654
Various Parts = $500
Total = $9506

Now, this is missing stuff, as I am at work and do not have the receipts in front of me.
But you get the idea so far. Remember, we are nowhere near done and this number will go up. And I know that number is higher, because I added it up once and it was around $12,000.

Now, no more fighting in this thread please, children, or we will take our toys and leave.
Bad day or good day, Monday or Friday, the only thing that matters is how [H]ard it is.
 
This isn't to fix your 2TB limit problem, but I want to know how you hooked up the drives. I've dealt with Compaq servers before, but it has been a while. On those, you put the drive in a drive holder to make it hot-swappable, then you put it in the drive cage and the backplane circuitry took care of the rest. You hooked up an add-in SCSI card to one point and it was all set. With SATA it is one cable per drive, and I also doubt you have 10 Molex connectors running all over the place. I know with SATA it is a little different, but I'm just seeing a cabling nightmare in what are usually pretty tight environments.

My main reason for asking: I'm wondering about the feasibility of buying an older rackmount server like a DL380 (PIII era), ripping out the SCSI drives/controller, and replacing them with SATA. I just don't know if that is possible. I figure your solution will give me some insight. Thanks.
 
Captboom said:
We had some goals with this project:

1.) Consolidate all of the spread-out storage into 2 high-capacity devices
Saves on power
2.) Increase our data redundancy so we didn't risk losing anything
A lot of the other storage systems were running software RAID with multiple PCI IDE
controllers ECCH
3.) Cause you all to have wood for 3 weeks looking at it



As it stands now, we had to back off from 3 terabytes down to 2 terabyte file volumes.
This leaves me with extra hard drives, which will probably sit on the shelf as hot spares.

After much investigation and a 3-hour support call with a 3ware support rep: no current operating system, with the exception of the Linux 2.6.8 kernel with Large Block Device support enabled (*read: beta), will support a single block device in excess of 2 terabytes.

Linux 2.6.8 with Large Block Device support and the XFS file system can do it, but as I said, it's still beta code.

No it isn't.

XFS isn't beta.
LBD isn't beta.
2.6.8 isn't beta.

Use Linux.
 
As doh says, use Linux. If you need any help configuring it, let someone in the Alternative OS forum know, or PM me. ;) I want to play with (vicariously) this machine.
 
This is a pretty sweet project. I had my little plans to construct a terabyte server of my own, but they will be on the back burner for a while.

Can't have enough anime/sci-fi ;)
 
Sorry for the lack of updates. Been really busy at work and crashing when I get home, and the GF stole my digital camera for "a few pictures". Few = 133 in 2 hours. Then she took the memory card out of it and gave me back the camera without telling me she kept the CF card for 5 days. ;-P

Anyway, working on a few pics and info right now.
 
To all unassimilated species.....
Ah hell, here are the pics.

Making a mess with the servers.

IMG_0646b.jpg


No guts no glory...


IMG_0701b.jpg


IMG_0702b.jpg


IMG_0703b.jpg


IMG_0704b.jpg



Temporary Home...

IMG_0700b.jpg



3ware RAID BIOS Screen

IMG_0707b.jpg


End Transmission

Last item - a very short video of the server. The image is not important; listen to it.
Just a little too loud for the upstairs.


CLICK HERE TO DOWNLOAD


I'll give an update in words in a few hours. For now I must Regenerate in my alcove.
 
-{DM2K}- said:
Linux can IF you use a new BETA file system. (If you do not actually want to keep your data, you are welcome to try this.) The new Xserves from Apple actually use this beta file system.

The Xserve (and Xserve RAID) uses HFS+J, which is not beta. The only thing remotely beta about the FS an Xserve could use would be StorNext in Xsan, but that's only beta because Xsan is beta; the FS has been out for quite some time. (That, and comparing a SAN FS to a local FS is highly, highly unfair to the local FS.)

So, which FS are you talking about?
 
Captboom said:
2. Xserve requires a license from Apple for users

No it doesn't. If you're in an all-Windows house it doesn't do user checking. Only Mac connections count, and only 10 are allowed in the low-end license of Mac OS X Server. I saw one Mac, so you're OK there.

Captboom said:
3. Xserve is using the same large block device code from the FreeBSD tree (*read: beta)

Mac OS X can use up to 16TB volumes. I have personally used a 3.5TB RAID with it.

Mac OS X Server 10.3: Tested and theoretical maximums
(Mac OS X Server uses the same core system as Mac OS X, so it works for both.)

Captboom said:
4. Doesn't integrate well with Windows networks without a great deal of work, especially Active Directory

Not my experience. About three clicks and it's up. But it does lack ACLs, which they're putting into Tiger.
 
Seems a few people have ideas on how to make a RAID 5 array larger than 2 TB work. I just wish we could all get together at one time, figure it out, and kick Unimatrix02 into the world of the living with a Linux core.

So I ask: using a flavor of Linux, how do you create a 3 TB RAID 5 array? If I understand all the talk I have seen about Linux (I am not a Linux person), part of the problem is solving the hardware drivers first, along with choosing which flavor of Linux to use.
Then you can piece together the rest.
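
From what I have gathered in this thread (and I will happily be corrected, since this is Captboom's department, not mine), the rough recipe seems to be:

1. Get a distro running a 2.6.8 kernel built with Large Block Device support (CONFIG_LBD) turned on.
2. Get the 3ware 9500S driver loaded (I believe the module is called 3w-9xxx, and worst case it comes from 3ware's site rather than the stock kernel) so the RAID 5 unit the card builds shows up as one big disk.
3. Put a GPT label on that disk instead of a normal MBR partition table, since MBR stops at 2 TB.
4. Format it XFS and mount it.

The RAID 5 itself is still done on the card; Linux just has to be able to see past 2 TB.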


Other than having lots of storage and eventually doing some cooling and rack modding, part of the project is to create something unique. If all the [H]ardOCP folks work together to help us in the areas where we may be lacking, then I may possibly have a place to store my 2 TB of anime.
 