Project: Escalade 2.5

deaddawg

I will start it off with some previous projects, as I have never posted anything here:

Escalade 1.0





Specs:
HP Workstation x4000
1 x 2.0GHz Xeon with HT
2 x 512 MB RDRAM
Cheap PCI-based Gigabit NIC
Rocketraid 1640

Storage:
OS - 1 x 36GB Atlas 10k U160 SCSI
Storage - 4 x 300GB Maxtor Raid 5

Total usable storage after RAID overhead = ~870GB

Escalade 2.0






Specs:
Coolermaster Stacker STC-T01
Seasonic 500w PS
Generic 400w PS
Asus A8N-SLI Deluxe
AMD 3700+
2 x 512MB Geil DDR Ram PC-3200
16x DVD+-RW
8 Onboard SATA Ports
Rocketraid 1640

5 x 300GB
4 x 500GB

4 x 500GB in Raid5 via Rocketraid = Movies

5 x 300GB is split up using Windows software raid

5 x 244GB in Raid 5 = Everything
3 x 35GB in Raid 0 = Scratch Disk
2 x 35 GB in Raid 1 = OS

Total usable storage after RAID overhead = ~2515GB

Goals of Escalade 2.5

-Move to Unix operating system from Windows 2003
-Move to completely software RAID
-Have support to grow the RAID disk by disk (see the sketch right after this list)
-Increase storage (current array is full)
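
Just to sketch how the disk-by-disk growth should work with mdadm (the device names below are placeholders, not my final layout):

Code:
# add a new drive to the array as a spare, then reshape onto it
mdadm --add /dev/md0 /dev/sdX1
mdadm --grow /dev/md0 --raid-devices=5   # e.g. going from 4 drives to 5
# once the reshape finishes, grow whatever filesystem sits on top
# (xfs_growfs for XFS, resize2fs for ext3, etc.)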

What has been chosen

-Will use Linux, distro not decided yet, with mdadm and LVM for RAID (filesystem still undecided; rough sketch after this list)
-Supermicro AOC-SAT2-MV8 Controller
-4 x 500GB WD SE16's
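
Roughly how I expect mdadm and LVM to stack up, just as a sketch for now (the array, volume group and LV names are made up, nothing is finalized):

Code:
# /dev/md0 = the mdadm array built from the new 500GB drives
pvcreate /dev/md0                           # turn the array into an LVM physical volume
vgcreate storage /dev/md0                   # volume group on top of it
lvcreate -l 100%FREE -n bigarray storage    # one big logical volume
# filesystem goes onto /dev/storage/bigarray once I pick one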

Main challenges of this build

The hardest part of this build is going to be moving all my data from the current NTFS-formatted software and hardware RAID arrays to the new Linux array. With only 4 new 500GB drives on the way, it will be a tight fit getting all the data onto the new array. I may have to order 3 or 4 more once I get into it.
 
So some new parts have arrived:

4 x 500GB WD5000AAKS
1 x Coolermaster STB-3T4
4 x SATA Power Connection



The Supermicro AOC-SAT2-MV8 is currently on back-order and should be in shortly.

I have decided to go with RAID 5 instead of RAID 6, and will go with the XFS filesystem. I may add a hot spare to the array with the next set of drives I add.
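
For reference, creating the array should look roughly like this once the controller arrives (the /dev/sdb..sde names are just placeholders for the four new drives):

Code:
# 4 x 500GB in RAID 5 -> ~1.5TB usable
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.xfs /dev/md0            # XFS straight onto the array (or onto an LVM volume on top of it)
mount /dev/md0 /BigArray
# a hot spare could be added later with:  mdadm --add /dev/md0 /dev/sdf1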
 
So I have installed the new parts...wiring took a while!

Getting the drive bays in went smoothly
2.jpg


Little blurry...
3.jpg


8 x 500GB and 4 x 300GB
4.jpg


Power is in. You can see the DVD-RW and 5th 300GB drive up at the top
5.jpg


All the SATA cables ... no Raid controller yet ...
6.jpg


Front ... still a little dusty from before
7.jpg



I am in the process of installing Gentoo 2007.0 Linux onto the lone 300GB drive.
 
Code:
escalade ~ # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              37G  2.2G   33G   7% /
udev                   10M  200K  9.9M   2% /dev
/dev/md/0             1.4T  1.4G  1.4T   1% /BigArray
shm                   501M     0  501M   0% /dev/shm
//192.168.0.2/movies  1.4T  1.4T  8.1G 100% /Test

Copying 1.4TB across the network is going to take a while! I am getting about 170 Mbit/s across the gigabit link.
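
Rough math on that, assuming ~1.4TB of data and a steady 170 Mbit/s:

Code:
# 1.4TB ~= 1,400,000 MB; x8 = 11,200,000 Mbit; / 170 Mbit/s ~= 65,900 s ~= 18 hours
echo $(( 1400000 * 8 / 170 / 3600 ))    # -> 18 (hours)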

*update* I mounted the network share using CIFS instead of SMB and I am now getting 27% usage = 270 Mbit/s. The CPU of the server that holds my old array is pinned at 100% (2.0GHz Xeon), so it must be a CPU bottleneck reading from the old array now. Now I know to use CIFS and not SMB in the future!
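
For anyone curious, the only thing that changed was the filesystem type on the mount (the share path is the real one from the df above; the username is just a placeholder):

Code:
mount -t smbfs //192.168.0.2/movies /Test -o username=guest   # old smbfs driver - ~170 Mbit/s for me
mount -t cifs  //192.168.0.2/movies /Test -o username=guest   # cifs driver - ~270 Mbit/s for me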
 
I name all my computers after cars.

HTPC = G35 ... because the case is sexy.
FileServer = Escalade ... because it is massive.
Macbook Pro = RS8 ... because it is sleek and fast.
VoIP Server = Bronco ... because it's a beast that won't die. (old P3)

and the list goes on...
 
I was thinking I'd see some pimped-out Cadillac here :) Oh well, storage is more my thing anyway.


I see you learned a lot from the Galaxy projects :D You should look at 5-in-3's; they will allow you to hold a lot of drives without losing your optical drives.


BTW, in picture 3 (of your original build), are those Post-it notes?! LOL
 
Haha yes, those are Post-it notes. I used them to separate the drives; it was temporary lol. You may also see the Styrofoam above them and the piece of speaker wire tied to the case in the back keeping them from moving around.

As for the 5-in-3's, I am going to do a custom solution later on. Now that I am moving to Gentoo, I no longer need an optical drive in the system, so a max of 16 drives will be enough for this current setup.

Of course, following along with the Galaxy projects has had some influence on my build ;-) I like to stay a bit behind the curve (you're the curve lol)

Once I hit 16 500GB drives in this system, I will look into using the workshop at my university to build a custom aluminum case that holds around 50 drives. I will likely go with a board similar to the one in your new Galaxy 5, with 3+ PCIe slots, and use three 16-port adapters. But that is in the future.
 