Best OS for 2TB+ Server.

jtg1993

Right now I have a Windows server with a 500GB HDD, but I'll be upgrading it to about 2GB of RAM, a Corsair 450W PSU, a new case, and 5x 500GB HDDs.
I'm not sure what OS to use. I was going to use Solaris 10 and run the 5x 500GB HDDs in RAID-Z, but Solaris is a b!tch to get working.
I'd like to stay away from FreeNAS and Openfiler because of how limited they are, and I want to stay away from Windows software RAID (well, maybe not WHS, but that's my last option). I need speed and reliability with RAID 5 or RAID-Z. I'm able to use Linux, but only to a limited extent; I've used it for about 4.5 months total, and that was just for basic computer usage, nothing server oriented. So basically I need RAID, speed, and reliability. Please recommend a good OS for me.

Thanks
JTG
 
What about the newer versions of Windows Server? I use Red Hat Enterprise Linux 5 at work and it works great, and so does Windows Server 2008. I don't know how hard this server will get hammered or what software you want to run on it, but if it won't be hammered that badly, I don't think it matters much.
 
From what I've read and experienced, Ubuntu is a great OS to run as a server. (I've used Ubuntu, though not the Server Edition, and it still works great.) I also have limited Linux knowledge, but I can manage. Ubuntu was a breeze to set up and configure; the only issues I had were with replacing failed drives in a software RAID 5 array. I was testing that in a virtual machine, though, so it may be easier than it seemed.

There are a lot of tutorials for setting Ubuntu up as a server: check this post

That's just my 2 cents though; there are other people here who know a helluva lot more about server OSes.
 
I don't think it would have enough guts to run Server 2008; it's an old Sempron 1.8GHz on Socket 939. I tried to run the Server 2008 beta in a VM on my laptop and it used all but 12MB of the 512MB of RAM I gave it. The server will be used for Apache, MySQL, Samba, and, if I can manage it, DVD ripping (.ISO). It won't be hammered too hard, but it will do a lot of file transfers to PCs and Xboxes.
 
So I don't really think it's going to matter... I have a large array on Windows 2003 without an issue.
 
I use Ubuntu and CentOS for running servers with 2+TB in software RAID 5 using mdadm and LVM.
I'd put in my vote for software RAID 5 on whichever Linux flavor you're comfortable with.
CentOS is popular since it's based on RHEL, and Ubuntu has lots of documentation. Plus, you can always live boot and test things out.
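
Roughly, the mdadm + LVM combo looks like this. Just a sketch, assuming four data drives show up as /dev/sdb through /dev/sde; the volume group and mount point names are placeholders:
Code:
# create a 4-disk software RAID 5 array (whole disks here; use partitions if you prefer)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# put LVM on top so the space can be carved up and grown later
sudo pvcreate /dev/md0
sudo vgcreate storage_vg /dev/md0
sudo lvcreate -l 100%FREE -n storage_lv storage_vg

# filesystem and mount point (ext3 here, purely as an example)
sudo mkfs.ext3 /dev/storage_vg/storage_lv
sudo mkdir -p /storage
sudo mount /dev/storage_vg/storage_lv /storage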
 
FreeBSD 7.
RTFM. There is copious documentation, and it is rarely incorrect or out of date unlike "other" OSes. (I'm looking at you, Sun, Red Hat, etc.)
 
I use Ubuntu and CentOS for running servers with 2+TB in software RAID 5 using mdadm and LVM.
I'd put in my vote for software RAID 5 on whichever Linux flavor you're comfortable with.
CentOS is popular since it's based on RHEL, and Ubuntu has lots of documentation. Plus, you can always live boot and test things out.

What kind of speeds do you get? Yeah, I will probably test out a few things: Windows XP Pro SP2 with RAID 5 drivers, CentOS, Ubuntu, and Server 2008. I'll test all of them with 4 drives, not 5, because my current 500GB has data on it. Once I decide on the OS, I'll grow the array to 5 drives after my old data is copied over.
 
With six 500GB SATA WD RE2s in a RAID 5 via mdadm:
$ hdparm -Tt /dev/md0

/dev/md0:
Timing cached reads: 1314 MB in 2.00 seconds = 657.05 MB/sec
Timing buffered disk reads: 452 MB in 3.01 seconds = 150.33 MB/sec
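
If you want a rough number for sequential writes too, a quick dd test is the usual companion to hdparm. Just a sketch, assuming the array is mounted at /mnt/array (path is only an example) and there's a couple of GB free:
Code:
# write a 2GB test file; conv=fdatasync makes dd flush to disk before reporting the speed
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=2048 conv=fdatasync
rm /mnt/array/ddtest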
 
With six 500GB SATA WD RE2s in a RAID 5 via mdadm:
$ hdparm -Tt /dev/md0

/dev/md0:
Timing cached reads: 1314 MB in 2.00 seconds = 657.05 MB/sec
Timing buffered disk reads: 452 MB in 3.01 seconds = 150.33 MB/sec

Those are nice speeds for software RAID. Is it possible to expand the array by adding a drive?
 
I've been recommended Debian, so I'll use that. But that means I have to buy an extra drive, copy my files over, and then, once I know the array is set up, either sell the old drive or keep it as a spare.
 
Those are nice speeds for software RAID. Is it possible to expand the array by adding a drive?

Yes indeedy. The most annoying thing is that it all has to be done via the command line, but on the flip side, it can all be done with the partition still mounted.
First, I format the drive in GParted to ext3, then set the raid flag.
Then (IIRC) I set the reserved space to 0% using tune2fs (normally that space is reserved for root, which is only useful if the array is the system drive, and it practically never is):
Code:
sudo tune2fs -m 0 /dev/sdf1
Then we add the drive to the array pool:
Code:
sudo mdadm --add /dev/md0 /dev/sdf1
Next we grow the array:
Code:
sudo mdadm --grow /dev/md0 --raid-devices=6
You can check on the progress with the watch command:
Code:
watch cat /proc/mdstat
Finally, we expand the partition to make use of the new space:
Code:
sudo resize2fs /dev/md0

The slowest part is growing the RAID array; that took probably 8 hours or so. The rest was done within half an hour.
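
Side note: if a reshape is crawling, the kernel's md resync speed limits can be raised while it runs. The numbers below are only examples (values are in KB/s), so tune to taste:
Code:
# check the current limits
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise them for the duration of the reshape
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min
echo 200000 | sudo tee /proc/sys/dev/raid/speed_limit_max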

Note: it's probably not needed, but it's good practice so that mdadm doesn't forget about the array (note that sudo doesn't carry across a shell redirect, so use tee to write the file):
Code:
echo "DEVICE partitions" | sudo tee /etc/mdadm/mdadm.conf
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

Any other questions, I'm happy to help.
 
Yes indeedy. The most annoying thing is that it all has to be done via the command line, but on the flip side, it can all be done with the partition still mounted. ...

Thanks so much. I just have to buy the case ($30), the PSU ($60), and 4 drives at $87 each. Would the process be the same for Debian? I'll probably use Debian because it was recommended to me and is great on servers, plus no GUI (it will be headless but have SSH set up).
 
Yes indeedy. The most annoying thing is that it all has to be done via the command line, but on the flip side, it can all be done with the partition still mounted. ...

I am printing this post. I know I will end up needing this if I go the Ubuntu route vs FreeNAS, lol. Great info!



jtg1993, just out of curiosity, what case are you going to be using?
 
Oh, I found out how to make the array.
Code:
 sudo mdadm -C /storage --level=raid5 --raid-devices=4 /dev/sd[abcd]1
Is that the right command?
 
I am printing this post. I know I will end up needing this if I go the Ubuntu route vs FreeNAS, lol. Great info!



jtg1993, just out of curiosity, what case are you going to be using?


Make sure you print the instructions on how to recover the array after a failed disk... much more important!
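
For reference, the failed-disk dance with mdadm usually looks something like this. A rough sketch only, assuming /dev/sdc1 is the dead member and /dev/sdf1 is its replacement (device names are examples), so check mdadm --detail before running anything:
Code:
# confirm which member failed
sudo mdadm --detail /dev/md0

# mark the bad member as failed (if the kernel hasn't already) and pull it from the array
sudo mdadm /dev/md0 --fail /dev/sdc1
sudo mdadm /dev/md0 --remove /dev/sdc1

# partition the replacement drive the same way, then add it; the rebuild starts automatically
sudo mdadm /dev/md0 --add /dev/sdf1

# watch the rebuild
watch cat /proc/mdstat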
 
I was wondering, does mdadm let you set the RAID device to /storage or /raid instead of /dev/mdX?
 
Oh, I found out how to make the array.
Code:
 sudo mdadm -C /storage --level=raid5 --raid-devices=4 /dev/sd[abcd]1
Is that the right command?

It looks correct, although I would use the standard /dev/md0 rather than /storage, since /storage is probably where you'd want to mount the partition afterwards.
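
In other words, something along these lines (same four partitions from your command; adjust the letters to match your actual drives):
Code:
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1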

Then you would format it with your desired filesystem (as mentioned earlier, I use ext3):
Code:
sudo mkfs.ext3 /dev/md0

Next, you use the tune2fs command mentioned previously:
Code:
sudo tune2fs -m 0 /dev/md0
Note: the reserved space exists so that the root user can still log in and the system can still write its logs if the boot drive hits 100% full; that's not a concern here, since a different drive is used for booting.

This then gets added to your /etc/fstab file so that it mounts at startup; you may want to look online to see which mount options suit you best:
Code:
/dev/md0  /storage  ext3  suid,dev,exec  0  0
You can also mount the partition by then typing mount /dev/md0 so you don't have to restart.
 
I use Ubuntu Server for my needs, and they're almost identical to yours. It's based on Debian, so the commands and the available packages are almost identical. Both distros will run fine on your current hardware, and mdadm is an excellent tool for software RAID. (In my experience, Ubuntu is a bit lighter out of the box, usually ships a newer kernel, and boots a bit quicker too.)

As for "/storage" for the data: once you make /dev/md0 with mdadm, you'll mount /dev/md0 at /storage in the filesystem (in your /etc/fstab), and then via Samba you'll share /storage so it's accessible to your Windows systems. You can give it any share name you wish... be sure to check out SWAT; it makes configuring Samba a breeze, and it's web based!
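
If you end up doing it by hand instead of through SWAT, the share definition is only a few lines. A minimal sketch, assuming /storage is already mounted and using "storage" as an example share name:
Code:
# append a basic share to smb.conf (back it up first), then restart Samba
sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'

[storage]
   path = /storage
   browseable = yes
   read only = no
   guest ok = no
EOF
sudo /etc/init.d/samba restart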

Also, Ubuntu offers a LAMP install option on its Server CDs... Linux, Apache, MySQL, and PHP... "You can have a server up in 15 minutes," or so they say. ;)
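
If you skip that option at install time, the same stack can usually be pulled in afterwards. A sketch, assuming tasksel and its lamp-server task are available on your release; otherwise just apt-get the individual packages:
Code:
sudo tasksel install lamp-server
# or piece by piece:
sudo apt-get install apache2 mysql-server php5 libapache2-mod-php5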

Good luck!

We're here to help, and www.ubuntuforums.org too!
 
Code:
/dev/md0  /storage  ext3  suid,dev,exec  0  0
You can also mount the partition by then typing mount /dev/md0 so you don't have to restart.

You wouldn't have to restart anyway; after adding to fstab, just do
Code:
sudo mount -a
 
FreeBSD 7.
RTFM. There is copious documentation, and it is rarely incorrect or out of date unlike "other" OSes. (I'm looking at you, Sun, Red Hat, etc.)

*swoon*. I loves me some FreeBSD!

I had a P3 running a test of Samba that would respond faster over the network than my machine would on local drives.

Unfortunately, last I looked there was no FreeBSD love for VMWare. :mad:
 
Damn, I have a problem. I want to go with Debian now, but I would have to format my 500GB HDD to ext3 (260GB used), and I need somewhere to copy the data to, but all I have are little 8GB and 10GB HDDs (from Xboxes).

EDIT: By "copy it to" I mean I need a place to back it up to, but I only have little HDDs. I think I'll ask my neighbor tomorrow if I can borrow one of his drives (he buys drives, backs his files up to them, then puts them safely away for archiving).
 
You might be able to pull ~10GB off, resize the NTFS partition down to 250GB, format the freed space to ext3, copy from the old partition to the new one, delete the old partition, grow the ext3 partition, and then push the ~10GB back on.
 
Hmm... what you might want to look into is having a small "system" drive and using your 500s for storage. 10-20GB is plenty for holding the OS.

In my case, I have a 160GB drive that holds the OS and swap (not that swap is needed, since I have 4GB of RAM), and then my 4x 500s in RAID 5 give me ~1.4TB. The 160 is more than enough; works well for me!
 
I am going to use a drive just for the OS, a WD 80GB IDE. The problem is that my 500GB is NTFS and Debian uses ext3, so I have to format the 500GB to ext3, but my files are on it and have to be backed up first, and my spare drives are small (8-10GB from Xboxes). So yeah, I'll be using the single 500GB for now until I save up some funds for the other four. Then I'll just have to make the array, copy the files over, and expand the array; bam, 2TB in RAID 5 for $500.
 
I chose FreeBSD for my server that has 3TB of storage and am very happy with the results.
 
Well, I'm going with Ubuntu Server 6.06 now. I tried Debian and it's not working out well; it takes forever to boot and I can't get anything installed.

EDIT: I'm going to use Debian now. I took out my OS drive and found that I had left the jumper set to master when it should be single, so it's fast now. I tried a fresh install of Ubuntu Server 6.06 (not LAMP) and it used under 80MB of RAM, which is insane. I was thinking of running Ubuntu Server 6.06 LAMP, but I went with Debian.

EDIT AGAIN: Well, Debian wasn't working too great, so now it's Ubuntu Server 6.06 LAMP, thanks to ring.of.steel for helping me get it set up. I just need the drives now.
 
*swoon*. I loves me some FreeBSD!

I had a P3 running a test of Samba that would respond faster over the network than my machine would on local drives.

Unfortunately, last I looked there was no FreeBSD love for VMWare. :mad:

Use "Other" OS type. I'm running something like 40 FreeBSD VMs on ESX. Getting the VMWare tools in was a bit of a pain, but not impossible.
 
May I suggest upgrading to a more recent version of Ubuntu? If you're running RAID 5, 7.04 or higher supports hot swap and online RAID 5 expansion. It's a fairly simple process if you use apt; the one catch is that unless you do a full reinstall, upgrades are only supported from the immediately preceding release, i.e. you can't go straight from 6.06 to 7.10, you have to go 6.06 > 6.10 > 7.04 > 7.10. A bit of a hassle, but it's generally a pretty quick process if you have decent bandwidth.

You may not need to bother with it right now, but if you want to expand your RAID later down the line, you'll need to.
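
Each hop looks roughly the same. A sketch of the first one (6.06 "dapper" to 6.10 "edgy"), assuming you're comfortable editing sources.list and have backups, since release upgrades can always go sideways:
Code:
# back up the current sources list, then point it at the next release
sudo cp /etc/apt/sources.list /etc/apt/sources.list.dapper
sudo sed -i 's/dapper/edgy/g' /etc/apt/sources.list

# pull in the new package lists and upgrade everything, then reboot
sudo apt-get update
sudo apt-get dist-upgrade
sudo reboot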
 
May I suggest upgrading to a more recent version of Ubuntu? If you're running RAID 5, 7.04 or higher supports hot swap and online RAID 5 expansion. ...

I will want to upgrade it later on, so I'll upgrade now since it's basically doing nothing until I get the drives.
 
Just FYI, several distros let you set up RAID arrays during install, which is great because you can install and boot the OS from RAID 1 and also have a separate RAID 5 data array. If you want a GUI for that, I know you can do it with the Ubuntu alternate install CD. I don't think Ubuntu Server has a point-and-click interface. :)
 
Just FYI, several distros let you set up RAID arrays during install, which is great because you can install and boot the OS from RAID 1 and also have a separate RAID 5 data array. If you want a GUI for that, I know you can do it with the Ubuntu alternate install CD. I don't think Ubuntu Server has a point-and-click interface. :)

Sorry for the late reply. Ubuntu Server is all command line, no point-and-click, but I do have Webmin installed, which has about everything I need. I would run RAID 1 for the OS, but I don't have enough drives. And I couldn't set up the RAID 5 during the install because I still need to buy the drives (4x 500GBs).

The Hunter: I'm currently doing the upgrade from 6.06 to 7.10. It's taking a while; I have slow internet right now because my phone line is screwy.

Well, that's all for now.

EDIT: I have a problem now. I did
Code:
apt-get update
apt-get dist-upgrade
and it finished, but phpSysInfo shows the same distro version, and when I run
Code:
apt-get dist-upgrade
again I get
Code:
@SERVER:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

So what do I have to do to get it to 6.10, then 7.04, then 7.10?
 