Any deals on dual socket servers right now?

sandmanx

[H]F Junkie
Joined
Mar 22, 2001
Messages
9,901
I need a server for work for storage and for sending out streaming video via Windows Media Server. I was planning on grabbing something similar to our Dell PowerEdge 2800, which we picked up a few months ago with two 3.0GHz procs, two 73GB SCSI drives, and 1GB of RAM for less than $2000. The same setup now costs nearly $5000!

Does anyone know of deals going on right now to pick up a dual processor machine cheaply? I'd prefer Dell, just so I don't have to deal with multiple manufacturers, but I wouldn't mind getting an Opteron machine either.
 
You can easily build a dual Opteron server for less than $2000. You'd be close to the limit, but that includes either a terabyte RAID 0 array or a 500GB RAID 1 SATA2 array at that price.
 
fox-amd said:

It's decent, and I might go for that if I can't find anything else. Thanks for the link.

I should also mention I want SCSI RAID 1 or 5 for the setup; I want to stay away from IDE. I'd rather not build it myself (if you have ever dealt with servers, you know why), and I'd really like dual PSUs, which the Gateway does have.
 
Save some cash and build a single-socket, dual-core Opteron setup, perhaps?
 
A dual socket server with SCSI RAID 5 is going to be very hard to put together for less than $2k if you want decent parts.
 
sandmanx said:
I'd rather not build it myself (if you have ever dealt with servers, you know why), and I'd really like dual PSUs, which the Gateway does have.

Can you explain your logic here? I deal with servers all day long, and in some cases they can be much easier to build. Look at my signature. I have essentially what you're asking for -- I just haven't bought my SATA2 drives yet.

You can skip the dual-core processors and get Opty 248's and put that money toward your drives -- to keep it under 2k.

As defakto said, SCSI is not cheap -- and no longer the best storage solution.
 
nickcarr said:
Can you explain your logic here? I deal with servers all day long, and in some cases they can be much easier to build. Look at my signature. I have essentially what you're asking for -- I just haven't bought my SATA2 drives yet.

If you build a server yourself, you become support central for that server. If you go on vacation and the server craps out, you're going to get reamed a new one if nobody else knows how to fix it. Anybody can call Dell, HP, Gateway, or whoever and get a server from them fixed.

We buy only HP servers (almost always refurbished) and haven't paid over $3k fully loaded in a while. I don't remember the last time we had a motherboard fail on one of our over 100 HP servers. We have a drive failure here and there, but we RAID 0 all internal drives.

Brian Elfert
 
nickcarr said:
As defakto said, SCSI is not cheap -- and no longer the best storage solution.

I know several people who run streaming services.
That statement is total bullshit.
One just threw out over three terabytes of SATA because it couldn't even handle 25 concurrent users at I think it was 360Kbit. And that was on a 3Ware. It couldn't even remotely keep up, no matter how it was configured. The other threw out even more because it couldn't remotely keep up with the load on just static pages. Switching to an FC-AL attached nStor immediately tripled capacity.

SCSI is and always will be superior to IDE and its derivatives.
 
AreEss said:
I know several people who run streaming services.
That statement is total bullshit.
One just threw out over three terabytes of SATA because it couldn't even handle 25 concurrent users at I think it was 360Kbit. And that was on a 3Ware. It couldn't even remotely keep up, no matter how it was configured. The other threw out even more because it couldn't remotely keep up with the load on just static pages. Switching to an FC-AL attached nStor immediately tripled capacity.

SCSI is and always will be superior to IDE and its derivatives.

Correct. Synthetic benchmarks don't tell the whole story. All of our serious servers and storage are SCSI based. We use a few terabytes of IDE storage for archive purposes only.
 
AreEss said:
One just threw out over three terabytes of SATA because it couldn't even handle 25 concurrent users at I think it was 360Kbit.
Where are they? Dumpster diving FTW :D

And good god, that's only 4.5 MB/s total reads. What application/OS were they using? I can do that much (granted, a smaller # of reading threads) with DMA turned off...

 
How does SCSI compare to SATA 3.0, though? I know SATA 3.0 destroys SCSI in price (if I remember correctly, a 500GB SCSI drive was something over $1000... I could be wrong, feel free to correct me).
 
unhappy_mage said:
Where are they? Dumpster diving FTW :D

Heh. Not telling. ;)

And good god, that's only 4.5 MB/s total reads. What application/OS were they using? I can do that much (granted, a smaller # of reading threads) with DMA turned off...

No, it's not, because you're not doing sequential reads. You're bouncing between multiple disks and bouncing around on a single disk, which IDE can't do worth shit period. Trying to compare random seek on IDE versus SCSI is Yugo vs Ferrari.

RaphaelVinceti said:
How does SCSI compare to SATA 3.0, though? I know SATA 3.0 destroys SCSI in price (if I remember correctly, a 500GB SCSI drive was something over $1000... I could be wrong, feel free to correct me).

You get what you pay for.
SATAII is no better than IDE at random seek. I can probably go get 6TB of ESDI storage for $20 these days. Doesn't make it worth shit.
 
AreEss said:
No, it's not, because you're not doing sequential reads. You're bouncing between multiple disks and bouncing around on a single disk, which IDE can't do worth shit period. Trying to compare random seek on IDE versus SCSI is Yugo vs Ferrari.
I ran a little test:
Code:
#define _FILE_OFFSET_BITS 64
#define _LARGEFILE64_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

const long long FILESIZE = 200049647616LL; /* total disk size in bytes -- change to match your drive */
const long long READSIZE = 1048576LL;      /* 1MB per read */
const int NUMTHREADS = 50;                 /* number of reader processes */

void doreads(int me)
 {
  long long totaltime = 0;
  int numtests = 0;
  double averagetime, speed;
  int fh = open("/dev/sda", O_RDONLY);     /* raw device -- change to match your disk */
  char *buf = malloc(READSIZE);
  struct timeval t1, t2;
  struct timezone tz;
  gettimeofday(&t1, &tz);
  srand(t1.tv_usec + me);                  /* seed each process differently */
  while (1 == 1)
   {
    /* seek to a random spot on the disk, then read one chunk */
    lseek64(fh, (off_t)((double)rand() / RAND_MAX * (FILESIZE - READSIZE)), SEEK_SET);
    read(fh, buf, READSIZE);
    numtests += 1;
    if (numtests % 100 == 0)
     {
      /* every 100 chunks, report the average speed over that batch */
      gettimeofday(&t2, &tz);
      totaltime = (t2.tv_sec - t1.tv_sec) * 1000000LL + (t2.tv_usec - t1.tv_usec);
      averagetime = ((double)totaltime / numtests);
      speed = (1000000 / averagetime * READSIZE) / 1024;
      printf("%ld %d This process's read speed: %f kb/second (%lld usec for 100 chunks)\n", (long)t2.tv_sec, me, speed, totaltime);
      numtests = 0;
      totaltime = 0;
      gettimeofday(&t1, &tz);
     }
   }
  free(buf);
 }

int main()
 {
  pid_t pids[NUMTHREADS];
  int j;
  for (j = 0; j < NUMTHREADS; j++)
   {
    if (0 == (pids[j] = fork())) // We are the child...
     {
      doreads(j);
     }
   }
  wait(&j); // the children loop forever; this just keeps the parent alive
  return 0;
 }
What this does is start 50 processes, each of which opens my hard drive for reading and then grabs 1MB chunks from random spots. Every 100 chunks a given process retrieves, it reports back how long that batch took and the corresponding KB/s. I thought many of the threads would get lost in the shuffle, but as far as I can tell all of them are executing - I added code to print which thread was reporting back, and all 50 were reporting in. Each thread is getting read speeds of about 500 KB/s, for a total aggregate of around 25 MB/s - watching "vmstat 1" confirms this; around that many blocks are read from disk every second. True, if I change the read size to 1k everything goes to shit, but that's no surprise - the reads are still spread evenly across the disk, so the ratio of seek time to transfer time goes way up.

So it's kind of surprising that my single disk can keep up with 50 "streams", but a multi-disk array of newer disks can't keep up with 25. Any guesses as to why this happens?

 
To complement unhappy_mage's post, I decided I'd test out his code on one of my own boxes... my old laptop, in particular, to see what really old hardware would do under those circumstances. I simply copied the source and compiled it as-is using gcc 3.3.6, running on a ThinkPad 600E: 440BX chipset, mobile PII 400, 288MB of RAM, 20GB Toshiba HD at ATA33.

I don't quite understand this program's output, since the bandwidth numbers seem impossibly high... but maybe I'm interpreting it wrong. Anyway, all of the lines of output were incredibly close and consistent (with varying thread numbers, of course), so I'll just post one sample line. Maybe unhappy_mage can interpret exactly what it's saying:


Code:
1137313376 48 This process's read speed: 393846153.846154 kb/s (260 usec for 100 chunks)

Not sure if that's in kilobits or kilobytes, but either way that's a really, really big number. Anyway, that's what I got.

EDIT: I also ran it on my Ventrilo server, out of curiosity. Pentium 1 233MHz, 64MB RAM, an ancient ASUS VIA-based SS7 mobo, 7GB Quantum HD. It reported:

Code:
1137316013 12 This process's read speed: 344781144.781145 kb/s (297 usec for 100 chunks)

I'd also run it on my main box, but I don't feel like figuring out how to make it compile on Windows under Visual Studio.
 
You need to change the constant const long long FILESIZE = 200049647616LL; on line 10 to the actual size of your disk, or it'll seek around as if your disk were 200GB. Run "fdisk -l" to get this. You'll probably also need to change line 19, 'int fh = open("/dev/sda", O_RDONLY);', to point at the correct disk (hda, whatever). And finally, it probably needs to be run as root. There should be no risk in this; all it does is reads.
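
If you'd rather not hard-code the size, you can ask the kernel for it directly; something along these lines should do it on Linux (a rough sketch, untested, and /dev/sda is just an example):

Code:
/* Rough sketch (untested): ask the kernel for the disk size via the
   BLKGETSIZE64 ioctl instead of hard-coding FILESIZE. Needs root,
   same as the benchmark itself. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(int argc, char **argv)
 {
  unsigned long long bytes = 0;
  const char *dev = (argc > 1) ? argv[1] : "/dev/sda";
  int fd = open(dev, O_RDONLY);
  if (fd < 0 || ioctl(fd, BLKGETSIZE64, &bytes) != 0)
   {
    perror(dev);
    return 1;
   }
  printf("%s: %llu bytes\n", dev, bytes); /* paste this number into FILESIZE */
  close(fd);
  return 0;
 }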

HTH

 
Yeah, I was looking at the code later, after I posted, and I noticed those differences... I changed them accordingly, but it didn't change the output. Running it as a regular user still produces ridiculous results (and there appears to be no actual HD activity), and running it as root produces no output on the console (though there is disk activity).

Oh well, I guess it's not really that important... I was just curious to see what kind of results my old IDE-based systems would get.
 
Eva_Unit_0 said:
running it as root produces no output on the console (though there is disk activity).
It takes me about 3 minutes to get results back - each thread reads 100 1MB chunks before any output is produced. That means that 5GB will be read before the last thread prints its results. Patience is a virtue ;)

Edit: and a normal user wouldn't have permission to open the disk, so it'd mean that you're trying to read all the chunks from a closed filehandle, which returns an error quickly. Thus the ridiculous speeds.
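
If you'd rather have it fail loudly in that case instead of printing garbage, the open/read calls in doreads() just need their return values checked -- roughly like this (untested sketch, drop-in for the existing lines):

Code:
/* Untested sketch: check the calls in doreads() so a permission problem
   shows up as an error message instead of an absurd "read speed". */
int fh = open("/dev/sda", O_RDONLY);
if (fh < 0)
 {
  perror("open");   /* usually "Permission denied" when not run as root */
  _exit(1);
 }
/* ...and inside the loop: */
if (read(fh, buf, READSIZE) <= 0)
 {
  perror("read");
  _exit(1);
 }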

 
Ah, okay, I apparently didn't wait long enough... it took a pretty long time on the laptop. It's reporting ~200 KB/s on the 440BX / ATA33 HD combo.
 
And each of the 50 threads is doing that, so the aggregate read speed for that machine would be 10 MB/s. Anyone have a SCSI disk or Raptor to test with?

 
Yeah, I just ran it on my 74GB Raptor. The numbers varied a bit more than the other one, with a range of 500-550 KB/s.
 
Hmm, I would have expected it to be higher than that, given that my mediocre Seagate 7200.7 200GB gets:
speed-graph.png

Paste the results into a file, and PM them to me, and I'll update the graph with your results. I have a few other disks to test, and of course I'm interested in hearing AreEss's take on this.

 
Tried one of my 18.6GB 15K RPM U160 drives - 488-367 KB/sec. Also tried one of my 73.6GB 15K RPM U320 drives - 253-234 KB/sec. Just did it under Knoppix, so the drivers for the controllers probably aren't set up right. PMed you the results, mage.

Oldbenwa
 
belfert said:
If you build a server yourself, you become support central for that server. If you go on vacation and the server craps out, you're going to get reamed a new one if nobody else knows how to fix it. Anybody can call Dell, HP, Gateway, or whoever and get a server from them fixed.


We buy only HP servers (almost always refurbished) and haven't paid over $3k fully loaded in a while. I don't remember the last time we had a motherboard fail on one of our over 100 HP servers. We have a drive failure here and there, but we RAID 0 all internal drives.

Brian Elfert
I agree with about 70% of what you're saying here... but RAID 0 for all internal drives? I hope that you meant to say RAID 1+0...

HP servers are great, but the system boards do go bad from time to time. A couple of years ago I worked on a project where we had to replace the system board in 200 DL580 servers because of a design defect.

By the way guys... IDE and SCSI both suck. Real system architects design servers with SAS and Fibre Channel... :)

Sorry I had to throw that in, this is a pretty entertaining discussion to read.

IDE/ATA/SATA drives are really for different purposes than SCSI. Except for a few situations, new SATA drives will perform better than SCSI in a single-user system (SAS is a different story, though). SCSI has more overhead within the protocol, but handles concurrent users better than SATA/ATA. In single-user systems you really don't start seeing a significant performance increase with most SCSI drives until you have a bunch of spindles in a RAID set with a nice caching controller card. Multi-user situations are a completely different story.

The latest versions of SATA are much better at handling concurrent users than past ATA/IDE drives, but nowhere near as good as SAS and FC.
 
Xenon: Aren't Fibre Channel drives basically SCSI drives with a different protocol and cable interface, to allow for longer cable lengths and more addresses? Why does that make them better than SCSI drives? How does that make them perform better?

I know SAS will perform better mainly because it's newer technology, so that's kind of a moot point. Comparing an SAS drive to a SCSI drive is like comparing the performance of cars made in different years.
 
If anyone wants to compile this for Windows, I'll run it on my PowerEdge 6400 at home.
 
Really, I was asking in order to separate the facts from the cruft, so someone didn't get the wrong idea about the drives' performance.
 
Oldbenwa said:
Tried one of my 18.6GB 15K RPM U160 drives - 488-367 KB/sec. Also tried one of my 73.6GB 15K RPM U320 drives - 253-234 KB/sec. Just did it under Knoppix, so the drivers for the controllers probably aren't set up right. PMed you the results, mage.

Oldbenwa
Thanks for the results. I'm generating the graph now. Fake edit:
speed-graph.png

*May be cached from before, try refresh...

What controller is it? Much to my dismay, I don't have much experience with new SCSI stuff, but maybe I can dig up something.

 
Numbers there aren't right and you can't trust SCSI results on Linux. All the drivers are horrifically broken at best, and there is no proper DMA, so you're limited severely by the kernel. (Why yes, I do develop device drivers. How did you know?)

I'm still trying to port it to FreeBSD so we can have some valid results. Linux numbers should be thrown out as defect-limited, especially on MegaRAID-family, where the driver is horribly broken.

Definitely does not work as-is on FreeBSD, though. No lseek64, so yeah, you have to make significant changes. A quick runthrough with an old exerciser I wrote, on a Netfinity 4500R, ServeRAID 3L, 2x Seagate 10K 18GB RAID 1, FreeBSD 5.4-REL, was pretty good at showing how blatantly off the Linux numbers are. These are U160 disks.

Netfinity4500R rack0,unit2,pdu0-2
root@dev3 # /opt/bin/dn-excersize -random_only
(C) 2004 Dragon North - All Rights Reserved
-random_only set
Locating disks...
FOUND: /dev/acd0
FOUND: /dev/ipsd0
FOUND: /dev/ipsd0s1a
FOUND: /dev/ipsd0s1b
FOUND: /dev/ipsd0s1c
FOUND: /dev/ipsd0s1d
FOUND: /dev/ipsd0s1e
FOUND: /dev/ipsd0s1f
Select Disk: 5 Selected /dev/ipsd0s1c
Not a UFS partition
Not a UFS2 partition!
Entering RAW
Reading 2048x512 RANDOM
..................................................
Result: OK 69MB per second
Reading 2048x1024 RANDOM
..................................................
Result: OK 55MB per second
Reading 2048x2048 RANDOM
..................................................
Result: OK 33MB per second
RUN MULTI? [Y,N]Y
Thread OK: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
20 Threads OK
Entering RAW
Reading 20 x 2048 x 512 RANDOM
..................................................
Result: OK 3921KB per second
Reading 20 x 2048 x 1024 RANDOM
..................................................
Result: OK 1295KB per second
Reading 20 x 2048 x 2048 RANDOM
ERROR alloc() FAIL
Aiee! Abandon ship, abandon ship!

Netfinity4500R rack0,unit2,pdu0-2
root@dev3 #

Yeah, ran out of memory; forgot the system was soft-limited to 128MB at the time to get a good dump. No, I won't be publishing the code. It's not all that magical; it gets a sector count, uses /dev/random to pick a random start point, reads from that start point into memory for the prescribed length, then frees after the read's complete. It's pretty crappy code.
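
For the curious, the general shape of that kind of loop is easy enough to sketch. This is just a generic illustration of the approach described above (untested, not the actual tool; it expects you to pass the device, its size in bytes, and a read length on the command line, and it pulls random start points from /dev/urandom so it doesn't block on Linux):

Code:
/* Generic sketch (untested): pick random start points, read a fixed-length
   chunk from each, free the buffer when done. Just an illustration of the
   approach -- not the dn-excersize tool. */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
 {
  long long size, len;
  int fd, rfd, i;
  char *buf;
  if (argc != 4)
   {
    fprintf(stderr, "usage: %s <device> <size_in_bytes> <read_length>\n", argv[0]);
    return 1;
   }
  size = atoll(argv[2]);
  len = atoll(argv[3]);
  if (len <= 0 || size <= len)
   {
    fprintf(stderr, "bad size/length\n");
    return 1;
   }
  fd = open(argv[1], O_RDONLY);
  rfd = open("/dev/urandom", O_RDONLY);   /* source of random start points */
  buf = malloc(len);
  if (fd < 0 || rfd < 0 || buf == NULL)
   {
    perror("setup");
    return 1;
   }
  for (i = 0; i < 2048; i++)
   {
    unsigned long long r = 0;
    read(rfd, &r, sizeof(r));                        /* grab a random number */
    pread(fd, buf, len, (off_t)(r % (size - len)));  /* read len bytes from that point */
   }
  free(buf);   /* free after the reads complete */
  close(fd);
  close(rfd);
  return 0;
 }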
 
RAID 1 changes things, though - I'd imagine that would approximately double the results for this particular test, since you can do reads from both disks at once.

In ~5 hours, I'll have the BSD CDs to play with and I can compare and contrast. It'll be interesting, I'm sure. Haven't played with BSD much before.

My original question stands: What application/OS were they using (the streaming media people)? Even the slowest drive I've got results for did 50 "users" at 2 Mbit/s...

 
unhappy_mage said:
RAID 1 changes things, though - I'd imagine that would approximately double the results for this particular test, since you can do reads from both disks at once.

In theory, it would. In actuality, no. That's not the way the Clarinet Lite works. Also, this is far closer to the setup they were running than a single disk.

In ~5 hours, I'll have the BSD CDs to play with and I can compare and contrast. It'll be interesting, I'm sure. Haven't played with BSD much before.

FreeBSD has actual DMA APIs and SCSI drivers that aren't utter shit. Watch the numbers fly upwards. amr(4) can easily break 140MB/s sequential on single-disk with a 320-2X.

My original question stands: What application/OS were they using (the streaming media people)? Even the slowest drive I've got results for did 50 "users" at 2 Mbit/s...

I don't know, haven't had time to ask. I have not had time to dedicate to this at ALL the past few days (see my post in rate-my-cables soon) and don't expect to for at LEAST a week. :(
 
unhappy_mage said:
What controller is it? Much to my dismay, I don't have much experience with new scsi stuff, but maybe I can dig up something.

The U160 drive is on an Adaptec 29160LP, and the U320 drive is on an LSI Logic/Dell LSI21320.

Edit: You have the drives swapped in your graph. The U160 drive tested faster. Oh yeah, I ran it on Knoppix's ramdisk too, holy crap.

Oldbenwa
 
The graph's fixed now, thanks for pointing that out.

Yeah, ramdisks look *hella* fast:
Code:
1137643353 1 This process's read speed: 115473.441109 kb/second (866 usec for 100 chunks)
1137643353 1 This process's read speed: 106723.585912 kb/second (937 usec for 100 chunks)
1137643353 1 This process's read speed: 124378.109453 kb/second (804 usec for 100 chunks)
1137643353 1 This process's read speed: 123304.562269 kb/second (811 usec for 100 chunks)
1137643353 1 This process's read speed: 121506.682868 kb/second (823 usec for 100 chunks)
but notice it's all the same process (the 1 in the second field) doing the reading. That means that one process is the whole story, not 50 processes each doing that at the same time. Still fairly impressive, I guess, on a system where memory bandwidth doesn't suck (this one gets ~400 MB/s in memtest!).

Down to 2 and 4 hours left on the CD images (stupid DSL).

 
If you can get me a source that will compile under FreeBSD, I can run your code on that. I've got a RAID 1 with 2x 10K 18GB SCSI drives on one channel, and a RAID 1 with 2x 40GB 7200 RPM IDE drives on separate channels.
 
defakto said:
I know SAS will perform better mainly because it's newer technology, so that's kind of a moot point. Comparing an SAS drive to a SCSI drive is like comparing the performance of cars made in different years.


Actually, it's more like comparing the same car, except the SAS one has maybe a 1% narrower tire, as long as the drives are from the same family. My personal results are that SAS is a little slower, mostly due to immature drivers. SAS and SCSI are basically identical, just like SATA and IDE: the only real difference is a change in the physical interface, plus a few added commands and features, but it's basically the same.
 
Not wanting to threadcrap, but how does the read/write speed of a RAID array pertain to deals on dual-socket servers?
 
drizzt81 said:
Not wanting to threadcrap, but how does the read/write speed of a RAID array pertain to deals on dual-socket servers?

Because we're talking about servers... and servers typically run applications that rely heavily on hard disk performance (MySQL, PHP, HTTP, FTP, etc.). And it isn't necessarily about RAID... the discussion is just about IDE (and its derivatives) vs. SCSI (and its derivatives), since the original question was about building a low-cost duallie server and whether a SCSI disk system was possible (or practical) in the allotted budget.
 
drizzt81 said:
Not wanting to threadcrap, but how does the read/write speed of a RAID array pertain to deals on dual-socket servers?

I am the OP, and I was asking for any deals on a dual socket machine; I needed a SCSI RAID system as well, for a server at work. I eventually found a $1700-off coupon from Dell and went with that, with a SCSI RAID 1 setup and two 3.2GHz Xeons.

Since then, the thread has turned into a SATA/IDE vs. SCSI performance battle. That's fine with me, but for servers that get banged on by 50 users, I personally only trust SCSI drives. The couple of times I used an IDE drive for our users (in a pinch), it ended up with horrid performance, and one time the drive started to die after 3 days of constant use.
 