Some hard Fermi performance #s

I'll put my HD5870* up for sale when Kyle has done a full review of Fermi - all we have here are two cherry picked benchmarks that HardwareCanucks was given by nVidia. Hopefully Fermi is a beast, since we like competition.

* Hee hee - I don't have a 5870, who the hell am I kidding!
 
All "benchmarks" so far have come directly from NV demo machines. I'm anxious to see the gameplay experience GF100 can deliver.
 
They were #s from an nVidia-provided system. Probably why H didn't bother posting them, since production models will likely be different.

Page 13 of HWCanucks said:
Performance Preview

Now that we have extensively gone through the architectural aspects and feature set of the GF100, it’s time to see what happens when the rubber hits the road. Up to this point, benchmarks involving NVIDIA’s upcoming cards were impossible to come by which is understandable since any leaks would have opened the competition’s eyes to what was slowly but steadily approaching them. While the following benchmarks show only a small cross-section of what the GF100 is capable of, it should be mentioned that all of them were run in front of our faces in real-time and are done on hardware that is still in its beta stage. These numbers will improve as NVIDIA dials in clock speeds and the beta drivers mature to a point where they are ready for release.
 
They were #s from an nVidia-provided system. Probably why H didn't bother posting them, since production models will likely be different.

It's fairly solid regardless. Plus, since it's pre-release hardware, it's likely not running faster than the production cards will (given that the architecture only scales to 512 cores, they can't have pulled a BS move like adding extras ;) and that's assuming it was even the 512-core and not the 448-core model), and...

http://www.xtremesystems.org/forums/showpost.php?p=4203666&postcount=1273
Yes, but it was the standard Ranch Small test in the Far Cry 2 benchmark. In addition, the program itself wasn't running in the background in any way. They booted the benchmark, changed the settings in front of our eyes and then ran it. Simple as that, and like anyone else would do. The only difference is that none of us were holding the mouse.

Don't know about you, but I'd rather have more info and be able to judge its merit than just not be told it's around.

Jeez dude, relax. Those are more advertisements than actual reviews. What a huge letdown.

The HardwareCanucks article had explanations of everything as well as extra info/explanations/speculation thrown in. Most of the sites just tossed up the slide deck plus a paragraph and called it a day.


All "benchmarks" so far have come directly from NV demo machines. I'm anxious to see the gameplay experience GF100 can deliver.

Definitely looking forward to the finalized info myself... and while the "benchmarks" came from NV demo machines, from the way they were run it sounds like (I wasn't there; you or Kyle were :p, so correct me if I'm wrong) there wasn't a huge amount of room for "faking" or "bending" the truth there.
 
Yeah I got the HWCanuck link from reading the thread on XS as well. I'm a member there too :p

*Hah, I see Randomizer brought up the same question I had.
 
I don't care as long as the prices of the 5xxx's drop when it releases.
 
According to the article, they gave them general info about the hardware setup. Seems like a good estimate to me.

Also according to the reviewer who wrote the article...

http://www.xtremesystems.org/forums/showpost.php?p=4203681&postcount=1280
SKYMTL said:
As I mentioned in the article, NVIDIA sent me their EXACT system specs when I tried to call them on using a hand picked system. I went out and bought an i7 960 and replicated their system. My GTX 285 results were within a mere few percentage points of theirs.

Trust me, nothing was messed with the benchmark on my system and judging from my results, nothing was "optimized" on theirs either.....

So, I think the setup is pretty safe to call "OK"... drivers are another issue entirely and no one has the answer to that but nVidia... still excellent info to know about though. It would be fairly tricky, I'd think, to mess with the info that's been provided when the results appear replicable, unless they paid off another fairly long-running site :), which I simply don't think is very plausible. Yes, I'm aware it's pre-release. Yes, I'm aware it's not official, released info, and not in the style some people prefer of graphed-out, second-by-second data, but it's the most solid we've seen thus far.


Hehe, well if you are gonna do tri-monitor and only wanna use one card, you will definitely be keeping the 5870! ;) Especially if the GF100 is $500 at launch or more. Basically, for the price of one Fermi you get a 5850 + 2 more monitors. That's a better deal for me. :p

I run 2560x1600; while more monitors would be nice for a wider FOV, I am not interested in 3x30" myself at this time :).
 
According to the article, they gave them general info about the hardware setup. Seems like a good estimate to me.

While they may have given the hardware specs, we still know nothing of how the benchmarks were performed. We don't know what, if any, kind of tweaks were done to the software, be it the game itself or drivers. As history has taught us, neither GPU vendor is above doing software tweaks to make their hardware seem that much better. But, we'll know in the next few months, right after Kyle and Brent have their way with the card.
 
While they may have given the hardware specs, we still know nothing of how the benchmarks were performed. We don't know what, if any, kind of tweaks were done to the software, be it the game itself or drivers. As history has taught us, neither GPU vendor is above doing software tweaks to make their hardware seem that much better. But, we'll know in the next few months, right after Kyle and Brent have their way with the card.

True, hence my comment that the GTX 285 results at least were able to be replicated. However, history has definitely shown us that quote-unquote "optimizations" (aka cheats) do get used... and right or wrong on the performance info, it's not like anyone's buying one tonight, so we'll see in a couple months for sure :D.
 
While they may have given the hardware specs, we still know nothing of how the benchmarks were performed. We don't know what, if any, kind of tweaks were done to the software, be it the game itself or drivers. As history has taught us, neither GPU vendor is above doing software tweaks to make their hardware seem that much better. But, we'll know in the next few months, right after Kyle and Brent have their way with the card.

But they were able to duplicate the GTX 285 to within a few percentage points, so it's not exactly all that optimized. Yes, you're right, it's not definitive, but as a general idea it does the job.
 
But they were able to duplicate the GTX 285 to within a few percentage points, so it's not exactly all that optimized. Yes, you're right, it's not definitive, but as a general idea it does the job.

My big question is whether they were using the same drivers for both cards. They may have used a released driver for the GTX 285, but it's very likely that they used a beta in-house driver for the GF100. And since in-house drivers aren't released to the public, there's a possibility that they compromised quality for the sake of speed in order to talk up their card. Of course this is all speculation, but since they're a company trying to sell a product, it seems very plausible. That being said, I do think Fermi will be a good performer. Probably 10-20% higher than a 5870 in current DX9/10 games, and more in DX11 games that make good use of tessellation.
 
This was saying that the high end part may NOT be ready when this is first launched. Gosh I hope that's not true. Well I guess that's better than no card at all but I do hope the high end card is available.

But as the article said, if these benchmarks were done using the lower-end card then wow, GF100 or whatever is just going to rock. AMD might be in for more of a challenge than anyone thought. Time will tell.
 
I don't see the end result beating a 5970, and by the time this comes out we should expect to see a 5990. Yeah, it's cool, but these tech demos are always worthless, it doesn't matter what company is pumping them out. Remember the Ruby demo?

The ray tracing is great for us, but the truth is that we're not going to see it in games anytime soon. Not only are the early adopters usually screwed, but you're going to need a DX11 OS, as well as a big chip on an infantile process.

The only thing that really matters for computer graphics is what Microsoft says. AMD and NVIDIA can only do what MS specifies in their API better than each other; they can't do anything new. Look at how far ATI's tessellation got.

EDIT: Here's how it's going to go. GF100 is close to the 5970, so we see a price war on the high end. Sadly, this is hardly going to trickle down to the 5870 price range, because there is probably not going to be a challenger from NVIDIA there, as there won't be enough GF100s around thanks to the size of the chip and TSMC's problems with yields. I'd like to think that the consumer is finally going to win after the ramming AMD has been giving consumers with the 5000 series, but the truth is that NVIDIA isn't going to make things better for most of us who spend $200-300 on a card.
 
Meh, I'll wait for the card in the flesh before I make my decision.
Posted via [H] Mobile Device
 
Looks like "nFinity" is only possible with SLI. Anandtech's preview: http://www.anandtech.com/video/showdoc.aspx?i=3721&p=7

"
Regardless of to what degree this is a sudden reaction from NVIDIA over Eyefinity, ultimately this is something that was added late in to the design process. Unlike AMD who designed the Evergreen family around it from the start, NVIDA did not, and as a result they did not give a GF100 the ability to drive more than 2 displays at once. The shipping GF100 cards will have the traditional 2 monitor limit, meaning that gamers will need 2 GF100 cards in SLI to drive 3+ monitors, with the second card needed to provide the 3rd and 4th display outputs. We expect that the next NVIDIA design will include the ability to drive 3+ monitors from a single GPU, as for the moment this limitation precludes any ability to do Surround for cheap.
"
 
Looks good. Now the only question is whether the 850M to 1B extra transistors give performance gains in line with the price difference we will see.

The larger, hotter chip I'm sure will cost more, not only for the chip itself but for the PCB and cooling.

Will be interesting to see how this all goes down.


Especially if the end-of-Q1 launch for the lower-clocked card is true. That will give AMD more time to come out with their refresh.
 
I seriously doubt they would make a GPU that causes your house to burn down every time you plug it in.

Honestly, I think that would make a good selling point.

What enthusiast wouldn't want to be able to claim ownership of a card so powerful you can't even use it safely.

ePeen +10 right there.
 
Honestly, I think that would make a good selling point.

What enthusiast wouldn't want to be able to claim ownership of a card so powerful you can't even use it safely.

ePeen +10 right there.

Well played, sirrah.
 
I don't doubt GF100 will be faster than the 5870; the question is price. One website speculated the GF100 will be $520.

AMD, on the other hand, can probably slash the 5870's price by an enormous amount and still be profitable on it. $300 for a 5870, say, versus $500 for a GF100, and AMD will still have a great market niche, in fact probably the far better-selling one.

Unless Nvidia can somehow screw up that plan, but I don't see how. Sell GF100 for $400? Seems unlikely, and a $300 5870 would still sell anyway... A 3/4 Fermi with a smaller die? Given how much trouble Nvidia has had getting Fermi out, who knows when such a beast would release, or even if they're planning it, and performance would likely be no greater than the 5870 anyway.

And there's the matter of the 5870 refresh AMD undoubtedly has planned (5890), which will probably be ~10% faster than the 5870, throwing another weapon/curveball into the price/performance scenarios I've laid out.

The bottom line is from what I see AMD is still in a great position.
 
And there's the matter of the 5870 refresh AMD undoubtedly has planned (5890), which will probably be ~10% faster than the 5870, throwing another weapon/curveball into the price/performance scenarios I've laid out.

The bottom line is from what I see AMD is still in a great position.

Bah, 10%? That thing had better be more than 10% faster. Hell, they could achieve 20% just by upping the stock 850MHz clock to 1020MHz, which some are already achieving on 5870s with better cooling and voltage. AMD could easily attain this level of performance with cherry-picked chips (which would possibly be much easier than it is now with manufacturing improvements over the next few months). They could also make mild architectural improvements that would push performance beyond 20%, perhaps matching (in general terms) Fermi. This will drive prices down even further for all top-end components. Awesome time for us gamers! :D
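Quick sanity check on that clock claim, using only the two clock speeds quoted above (the linear-scaling part is my assumption; real games rarely scale perfectly with core clock since memory bandwidth stays fixed):

```latex
% Core clock uplift implied by the 850 MHz -> 1020 MHz overclock mentioned above
\[
\frac{1020\ \text{MHz}}{850\ \text{MHz}} = 1.20
\quad\Longrightarrow\quad
\text{a 20\% core clock increase}
\]
% Only under the (optimistic) assumption of linear scaling does that
% translate into roughly 20% more frames per second.
```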
 
I hope that the thread title can be changed, since the numbers are nowhere near being [H]ard.
 
Even if it is better, it's going to be priced around $600. Get ready to spend $200 extra for a few more fps.

For 20% more FPS overall (assuming the article is remotely correct), why not? There are a lot of people who will pay extra to get even less of a gain. There is nothing wrong with that.
 
Right now, my Core i7-GTX 275 setup is struggling with just a few games at 1920x1080 - I'm not completely happy, at max settings, with Crysis or Crysis Warhead, or with Stalker Clear Sky, or, sometimes, with GTA 4 (which seems to be more processor intensive anyway). There are a couple of other titles I'm not too pleased with, but since I've forgotten which ones they were they couldn't have been too important.

I suspect that Nvidia's next offering, whatever it will be, will be a nice upgrade from my GTX 275, which I bought on launch day for $300. If I end up getting 18 months out of my GTX 275 then I'll be pleased.

The really big problem right now is that there just aren't enough PC titles coming out that are really going to test this new hardware. Right now, a GTX 275 is more than enough - and that's based on my own personal experiences as a hardcore gamer. I bought every damned game last year, and my 275 tamed almost everything.

I remember when the 8800 was launched - there were quite a few titles which really needed that card, and there were quite a few titles that were released shortly afterwards that really needed that card. Today, that just is not so.
 
Except the GF100 isn't competing with the 5870; it is competing with the 5970.

And if you look at their review of the 5970, it was hitting 114 fps at 1920x1200 with 4xAA, but at Very High instead of Ultra High, and the step up to Ultra High was only a ~13 fps hit for the 5870. So Ultra High would need to be a 30 fps hit for the 5970 just for the GF100 to break even, and a 20 fps hit at 2560x1600 w/ 4xAA.
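Spelling out the arithmetic behind that break-even claim, using only the figures in this post (the implied GF100 number is simply what falls out of those figures, not a confirmed benchmark result):

```latex
% 5970 at Very High, 1920x1200, 4xAA: 114 fps (from their 5970 review)
% Ultra High penalty the 5970 would need to suffer for the GF100 to merely tie it: 30 fps
\[
114\ \text{fps} - 30\ \text{fps} = 84\ \text{fps}
\]
% i.e. the GF100 preview figure being compared against works out to about 84 fps.
% The 5870 only lost ~13 fps stepping from Very High to Ultra High, so a 30 fps
% drop for the dual-GPU 5970 would be an unusually steep penalty.
```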

Something tells me that the GF100 is going to be a massive letdown.
 
Whether they like to admit it or not, ATI basically took DX11 features and popped them onto a more powerful version of their HD 4000 series and called it a day.
That alone (from that article at HardwareCanucks) doesn't inspire me much. He thinks he's a big hotshot writing a sizable article about the GF100 technology, but really, if he could take a few minutes and go to the ATI technology previews of the HD5870, it's the same blah blah all over again. To say ATI didn't build their HD5870 from the ground up to be DX11 is like saying: "I went to the R&D, saw what they were doing, and NO, they just took the HD4000 series design and slapped a DX11 sticker on the side. With a few nuts and bolts placed in different places..." Whatever, this Michael "SKYMTL" Hoenig won't get my respect.
 