T-break review shows 6800GT beating X800 XT

Shemazar said:
Something just doesn't seem right with the T-Break review in regards to the X800XT benches.

The T-Break review is irrelevant now that the SM3.0 patch is out

fc5.png


fc6.png


etc
 
tranCendenZ said:
The old non-sm3.0 benchmarks are irrelevant now

Are they? Maybe you didn't read the full article at anandtech...

Final Words
Both of our custom benchmarks show ATI cards leading without anisotropic filtering and antialiasing enabled, with NVIDIA taking over when the options are enabled. We didn't see much improvement from the new SM3.0 path in our benchmarks either. Of course, it just so happened that we chose a level that didn't really benefit from the new features the first time we recorded a demo. And, with the mangoriver benchmark, we were looking for a level to benchmark that didn't follow the style of benchmarks that NVIDIA provided us with in order to add perspective.

Even some of the benchmarks with which NVIDIA supplied us showed that the new rendering path in FarCry isn't a magic bullet that increases performance across the board through the entire game.

Image quality of both SM2.0 paths is on par with each other, and the SM3.0 path on NVIDIA hardware shows negligible differences. The very slight variations are most likely just small fluctuations between the mathematical output of a single-pass and a multipass lighting shader. The difference is honestly so tiny that you can't call either rendering lower quality from a visual standpoint. We will still try to learn from CryTek what exactly causes the differences we noticed.

The main point that the performance numbers make is not that SM3.0 has a speed advantage over SM2.0 (as even the opposite may be true), but that single pass per-pixel lighting models can significantly reduce the impact of adding an ever increasing number of lights to a scene.

It remains to be seen whether or not SM3.0 offers a significant reduction in complexity for developers attempting to implement this advanced functionality in their engines, as that will be where the battle surrounding SM3.0 will be won or lost.
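The single-pass point in that conclusion is easier to see with a sketch. Below is a rough, hypothetical C illustration (not Crytek's code, and with the lighting math reduced to a plain N·L diffuse term): in the multipass model every extra light costs another full pass, while the single-pass model only adds loop iterations inside one shader, and the mathematical result comes out the same either way.

```c
/* Hypothetical sketch (not Crytek code) of multipass vs. single-pass
 * per-pixel lighting. In a real renderer each "pass" of the multipass
 * model re-rasterizes the geometry, re-fetches textures, and blends
 * into the framebuffer, so every extra light costs a full pass; the
 * single-pass model only adds a few ALU instructions per light. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos; Vec3 color; } Light;

static Vec3 vadd(Vec3 a, Vec3 b) { Vec3 r = {a.x + b.x, a.y + b.y, a.z + b.z}; return r; }

/* simple N.L diffuse contribution of one light at one surface point */
static Vec3 shade_one_light(Light l, Vec3 p, Vec3 n)
{
    Vec3 d = { l.pos.x - p.x, l.pos.y - p.y, l.pos.z - p.z };
    float len = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
    float ndotl = (n.x * d.x + n.y * d.y + n.z * d.z) / (len > 0.0f ? len : 1.0f);
    if (ndotl < 0.0f) ndotl = 0.0f;
    Vec3 c = { l.color.x * ndotl, l.color.y * ndotl, l.color.z * ndotl };
    return c;
}

int main(void)
{
    Light lights[4] = {
        { {  1, 2,  0 }, { 1, 0, 0 } }, { { -1, 2,  0 }, { 0, 1, 0 } },
        { {  0, 2,  1 }, { 0, 0, 1 } }, { {  0, 2, -1 }, { 1, 1, 1 } },
    };
    Vec3 p = { 0, 0, 0 }, n = { 0, 1, 0 };

    /* "multipass": one framebuffer blend per light (each iteration
     * here stands in for an entire additional rendering pass)       */
    Vec3 framebuffer = { 0, 0, 0 };
    for (int i = 0; i < 4; i++)
        framebuffer = vadd(framebuffer, shade_one_light(lights[i], p, n));

    /* "single pass": the same loop runs inside one shader invocation,
     * so the per-pass setup and blend overhead is paid only once      */
    Vec3 single = { 0, 0, 0 };
    for (int i = 0; i < 4; i++)
        single = vadd(single, shade_one_light(lights[i], p, n));

    printf("multipass:   %.3f %.3f %.3f\n", framebuffer.x, framebuffer.y, framebuffer.z);
    printf("single pass: %.3f %.3f %.3f\n", single.x, single.y, single.z);
    return 0;
}
```

That also lines up with the review's note that the single-pass and multipass outputs differ only by tiny numerical fluctuations.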
 
Shemazar said:
Are they? Maybe you didn't read the full article at anandtech...

Yes, they are. The SM3.0 patch fixes rendering errors on the 6800 series and now speeds up the 6800 series in both 2.0 and, even more so, 3.0. You don't have to read his final words; look at the benchmarks yourself. SM3.0 can afford Nvidia up to 20-30% performance increases in some areas.

For instance, in the Volcano demo, the 6800U gets a 10fps gain through SM3.0, a ~15-20% increase.
 
My God! You don't say! Are you telling me that a plain-jane GT beat the XT-PE when the GT got to use what is essentially a proprietary path, and the XT-PE had no such luck?

Put some 3Dc in FarCry, play it, and look at the IQ/perf differences when you force a 6800 to run on SM2.0, and I'm sure you would see the opposite of what is currently happening.

This is not some godsend, and this is not a huge shock. What it does show is that both cards are immature (or, perhaps better said, the software implementation of their features is) and that we CANNOT determine which card is going to be the card to have until we see who implements what, and how the cards behave when they are used to their full potential.
 
nweibley said:
My God! You don't say! Are you telling me that a plain-jane GT beat the XT-PE when the GT got to use what is essentially a proprietary path, and the XT-PE had no such luck?

It was ATI's choice not to support SM3.0, don't blame Nvidia or Crytek for ATI's decision. Looks like they underestimated Nvidia :)

P.S. - the SM3.0 path won't be proprietary when next year's ATI and Nvidia cards support it, as it is the future Shader Model standard that will be in all games.
 
nweibley said:
My God! You don't say! Are you telling me that a plain-jane GT beat the XT-PE when the GT got to use what is essentially a proprietary path, and the XT-PE had no such luck?

SM3.0 is not proprietary.... It will be part of DX9.0C, and ATI will have it in their next generation of cards. They will also have to add 32-bit precision.

AND I will bet that when ATI adds 32-bit (instead of 24-bit) precision, their next gen of cards will lose a lot of performance.

Considering how much more work Nvidia is doing by using 32-bit precision, it's amazing they can keep up, much less beat a much faster-clocked GPU from ATI.
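For anyone wondering what those precision numbers actually mean: 24 vs. 32 bit here refers to the floating-point format the pixel shader ALUs calculate in, not the framebuffer color depth. Here is a quick sketch of my own (using the commonly cited mantissa widths: 10 bits for FP16, 16 for ATI's FP24, 23 for FP32) showing roughly how much relative precision each format gives:

```c
/* Rough, illustrative comparison of shader float formats (my numbers,
 * not from the review). Mantissa widths: FP16 = 10 bits (partial
 * precision), FP24 = 16 bits (R3xx/R4xx pixel shader ALU), FP32 = 23
 * bits (NV40 full precision). Relative precision ~= 2^-mantissa_bits. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    struct { const char *name; int mantissa_bits; } fmt[] = {
        { "FP16", 10 },
        { "FP24", 16 },
        { "FP32", 23 },
    };

    for (int i = 0; i < 3; i++) {
        double eps = ldexp(1.0, -fmt[i].mantissa_bits);   /* 2^-bits */
        printf("%s: relative step ~%.1e, ~%.1f decimal digits\n",
               fmt[i].name, eps, fmt[i].mantissa_bits * log10(2.0));
    }
    return 0;
}
```

So FP24 carries roughly 4.8 decimal digits per channel and FP32 about 6.9, which is why single screenshots rarely show a difference but long shader chains can accumulate error faster at lower precision.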
 
tranCendenZ said:
It was ATI's choice not to support SM3.0, don't blame Nvidia or Crytek for ATI's decision. Looks like they underestimated Nvidia :)

P.S. - the SM3.0 path won't be proprietary when next year's ATI and Nvidia cards support it, as it is the future Shader Model standard that will be in all games.
Indeed. But it is Crytek's oddball decision to alienate ATI by not disclosing ANY plans to implement 3Dc, which has been praised for its simplicity and ease of implementation. I find that odd, considering how large a segment of their audience runs ATI hardware. I can only speculate about why they chose not to implement 3Dc (at least not yet), but that is nothing but speculation.

And, at this point in time, SM3.0 is basically a proprietary path. Who else besides nVidia is using it yet?

And as far as underestimation of nVidia goes... I doubt it. Au contraire, I think it was nVidia who was shocked by ATI's ability to produce a card that would rival the best nVidia could offer. Seems to me that nVidia had to rush a bit to try and regain credibility with the 6800... and the design of the 6800 looks like it would have benefited from a little release delay. Let nVidia spend another month or two figuring out how to make it quieter, more power-efficient, and a little cooler. The 6800 is very formidable... it just looks rushed to me... seems like they could have "pwn3d" ATI if they had spent a little more time making the card.
 
I think the FX series was rushed, and the 6800 has really been thought out. I mean, if you think about it, the 6800 fixed everything the FX had problems with, and then some. That's just my opinion on it. :eek:
 
chrisf6969 said:
SM3.0 is not proprietary.... It will be part of DX9.0C, and ATI will have it in their next generation of cards. They will also have to add 32-bit precision.

AND I will bet that when ATI adds 32-bit (instead of 24-bit) precision, their next gen of cards will lose a lot of performance.

Considering how much more work Nvidia is doing by using 32-bit precision, it's amazing they can keep up, much less beat a much faster-clocked GPU from ATI.

Just wondering: have you been noticing the extra IQ that 32-bit is giving nVidia? I sure can't. In fact, I can't tell much of a difference between screenshots at all. Playing on the two might be a different story, which I have not had a chance to do, but I suspect there is little to no difference.

32-bit calculations may be advantageous in a few years, but nVidia has gotten a little ahead of themselves by implementing it in this generation. Although, it is nice to see them trying new things... that is the only way we will see revolutionary design in the GPU world.
 
Just a question: how much are normal maps used in Far Cry? Because if they aren't used very heavily, then 3Dc won't add anything.
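That's the right question to ask, since 3Dc is specifically a two-channel compression format for tangent-space normal maps: only X and Y are stored, and the shader rebuilds Z. A toy C sketch of that reconstruction step (illustrative only, not ATI's actual block-compression code):

```c
/* Toy illustration of the 3Dc idea (not ATI's block compressor):
 * a tangent-space unit normal always has Z >= 0, so the texture
 * only needs to store X and Y and the shader can reconstruct
 * Z = sqrt(1 - x^2 - y^2). If a game barely uses normal maps,
 * there is nothing for this to save. */
#include <math.h>
#include <stdio.h>

static void reconstruct_normal(float x, float y, float out[3])
{
    float zz = 1.0f - x * x - y * y;
    out[0] = x;
    out[1] = y;
    out[2] = zz > 0.0f ? sqrtf(zz) : 0.0f;   /* clamp against rounding error */
}

int main(void)
{
    float n[3];
    reconstruct_normal(0.30f, -0.45f, n);    /* the two stored channels */
    printf("reconstructed normal: %.3f %.3f %.3f\n", n[0], n[1], n[2]);
    return 0;
}
```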
 
Bad_Boy said:
I think the FX series was rushed, and the 6800 has really been thought out. I mean, if you think about it, the 6800 fixed everything the FX had problems with, and then some. That's just my opinion on it. :eek:
I do not disagree, but isn't the point of releasing each generation to improve upon the last? Everyone readily admits the FX line had shortcomings that nVidia needed to address, and they did. They also made a faster, better card than the FX. ATI, on the other hand, had something really good going with the 9800s: good IQ, good performance, etc. They made the decision that the 9800's architecture would be sufficient for their next generation of cards/games, and so they improved its performance too, and added some features to it, but did not redesign it.

Both approaches are working just fine... it seems to me that in the real world right now, ATI has a little edge on performance, and nVidia has an edge on upcoming technology, especially since they seem to be getting it implemented without roadblocks. Things may change in the future, we just don't know.
 
nweibley said:
And, at this point in time, the SM3.0 basically is a proprietary path. Who else besides nVidia is using it yet?
pro·pri·e·tar·y, adj.
1. Exclusively owned; private: a proprietary hospital.
2. Owned by a private individual or corporation under a trademark or patent: a proprietary drug.

Nvidia does not own SM3.0, and not only can it be used by anyone, IT WILL BE USED by ATI in their next generation. IT IS THE FUTURE SHADER MODEL THAT ALL VIDEO CARDS AND GAMES WILL BE USING WITHIN THE NEXT YEAR (approx.) or so.

Calling SM3.0 proprietary and useless is like calling DX8.0 proprietary and useless back when it first came out and DX7 was the standard. DX8.0 provided a huge leap forward... as should SM3.0 and DX9.0C (shoulda been called DX10).

SM3.0 is proven faster and more efficient. Real programming requires conditions and branching. Carmack (god) has been asking for two things: better programmability on GPUs (SM3.0) and FULL precision (32-bit). The 6800 is what he asked for; ATI won't have it until the next gen.

Basically, the future is now for Nvidia, whereas we still have to wait for ATI to bring it.

Granted, there is very little difference between 24 and 32 bit currently, but 32 bit will ultimately lead to much more realistic graphics.
 
chrisf6969 said:
pro·pri·e·tar·y, adj.
1. Exclusively owned; private: a proprietary hospital.
2. Owned by a private individual or corporation under a trademark or patent: a proprietary drug.

Nvidia does not own SM3.0, and not only can it be used by anyone, IT WILL BE USED by ATI in their next generation. IT IS THE FUTURE SHADER MODEL THAT ALL VIDEO CARDS AND GAMES WILL BE USING WITHIN THE NEXT YEAR (approx.) or so.

Calling SM3.0 proprietary and useless is like calling DX8.0 proprietary and useless back when it first came out and DX7 was the standard. DX8.0 provided a huge leap forward... as should SM3.0 and DX9.0C (shoulda been called DX10).

SM3.0 is proven faster and more efficient. Real programming requires conditions and branching. Carmack (god) has been asking for two things: better programmability on GPUs (SM3.0) and FULL precision (32-bit). The 6800 is what he asked for; ATI won't have it until the next gen.

Basically, the future is now for Nvidia, whereas we still have to wait for ATI to bring it.

Granted, there is very little difference between 24 and 32 bit currently, but 32 bit will ultimately lead to much more realistic graphics.
Ok, if you can find it in your heart, forgive me. nVidia is the EXCLUSIVE user of SM3.0 right now. Better?

There is very little difference between 24 and 32 bit besides performance. Why waste performance on something that seems so utterly useless at this point in time? Maybe the 6800s will last until we see the effects of 32-bit precision... but I doubt it.
 
Vagrant Zero said:
OH REALLY? ANANDTECH YOU SAY? The same Anandtech that shows a GT beating an XT-PE in SM2 [and SM3] at 1600x1200 4x8x?

2826.png

YEAH ANANDTECH. The same Anandtech that, in a PCI-E review, which is what I AM FARKING WELL TALKING ABOUT, produced this:

2540.png


SM3.0 is a whole other issue, and is NOT germane to the discussion of T-Break's unusual result in FarCry at 1600x1200 4x8x with SM2.0.
 
nweibley said:
Ok, if you can find it in your heart, forgive me. nVidia is the EXCLUSIVE user of SM3.0 right now. Better?

There is very little difference between 24 and 32 bit besides performance. Why waste performance on something that seems so utterly useless at this point in time? Maybe the 6800s will last until we see the effects of 32-bit precision... but I doubt it.

Forgiven. I know the 6800 will last long enough to be useful with SM3.0 b/c that's happening already with the FarCry 1.2 patch. As for IQ differences using 24 vs 32 bit, it might matter, it might not. But look at how well the GF4 Ti4x00s are holding up considering their age.
 
chrisf6969 said:
Forgiven. I know the 6800 will last long enough to be useful with SM3.0 b/c that's happening already with the FarCry 1.2 patch. As for IQ differences using 24 vs 32 bit, it might matter, it might not. But look at how well the GF4 Ti4x00s are holding up considering their age.
True, I am running a Ti 4200 and I can play FarCry with everything on medium; granted, it doesn't look flash and I get basically 35fps.
It's a toss-up, but I think technology is starting to outpace the 18-month rule... at a bare minimum in the GPU segment.
 
Lezmaka said:
Wrong again.

3DLabs Realizm supports PS3.0.

LOL!! Double whammy. Maybe he should have done some research before yelling proprietary and exclusive... etc. Granted, SM3.0 covers VS3.0 and PS3.0 (I believe), so that card supports at least the important part of SM3.0.

Years ago people were happy to be playing at 800x600 with 16-bit color, vs. the current 96/128-bit color (24/32-bit precision per component).

Now everyone is running the newest cards on the newest games at maxed out settings like 1600x1200 with 4xAA/8xAF so these cards should last people a while.

Granted, 2 years from now, with the newest, baddest games coming out... they might have to play at 1024x768 with no AA, etc. But it's still good gaming (like it's still good gaming on an old Ti4200 or slightly newer 9700 Pro).
 
The one thing I don't get is that Microsoft and everyone else have said SM3.0 won't increase the performance of a video card, but only makes coding shaders easier and adds some functions 2.0 doesn't have. So in that case, where is NV getting this huge performance leap from? Like c'mon, 20-30% increases when even the originator of 3.0 says it doesn't increase card performance.
 
Myrdhinn said:
The one thing I don't get is that Microsoft and everyone else have said SM3.0 won't increase the performance of a video card, but only makes coding shaders easier and adds some functions 2.0 doesn't have. So in that case, where is NV getting this huge performance leap from? Like c'mon, 20-30% increases when even the originator of 3.0 says it doesn't increase card performance.

"SM3.0 hardware won't increase performance on SM2.0 code" is probably what you actually read (out of context).

They have also said that you can't really produce better IQ with SM3.0, which is partially true. However, with SM2.0 (while it may be possible to make the IQ as good as SM3.0's), it may not be efficient enough for some effects that will be practical with SM3.0.

However, properly written SM3.0 code can bypass unnecessary loop iterations (conditional looping) that would still be executed with SM2.0 code. Branching can also make code more efficient, by reusing parts of it and jumping only to the sections needed for different surfaces, etc. It brings a lot of flexibility to coding shaders. You can also use a lot more light sources without as many wasted calculations. I'm not a game dev, but I've read a lot on it... and it's very exciting what will be possible with SM3.0.
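A toy C sketch of the kind of early-out that dynamic branching allows (hypothetical and simplified, not FarCry's shader code): instead of paying the full lighting cost for every light at every pixel, the per-pixel loop skips lights that are out of range.

```c
/* Hypothetical sketch of what SM3.0-style dynamic branching buys:
 * an early-out so a pixel skips work that cannot affect its result.
 * Without dynamic branching the equivalent shader typically evaluates
 * every light (or the app splits the work into extra passes); with
 * branching, the per-pixel loop bails out of out-of-range lights. */
#include <stdio.h>

typedef struct { float x, y, z, radius; } Light;

static float shade_pixel(float px, float py, float pz,
                         const Light *lights, int nlights,
                         long *ops /* count of "expensive" evaluations */)
{
    float sum = 0.0f;
    for (int i = 0; i < nlights; i++) {
        float dx = lights[i].x - px, dy = lights[i].y - py, dz = lights[i].z - pz;
        float dist2 = dx * dx + dy * dy + dz * dz;

        /* dynamic branch: an out-of-range light contributes nothing, skip it */
        if (dist2 > lights[i].radius * lights[i].radius)
            continue;

        (*ops)++;                          /* pretend this is the costly part */
        sum += 1.0f / (1.0f + dist2);      /* stand-in for the full lighting math */
    }
    return sum;
}

int main(void)
{
    Light lights[8];
    for (int i = 0; i < 8; i++) {
        Light l = { (float)(i * 10), 0.0f, 0.0f, 5.0f };
        lights[i] = l;
    }

    long ops = 0;
    float c = shade_pixel(1.0f, 0.0f, 0.0f, lights, 8, &ops);

    /* only the one nearby light does real work; the other 7 are branched past */
    printf("color=%.3f, expensive evaluations=%ld of 8\n", c, ops);
    return 0;
}
```

On hardware without dynamic branching, that kind of condition usually means evaluating both sides anyway (or splitting the work into extra passes), which is where the wasted calculations come from.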
 
Myrdhinn said:
The one thing I don't get is that Microsoft and anyone else down have said SM 3.0 won't increase the performance of a vid card but only makes coding shaders easier and adds some functions 2.0 doesn't have. So in that case, where is NV getting this huge performance leap from? Like c'mon, 20-30% increases when even the originator of 3.0 says it doesn't increase card performance.

Below are the results for the 6800 Ultra from the Anandtech SM3.0 review.

Demo                         Settings             FPS gain in 3.0   Increase in 3.0
mp_airstrip                  16x12                     0.1 fps            0.11%
                             16x12 4xAA/8xAF           2.0 fps            3.28%
mp_mangoriver                16x12                    -0.3 fps           -0.37%
                             16x12 4xAA/8xAF           1.4 fps            2.39%
research (NVIDIA-supplied)   16x12                     9.7 fps           14.29%
                             16x12 4xAA/8xAF          12.2 fps           25.90%
regulator (NVIDIA-supplied)  16x12                     2.8 fps            3.98%
                             16x12 4xAA/8xAF           3.2 fps            6.23%
training (NVIDIA-supplied)   16x12                     0.9 fps            1.24%
                             16x12 4xAA/8xAF           3.3 fps            6.26%
Volcano (NVIDIA-supplied)    16x12                     8.5 fps           12.43%
                             16x12 4xAA/8xAF          10.9 fps           21.41%

As you can see, in two of the demos, which appear to be the indoor ones, the NVIDIA card sees as high as a 26% increase in performance. The outdoor scenes don't seem to benefit as much in FarCry, including an actual drop in one Anandtech-made demo. The indoor increase makes a lot of sense, because indoors probably almost every single pixel on screen is being shaded, whereas outdoors only a select portion of them are.

The improvements in speed with 3.0 are pretty interesting, particularly in areas where heavy pixel shading is done. Clearly the Nvidia card still has significant issues handling 2.0 shading but does very well with 3.0 shading. Nvidia's challenge now is to get all the developers who use 2.0 to also use 3.0 in upcoming titles. As long as they get to play to their strengths while ATI plays to theirs, it will still primarily be a raw-power comparison for most games (where Nvidia does seem to have the lead).

Of course, if ATI gets people to use 2.0b, then they may see some speed improvements too. Who knows how things will then turn out.
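If you want the absolute frame rates behind those percentages, they can be backed out of the gain and the percentage (base ≈ gain / (pct/100)). A quick check of my own on two rows from the table above:

```c
/* Sanity check on the table above: derive the approximate SM2.0
 * baseline from the reported gain and percentage increase
 * (base = gain / (pct / 100)). Rows used: research and Volcano at
 * 16x12 with 4xAA/8xAF, per the Anandtech numbers quoted above. */
#include <stdio.h>

int main(void)
{
    struct { const char *demo; double gain_fps; double pct; } rows[] = {
        { "research 16x12 4xAA/8xAF", 12.2, 25.90 },
        { "Volcano  16x12 4xAA/8xAF", 10.9, 21.41 },
    };

    for (int i = 0; i < 2; i++) {
        double base = rows[i].gain_fps / (rows[i].pct / 100.0);
        printf("%s: ~%.1f fps (SM2.0) -> ~%.1f fps (SM3.0)\n",
               rows[i].demo, base, base + rows[i].gain_fps);
    }
    return 0;
}
```

That works out to baselines of roughly 47 and 51 fps, so the big SM3.0 wins are landing in the range where the extra frames are actually noticeable.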
 
oqvist said:
Listen, nVidia releases 10 beta drivers and 1 WHQL.

ATI releases 1 WHQL, 1 WHQL beta, and 1 WHQL; it's a slight difference.

Generally people benchmark with WHQL-certified drivers for ATI and beta drivers for nVidia.

It's not okay ;)

Nvidia never releases beta drivers; all those drivers floating around the net are Nvidia development drivers leaked by board partners and whatnot... At least I can't find any beta drivers on Nvidia's site.
 
InkSpot said:
Nvidia never releases beta drivers; all those drivers floating around the net are Nvidia development drivers leaked by board partners and whatnot... At least I can't find any beta drivers on Nvidia's site.

Psst, you'll have better luck with your plastic man action figure collection than some of these guys.
 