Brent_Justice
Moderator
Joined: Apr 17, 2000
Messages: 17,755
Check out the video, gimme your thoughts. How do you feel about NVIDIA's Hybrid approach, is it still a "hack" as the video calls it, similar to rasterization?
Well, don't worry about full ray tracing; you won't get that from Nvidia.
No, you are going to keep paying Nvidia premium prices, because they promise something they have no intention of delivering. The video made that clear, at least to me.

> you won't get that from Nvidia

Nor AMD... so your point being? That we sit down and do nothing until all parties are ready?
He is right... nVidia is walking a very fine line to move the industry where it needs to go. If nVidia can go from 1 SPS at 2K or 4K and 2x that every 2 years, it will be less than a decade before photorealism is possible in action-based games (i.e., not just staring at a screenshot), thus nuking the need to rasterize. After that it is just back to the days of incremental improvements. But I can see 50 SPS + denoise in the next decade without issue.
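As a quick sanity check on the doubling math above (a back-of-the-envelope sketch, taking the 1-to-50 sample figures at face value):

```python
import math

# Starting at 1 sample and doubling every 2 years,
# how long until we hit the 50-sample target from the post?
start_samples, target_samples = 1, 50
years_per_doubling = 2

doublings = math.log2(target_samples / start_samples)  # ~5.64 doublings
years = doublings * years_per_doubling
print(round(years, 1))  # 11.3
```

So at that cadence it lands a bit past the decade mark, which roughly matches the "next decade" framing in the post.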
And how? The Nvidia die is already over 700 mm². The tools required to reach smaller nanometer process nodes are currently so hard to build that Intel, the company that led the fab race for a long, long while, has stagnated on its 14nm FinFET process for the past 5 years. Another step from 7nm to 5nm is not going to make things much better; process scaling is going to stop progressing unless another method is used that allows an easier path.
And all the current features only work on Nvidia hardware, which translates to Nvidia needing to implement all of them. The current lack of games only shows that Nvidia is not really committed.
> You are making the assumption that ray tracing silicon can't be made more efficient, nor will it take the place of raster tech. Remember what the 8800 series did to gaming. It was such a huge fucking tech shift that it dominated the market for 2+ years. But that was back in the day when a new die development didn't cost you the GDP of many small nations. Now they need to be a bit more nuanced, but based upon how much of the die is for ray tracing vs raster... there are massive gains there alone.

Or solved by multiple dedicated smaller RT chips, AI chips, etc.
> The other option is multiple cards hooked up by a very fast interlink.

hmmm.... you mean like nvlink?
Didn't everyone know that ray tracing isn't going to be here with this generation? I thought we all knew that.
> hmmm.... you mean like nvlink?

Or Infinity Fabric if referencing AMD, QPI if referencing Intel, with all kinds of other "modern" examples.
it might not ever be here is the point...
> hmmm.... you mean like nvlink?

Maybe. NVLink does give higher bandwidth; I wonder what Nvidia is planning to use that extra bandwidth for, or why incorporate it at all. Can developers use a two-GPU (or more) setup where RT is being done by all cards while only one GPU handles the rasterization end? Don't know. Much more than just mGPU or SLI, in other words, as in added flexibility.
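For rough context on that bandwidth question (the figures below are approximate published peak numbers, not from this thread):

```python
# Approximate peak bandwidth per direction, in GB/s.
# These are ballpark published figures, not measurements.
pcie3_x16 = 15.75       # PCIe 3.0 x16: ~985 MB/s per lane x 16 lanes
nvlink_tu102 = 50.0     # TU102 NVLink bridge: 2 links x ~25 GB/s each

ratio = nvlink_tu102 / pcie3_x16
print(f"NVLink is ~{ratio:.1f}x PCIe 3.0 x16")  # ~3.2x
```

That headroom is what makes the "RT on all GPUs, raster on one" split at least plausible, since shading results would have to move between cards every frame.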
A dedicated ray tracing card is a financially stupid move. It's already niche enough with RT baked in; imagine what would happen if it were optional? Nothing would change, and every year there'd be 1-2 marquee AAA games plus a bunch of indie games using the tech for extra market exposure. Just like what happened with PhysX.
Why do people keep claiming that PhysX is dead? ITS THE MOST POPULAR PHYSICS ENGINE.
> You are making the assumption that ray tracing silicon can't be made more efficient nor will it take the place of raster tech. Remember what the 8800 series did to gaming. It was a huge fucking tech shift that it dominated the market for 2+ years. But that was back in the day when a new die development didn't cost you the GDP of many small nations. Now they need to be a bit more nuanced but based upon how much of the die is for ray tracing vs raster...there is massive gains there alone.

You could gather it from the same percentage gain in older generations and see how much die size was used for the increase last time around.
And what you can see from the RTX 2080 Ti is that it is the same silicon as the Quadro. It must have been a long time since anything like this last happened; Nvidia tends to never sell their best chip to consumers (as opposed to professionals). That might also explain a little of how little headroom there is; check the RTX 2080.
https://hardforum.com/data/attachment-files/2018/09/158254_Nvidia_table.jpg
> The RTX 2080Ti is a cut down silicon compared to the Quadro 8000. nvidia does this all the time.

Nope. They did that way, way back, but in previous generations they never did it.
I would not generalize it as cut down, just a lesser-binned version with some of the features disabled.
> Not sure what you're trying to say. They did the exact same thing with the 1080 Ti and Quadro P6000.

I'm saying that the TU102 is the same die size.
> I'm saying that the TU102 is the same die size.

This launch is different. I mistakenly thought they never sold it before, but it seems it is the same as with the previous GeForce 1080 Ti; only the timing is different. This RTX launch sees a Ti made from the top silicon at the start of the cycle rather than in the middle of it.
This launch is different. NVIDIA is not relying on cut-down SKUs to make up their consumer product stack.
They're not "cut down" because they don't have to - they just sell you an inferior GPU, making more money than they did previously. This looks particularly suspicious to me:
GP106    ->  1060
GP104-2  ->  1070
GP104-3  ->  1070 Ti
GP104-4  ->  1080
GP102    ->  1080 Ti

TU106    ->  2070
TU104-4  ->  2080
TU102    ->  2080 Ti
Product segmentation has been clear for a while with Nvidia:
107 low end
106 mid range
104 high end
102 luxury
A 106 core would get you a mid range card. With Turing, you pay high end money for it.
A 104 core would get you a high end card. With Turing, you still pay high end.
A 102 core would get you a luxury card. With Turing, you still pay luxury.
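The shift described in the three lines above can be summarized in a tiny sketch (tier labels are the poster's own framing, not official NVIDIA segments):

```python
# Which price tier each die class lands in, per the poster's argument:
# only the x106 class moved up a tier with Turing.
pascal = {"106": "mid range", "104": "high end", "102": "luxury"}
turing = {"106": "high end", "104": "high end", "102": "luxury"}

for die in pascal:
    print(f"x{die}: Pascal {pascal[die]} -> Turing {turing[die]}")
```

Laid out this way, the complaint is that the 2070's TU106 die occupies the price bracket a GP104-class die used to, with everything below it expected to slide up one bracket as well.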
What will happen to 104-2 and 104-3? One of them has to become the 2070ti. I'm guessing 104-2 as it's a good jump from 106 and will remain far from the 104-4 performance. That means 104-3 disappears.
Most worrying, what will happen to the mid range core? A 2060 with a TU107 core, which should belong in the low end x50 range? Notice that the 2070 isn't a 106-4 or -3; since it's its own GPU core unrelated to the 2080, it's not in itself a cut-down version. This suggests it can be cut down to a 106-2 to become a 2060 Ti, which would mimic the 2070 Ti 104-2 vs 2080 104-4 divide and push the 2060 down to a full TU107-4, with a 107-2 becoming the 2050 Ti. The structure, though speculative, seems to be shaping up like this.
All the signs point to Nvidia moving all the cores down generation to generation, thereby charging you what would've been the upper model for lower model performance. Can AMD come back in force, please? Because Nvidia is getting increasingly (obviously) greedy.
Look at the dies... not just the pretty PR names... SKU bandwidth and size.
> Since when are core names like GP104 "pretty PR names"? And since when do they have to do anything with die size? As far as I know they've always been related to product tier segmentation.

I could spoonfeed you, but I found this to be a waste of time.