NVIDIA announces DLSS 3.5 with Ray Reconstruction, launches this Fall

I never claimed they invented the concept; I said they developed new algorithms and processes that were orders of magnitude faster and more efficient than anything anybody else was doing. They hold the patents on those algorithms and processes, and those patents are what primarily give them the edge they have over AMD or Intel when it comes to ray tracing performance. At this time, Nvidia's algorithms allow them to be something like 30% faster than AMD, transistor for transistor, when it comes to accelerating ray tracing effects, which is a huge leap and why Nvidia manages to stay consistently ahead of AMD, who has to try to replicate the functionality while dancing around patents.
Why do you guys keep bringing up AMD (jesus, red vs green on the brain)? We are talking about path tracing and the improvements in DLSS to deal with path tracing. I'm mentioning that path tracing has downsides. How does this address that?
Finding Dory, Cars 3, and Coco were all rendered using Pixar's RenderMan and their supercomputer (currently ranked 25th fastest), and they are entirely path traced, using brute force to implement the entirety of the Reyes algorithms, a topic Nvidia has been consistently researching, publishing, and patenting going back to at least 2005.
Here is a link to one of their many such early works on the topic.
Reyes rendering was created in the 1980s and is NOT unique to Nvidia. Renderman was created in 1993 and used Reyes rendering THEN, and it predates ANY work that nVidia has done on the topic. Reyes rendering was completely dropped from Renderman in 2016. Most likely RIS, which doesn't include Reyes rendering at all, was used for Finding Dory. Bolting Nvidia's patents on speeding up Reyes onto Renderman is a crazy attribution.
 
Because it's still more physically correct and plausible. It's the end game any way you slice it. You just cannot express certain things properly in raster or it's completely impractical.
If you're introducing probability then that accuracy is questionable, and the human brain cannot decipher the difference on its own. So from my perspective it's chasing the dragon, because you're selecting rendering that's not all that accurate depending on the hardware you're feeding it through (talking about power, not vendor).
There are fewer and fewer art hacks, and certain things just fall out of the system by virtue of doing it the "right" way. This is why actual real-world optical phenomena like pinhole cameras can be observed in Minecraft of all things.
The right way is really subjective.
 
Reyes rendering was created in the 1980s and is NOT unique to Nvidia. Renderman was created in 1993 and used Reyes rendering THEN, and it predates ANY work that nVidia has done on the topic. Reyes rendering was completely dropped from Renderman in 2016. Most likely RIS, which doesn't include Reyes rendering at all, was used for Finding Dory. Bolting Nvidia's patents on speeding up Reyes onto Renderman is a crazy attribution.
Never said they invented it. I am saying they dramatically improved upon it, and created a series of algorithms and accelerators that did what it does in a fraction of the time and with a fraction of the energy. And Nvidia leapfrogged the entire industry in the process.
Yes, in 2016 Pixar rewrote Renderman to use modern path tracing algorithms, as described in their paper, RenderMan: An Advanced Path Tracing Architecture for Movie Rendering, and their algorithms are built for extreme accuracy, render times and costs be damned. Not exactly anything targeted at real-time rendering, so no shortcuts taken; it is an entirely different animal.

But if you want specifics on how Nvidia upended everything with their work on real-time light transport watch this guy, he does it better than I could.

https://www.youtube.com/watch?v=NRmkr50mkEE
 
Never said they invented it. I am saying they dramatically improved upon it, and created a series of algorithms and accelerators that did what it does in a fraction of the time and with a fraction of the energy. And Nvidia leapfrogged the entire industry in the process.
Every company involved in SIGGRAPH has improved speed for Reyes rendering, and Nvidia by no means created the biggest gains. Imagination far exceeds the work of nVidia, as they were doing tile-based rendering in 1996, which most likely led to many of the gains in Reyes rendering. This technique was also put into use for the Dreamcast. Nvidia didn't come out with tile rendering until Maxwell in 2014, and AMD in 2017. In 2014 Pixar had already started work on RIS, and that was most likely used along with path tracing for Finding Dory. So bolting Reyes to the discussion seems odd. Reyes rendering isn't even synonymous with path tracing, so I don't know how you've bolted it to the discussion.
 
If you're introducing probability then that accuracy is questionable, and the human brain cannot decipher the difference on its own. So from my perspective it's chasing the dragon, because you're selecting rendering that's not all that accurate depending on the hardware you're feeding it through (talking about power, not vendor).

The right way is really subjective.

The randomness of a Monte Carlo integration IMPROVES accuracy.
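To put a number on that, here is a quick toy sketch of my own (not from any paper): estimating the integral of sin(x) over [0, pi] with uniform random samples. The estimator is unbiased, so the error keeps shrinking as you add samples, roughly as 1/sqrt(N).

```python
# Toy Monte Carlo integration: estimate the integral of sin(x) over [0, pi].
# The exact answer is 2.0; random sampling converges toward it as N grows.
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n_samples):
    # Uniform random samples over [0, pi]; average of f(x) times the interval length.
    x = rng.uniform(0.0, np.pi, n_samples)
    return np.pi * np.mean(np.sin(x))

for n in (10, 100, 10_000, 1_000_000):
    est = mc_estimate(n)
    print(f"N={n:>9}: estimate={est:.5f}, error={abs(est - 2.0):.5f}")
```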
 
Yes, in 2016 Pixar rewrote Renderman to use modern path tracing algorithms, as described in their paper, RenderMan: An Advanced Path Tracing Architecture for Movie Rendering, and their algorithms are built for extreme accuracy, render times and costs be damned. Not exactly anything targeted at real-time rendering, so no shortcuts taken; it is an entirely different animal.
Yup, I've read that paper. You should read it in its entirety because it literally talks about what I was saying.
[screenshot: excerpt from the paper]

As you can see, Reyes doesn't come into play at all here. Why did you mention it before? 🤷‍♂️

Also in the paper the problem of noise is mentioned, which was inherent in the technique. Rather than green vs red, my point from the beginning was about the technique and its downsides.
 
Every company involved in SIGGRAPH has improved speed for Reyes rendering, and Nvidia by no means created the biggest gains. Imagination far exceeds the work of nVidia, as they were doing tile-based rendering in 1996, which most likely led to many of the gains in Reyes rendering. This technique was also put into use for the Dreamcast. Nvidia didn't come out with tile rendering until Maxwell in 2014, and AMD in 2017. In 2014 Pixar had already started work on RIS, and that was most likely used along with path tracing for Finding Dory. So bolting Reyes to the discussion seems odd. Reyes rendering isn't even synonymous with path tracing, so I don't know how you've bolted it to the discussion.
You brought Pixar into it, and their Reyes RenderMan tools, literally in the name of the toolset you were trying to cite.
 
You brought Pixar into it, and their Reyes RenderMan tools, literally in the name of the toolset you were trying to cite.
Reyes isn't used with path tracing, and Renderman moved to path tracing. I literally mentioned this posts ago. If you didn't know that happened, that's not really my fault.
 
Yup, I've read that paper. You should read it in its entirety because it literally talks about what I was saying.
[screenshot: excerpt from the paper]
As you can see, Reyes doesn't come into play at all here. Why did you mention it before? 🤷‍♂️

Also in the paper the problem of noise is mentioned, which was inherent in the technique. Rather than green vs red, my point from the beginning was about the technique and its downsides.
Maybe instead of taking that one little snippet, you should look at the rest of it, where they detail all the methods they implemented to address the shortcomings they found with the Reyes methods they were actively using.
 
Maybe instead of taking that one little snippet, you should look at the rest of it, where they detail all the methods they implemented to address the shortcomings they found with the Reyes methods they were actively using.
That literally changes nothing of what I said. Renderman hasn't used Reyes since 2016; you brought that into the discussion. Again, it looks like you didn't know they moved to path tracing.
 
That literally changes nothing of what I said. Renderman hasn't used Reyes since 2016; you brought that into the discussion. Again, it looks like you didn't know they moved to path tracing.
The paper literally describes how they are using it in 2018.

And Pixar has absolutely nothing to do with the work that Nvidia has been pioneering in Ray and Path tracing over the last decade and change, where they have managed to introduce new methods that are in some cases 100x faster than anything anybody else was doing at the time.

Nvidia in the last 10+ years has completely rewritten the book on how light transport is calculated and rendered.
 
Renderman dev notes say it was removed. Don't know what else to tell you.
The paper goes on to detail how they implemented shadow mapping and newer render methods to deal with the depth of field and motion blur problems Reyes was leaving them with.
There could be something I am missing between the paper's publishing date and the new versions of RenderMan.
 
And Pixar has absolutely nothing to do with the work that Nvidia has been pioneering in Ray and Path tracing over the last decade and change, where they have managed to introduce new methods that are in some cases 100x faster than anything anybody else was doing at the time.

Nvidia in the last 10+ years has completely rewritten the book on how light transport is calculated and rendered.
Sure. Would you like me to post Pixar's white paper on image reconstruction using AI?
 
Sure. Would you like me to post Pixar's white paper on image reconstruction using AI?
I would like to read that, please. I have a trip this weekend, 5 days on a houseboat with the in-laws, and I will need something to keep me busy; otherwise they are just going to drive me to drink, and unfortunately for me there is no escape.
 
I noticed that DLSS 3.5 works on all RTX graphics cards. There's a reason why modders want to be paid to put DLSS 3 into games: developers won't put in technology that only works on GPUs that sold poorly. Nvidia is learning.
 
I noticed that DLSS 3.5 works on all RTX graphics cards. There's a reason why modders want to be paid to put DLSS 3 into games: developers won't put in technology that only works on GPUs that sold poorly. Nvidia is learning.
3.0 through 3.4 worked on all RTX GPUs as well.
They just don't all do Frame Generation. Path tracing is an extension of the ray tracing functions, so this is an update to the denoising tools that already exist on all RTX cards and function at a driver level.
 
I would like to read that, please. I have a trip this weekend, 5 days on a houseboat with the in-laws, and I will need something to keep me busy; otherwise they are just going to drive me to drink, and unfortunately for me there is no escape.
Here u go. DLSS was originally DLSR.
Deep Learned Super Resolution for Feature Film Production
(There u go, sorry, took a minute to fix the link)
Originally done in PyTorch, and it predates DLSS.
 
Here u go. DLSS was originally DLSR.
Deep Learned Super Resolution for Feature Film Production
(There u go, sorry, took a minute to fix the link)
Originally done in PyTorch, and it predates DLSS.
If anything it looks more like an expansion of Nvidia's work: they are retraining for HDR, citing that existing upscaling technologies don't handle HDR content correctly, with the initial training done in PyTorch on a pair of P6000s and the final work done on a DGX-2.
I know Nvidia works very closely with Disney and Pixar, so they certainly feed off each other.
Thanks for that one though; its link at https://dl.acm.org/doi/10.1145/3388767.3407334 led to a few other papers, so that should give me enough of a rabbit hole to go down to keep me out of trouble from Sunday through Thursday.
 
If anything it looks more like an expansion of Nvidia's work: they are retraining for HDR, citing that existing upscaling technologies don't handle HDR content correctly, with the initial training done in PyTorch on a pair of P6000s and the final work done on a DGX-2.
I know Nvidia works very closely with Disney and Pixar, so they certainly feed off each other.
Thanks for that one though; its link at https://dl.acm.org/doi/10.1145/3388767.3407334 led to a few other papers, so that should give me enough of a rabbit hole to go down to keep me out of trouble from Sunday through Thursday.
Yes, they used Nvidia video cards to accelerate the processing of images, but the initial work was started before Nvidia's; you have to look at the footnotes.

PyTorch predates DLSS by a significant amount of time with regard to image reconstruction using deep learning.
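For anyone wondering what "deep learned super resolution" actually boils down to in PyTorch, here is a bare-bones sketch of my own (not Pixar's or Nvidia's actual network, just the general shape of the idea): a tiny convolutional net that takes a low-res frame and predicts a 2x upscaled one, trained against the full-res reference frame.

```python
# Minimal sketch of deep-learned super resolution (illustrative only):
# a tiny CNN that maps a low-res frame to a 2x upscaled frame.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    def __init__(self, channels=3, features=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Predict scale^2 * channels, then rearrange into a larger image.
            nn.Conv2d(features, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res):
        return self.body(low_res)

model = TinySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Dummy data standing in for (low-res render, full-res reference) pairs.
low = torch.rand(4, 3, 64, 64)
high = torch.rand(4, 3, 128, 128)

for step in range(10):
    pred = model(low)
    loss = loss_fn(pred, high)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The production networks are far bigger and, from what the papers describe, feed in extra render data, but the training loop is conceptually this shape.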
 
If this is not enough to get called 4.0, the rumours that DLSS 4.0 is really audacious and possibly a big deal that is scaring AMD (per Moore's Law Is Dead) could be true. Not necessarily because it clearly goes beyond general training or frame gen, but otherwise wouldn't they be running out of things to do...
 
Well, it's good to see Turing and Ampere haven't been left out of ray reconstruction, at least. I wouldn't be surprised if there were caveats to running it on older cards to shoehorn you into buying an Ada Lovelace card, but it's great if there aren't.
 
Is this the link to the right paper? This one is from 2020 (and not for high-fps real time like what Nvidia is trying to do).
That's when the paper was submitted to SIGGRAPH. Typically these types of papers talk about the technology AFTER it's developed. So that paper is talking about Pixar starting image reconstruction / image upscaling using AI back in 2016/2017. The tech that's used in DLSS isn't unique to nVidia by any means. It spawned, amazingly enough, from the work of big social media companies and colleges, where it was used to determine what objects were being uploaded to their platforms. "That's a bus, that's a milkshake, that's titties! That's a little titty (blows up image) but it's still a titty." That sort of thing. Pixar, however, used the technology to reduce rendering times (sound familiar ;)). This starts around 2016/2017. Nvidia hadn't even begun training its data sets at that time. Nvidia wouldn't officially launch DLSS 1.0 until 2019. The vast majority of what you're seeing from nVidia with regards to DLSS and ray tracing comes from their work with Pixar, and Pixar has far more patents than most people realize when it comes to ray tracing / CGI movement. Some of these patents are going to sound really familiar to you:

* Methods and apparatus for determining high quality sampling data from low quality sampling data (2005)
* Reorienting properties in hair dynamics (2010)
* Temporal techniques of denoising Monte Carlo renderings using neural networks (2018)
* De-noising images using machine learning (2019)

Pixar's worth isn't the movies it puts out. That's just a way to fund their development. Pixar's worth is the technology that goes into making the movies.
So when someone tells me nVidia is "writing the playbook" or advancing the technology to some unseen heights, not only is that NOT accurate, it leaves out quite a few companies that really laid the foundation for what everyone else is enjoying.
 
That's when the paper was submitted to SIGGRAPH. Typically these types of papers talk about the technology AFTER it's developed.
But the paper quotes papers from 2019 and used movies like Onward and Soul in their training data. PyTorch is quite recent as well. I can easily believe they would have worked like this for a while (even though, considering the nature of what they do, most of their effort must be on the high-quality final product rather than high-quality quick previews).

So that paper is talking about Pixar starting image reconstruction / image upscaling using AI back in 2016/2017
Maybe I misread it, but I doubt Pixar would be interested in a real-time type of thing so much as a "we use a supercomputer and take 1-2-4 s per image" kind of thing (15 s per image in what they did in the paper), and they know the future frames in advance.

Nvidia wouldn't officially launch DLSS 1.0 until 2019.
Yes, but it was announced in 2018; they could have been working on it when they started to consider putting tensor cores on Turing, around the Pascal release. Not sure that SIGGRAPH paper makes it clear who is influencing whom.
 
But the paper quotes papers from 2019 and used movies like Onward and Soul in their training data.
Coco is named as well. That released when? 2017.
PyTorch is quite recent as well. I can easily believe they would have worked like this for a while (even though, considering the nature of what they do, most of their effort must be on the high-quality final product rather than high-quality quick previews).
PyTorch is older than DLSS. Look up deep learning networks. It's actually pretty comical that people believe Nvidia started it all. But no, that couldn't be further from the truth. Welcome to marketing.
Maybe I misread it, but I doubt Pixar would be interested in a real-time type of thing so much as a "we use a supercomputer and take 1-2-4 s per image" kind of thing (15 s per image in what they did in the paper), and they know the future frames in advance.
It's the same tech literally.
Yes, but it was announced in 2018; not sure that SIGGRAPH paper makes it clear who is influencing whom.
All I can tell you is Nvidia is not the grandfather of this tech. You can choose not to believe me; that's fine.
 
All I can tell you is Nvidia is not the grandfather of this tech. You can choose not to believe me; that's fine.
I am sure they are just the people making it work for real-time use, yes. I am just saying that paper does not seem to show it influenced them (or at least it's not clear at all), and my guess is it was not actually the first to use deep learning for upscaling either, though probably not by much time.

PyTorch is older than DLSS. Look up deep learning networks. It's actually pretty comical that people believe Nvidia started it all. But no, that couldn't be further from the truth. Welcome to marketing.
PyTorch's first public release was in 2016; obviously deep learning networks are much older than PyTorch.

Coco is named as well. That released when? 2017.
Not sure why that would be relevant versus the latest movies used, which give an idea of when it happened, since they could use any movie from the past but none from the future. It's not that clear because of how long it takes to make them, but those were planned 2020 releases.
 
That's when the paper was submitted to SIGGRAPH. Typically these types of papers talk about the technology AFTER it's developed. So that paper is talking about Pixar starting image reconstruction / image upscaling using AI back in 2016/2017. The tech that's used in DLSS isn't unique to nVidia by any means. It spawned, amazingly enough, from the work of big social media companies and colleges, where it was used to determine what objects were being uploaded to their platforms. "That's a bus, that's a milkshake, that's titties! That's a little titty (blows up image) but it's still a titty." That sort of thing. Pixar, however, used the technology to reduce rendering times (sound familiar ;)). This starts around 2016/2017. Nvidia hadn't even begun training its data sets at that time. Nvidia wouldn't officially launch DLSS 1.0 until 2019. The vast majority of what you're seeing from nVidia with regards to DLSS and ray tracing comes from their work with Pixar, and Pixar has far more patents than most people realize when it comes to ray tracing / CGI movement. Some of these patents are going to sound really familiar to you:

* Methods and apparatus for determining high quality sampling data from low quality sampling data (2005)
* Reorienting properties in hair dynamics (2010)
* Temporal techniques of denoising Monte Carlo renderings using neural networks (2018)
* De-noising images using machine learning (2019)

Pixar's worth isn't the movies it puts out. That's just a way to fund their development. Pixar's worth is the technology that goes into making the movies.
So when someone tells me nVidia is "writing the playbook" or advancing the technology to some unseen heights, not only is that NOT accurate, it leaves out quite a few companies that really laid the foundation for what everyone else is enjoying.
This tracks. From the time I first see new graphical techniques in offline CPU farm renders to seeing some form of them in real time, I've noticed there's about a 5-10 year delay, give or take (at least on some of the simpler things - ray tracing took far longer, as we've known about it since the '70s). And who have I consistently seen push the boundaries farther out in farm renders? Pixar (among others).
 

https://youtu.be/HwGbQwoMCxM

Seems to significantly improve the light ghosting and splotchiness/fizzle. Looks like a pretty good quality bump.


God damn beautiful. The diner scene looks insane and the ray reconstruction seems to help a lot.

Say what you want, Nvidia is delivering constantly and the higher price is 100% worth it.
 
They seem to be transitioning from path tracing to full ray tracing marketing-wise (which is probably better if it becomes the norm; RT vs path tracing made it seem as if path tracing was not just ray tracing, or was something different).

4k image comp:
https://www.nvidia.com/en-us/geforce/comparisons/alan-wake-2-rtx-comparison-004/

The cup lowest in the image, in its handle hole, not having the strange shadow that should not be there, creates more of an illusion that it's a 3D scene; same for the top-left plane, which goes from floating to grounded.
 
They seem to be transitioning from path tracing to full ray tracing (which is probably better; RT vs path tracing made it seem as if path tracing was not just ray tracing, or was something different)
Looks better. Ray tracing in current games has seemed a bit off to me, since it's very visible on some objects, making the ray-traced objects seem out of place. This looks more like full-scene ray tracing, looks leaps better, and looks less "gimmicky". I look forward to seeing independent reviews of this. :)
 
They seem to be transitioning from path tracing to full ray tracing marketing-wise (which is probably better if it becomes the norm; RT vs path tracing made it seem as if path tracing was not just ray tracing, or was something different).

I also think calling it full ray tracing makes sense... path tracing always seemed like the ultimate RT, but it just causes more confusion for people who aren't versed in all the technical details
 
Certain things kind of just fall out of a full blown path tracer by virtue of you solving the rendering equation. It's very general of an algorithm... because it's simulating light transport.

Like take ray traced ambient occlusion as an example. You have your intersection point, you fire some more rays to estimate occlusion. At this point, you're thinking of it as a distinct operation, like you're deliberately trying to estimate ambient occlusion. You've essentially created a localized technique that solves for one specific thing.
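Something like this, as a rough sketch (purely illustrative; scene_occluded() below is just a stand-in for whatever ray/scene intersection query a real renderer would expose):

```python
# Toy ray-traced ambient occlusion at a single hit point (illustrative only).
# scene_occluded() is a placeholder for a real ray/scene intersection query.
import math, random

def scene_occluded(origin, direction, max_dist):
    # Stand-in: pretend anything pointing "down" hits nearby geometry.
    return direction[2] < 0.0

def sample_hemisphere(normal):
    # Uniform-ish random direction on the hemisphere around the normal.
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        if sum(x * x for x in d) <= 1.0:
            break
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    # Flip into the hemisphere of the normal.
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-x for x in d]
    return d

def ambient_occlusion(hit_point, normal, num_rays=64, radius=1.0):
    unoccluded = 0
    for _ in range(num_rays):
        direction = sample_hemisphere(normal)
        if not scene_occluded(hit_point, direction, radius):
            unoccluded += 1
    return unoccluded / num_rays  # 1.0 = fully open, 0.0 = fully occluded

print(ambient_occlusion((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```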

In a path tracer, you're sort of trying to solve the entire light transport - the algorithm is intended to be a comprehensive simulation of light being light. Rays are intended to bounce multiple times and innately capture reflections, refractions, color bleeding, and more.
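A minimal, self-contained sketch of that loop (my own toy, not any real renderer): one diffuse ground plane under a uniform sky, with each path bouncing until it escapes or gets cut off, accumulating radiance weighted by the throughput of the bounces so far.

```python
# Skeleton of a Monte Carlo path tracer loop (illustrative only):
# a single diffuse ground plane under a uniform sky.
import math
import random

MAX_BOUNCES = 8
GROUND_ALBEDO = 0.5   # fraction of light the floor reflects
SKY_RADIANCE = 1.0    # uniform sky brightness

def cosine_sample_hemisphere():
    # Cosine-weighted random direction about the +y axis (up).
    r1, r2 = random.random(), random.random()
    phi = 2.0 * math.pi * r1
    sin_theta = math.sqrt(r2)
    return (math.cos(phi) * sin_theta, math.sqrt(1.0 - r2), math.sin(phi) * sin_theta)

def trace_path(origin, direction):
    radiance = 0.0
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        ox, oy, oz = origin
        dx, dy, dz = direction
        if dy >= 0.0:
            # Ray escapes upward: it "sees" the sky and the path ends.
            radiance += throughput * SKY_RADIANCE
            break
        # Otherwise it hits the ground plane y = 0.
        t = -oy / dy
        hit = (ox + t * dx, 0.0, oz + t * dz)
        # Diffuse bounce: with cosine-weighted sampling the BSDF/pdf ratio is the albedo.
        throughput *= GROUND_ALBEDO
        origin = hit
        direction = cosine_sample_hemisphere()
    return radiance

# Average many paths through one "pixel" looking down at the floor.
n = 100_000
estimate = sum(trace_path((0.0, 1.0, 0.0), (0.3, -0.9, 0.1)) for _ in range(n)) / n
print(f"estimated radiance: {estimate:.4f}")  # approaches SKY_RADIANCE * albedo = 0.5
```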

Though you can get into sort of murky terminology territory, if the ray tracer reaches the point where it's stochastically sampling the full range of potential light paths in a scene, accounting for bounces of both diffuse and specular interactions using Monte Carlo integration, it effectively becomes/is a path tracer.

They have randomness because a deterministic bounce doesn't make much sense in the context of a Monte Carlo path tracer. Determinism just makes it harder to converge because there's less new data over time. You'll need to throw even more rays to get sane quality, because you've guaranteed you'll miss paths unless you throw infinite rays at it. It can never converge towards ground truth without a different sampling strategy.
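A toy way to see why (again my own sketch): accumulate samples over "frames" of a made-up incoming-light function. Reusing the same fixed directions every frame gets stuck at a biased answer no matter how long you accumulate, while random directions keep adding new information and converge toward the ground truth.

```python
# Toy illustration: accumulating samples over "frames" to estimate the average
# of a fake incoming-light function over [0, pi/2] (stand-in for a hemisphere).
# Fixed, deterministic directions get stuck at a biased value; random
# directions keep adding new information and converge to ground truth.
import math, random

def incoming_light(angle):
    # Arbitrary stand-in for "what a bounced ray would see" in this direction.
    return 1.0 + 0.5 * math.sin(7.0 * angle)

ground_truth = sum(incoming_light(i / 100000 * math.pi / 2) for i in range(100000)) / 100000

fixed_dirs = [0.1, 0.5, 0.9, 1.3]      # same 4 directions every frame
fixed_sum, rand_sum, count = 0.0, 0.0, 0

for frame in range(1, 5001):
    for d in fixed_dirs:
        fixed_sum += incoming_light(d)
    for _ in range(len(fixed_dirs)):
        rand_sum += incoming_light(random.uniform(0.0, math.pi / 2))
    count += len(fixed_dirs)

print(f"ground truth      : {ground_truth:.4f}")
print(f"fixed directions  : {fixed_sum / count:.4f}  (biased, never improves)")
print(f"random directions : {rand_sum / count:.4f}  (converges with more frames)")
```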
 