Nvidia Says Native Resolution Gaming is Out, DLSS is Here to Stay

My bad, that's a grab bag. I've never had good results running non-native resolutions regardless of who made the card; some displays handle it better than others, but it's a total crap shoot and I've given up trying.

I can do it in windowed mode, of course, but not full screen.

I used to do this all the time, with both Nvidia and AMD cards. I'd just create a custom resolution, and tell it not to scale, and run it letterboxed.

It was a common trick for me before I got the 4090, to bring an otherwise lacking framerate at 4K up a little bit.

Essentially I'd turn my 4K monitor into a 3840x1646 21:9 screen and pick up some framerate in the process.
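For the curious, here's the rough math on that trick, assuming frame time scales roughly with pixel count (a simplification, but close enough for a ballpark):

Code:
# Rough sketch: how much rendering work the 3840x1646 letterbox trick saves
# compared to full 3840x2160, assuming frame time scales with pixel count.
native = (3840, 2160)
letterbox = (3840, 1646)

native_px = native[0] * native[1]           # 8,294,400 pixels
letterbox_px = letterbox[0] * letterbox[1]  # 6,320,640 pixels

print(f"Aspect ratio: {letterbox[0] / letterbox[1]:.2f}:1 (21:9 is {21 / 9:.2f}:1)")
print(f"Pixels rendered: {letterbox_px / native_px:.0%} of native")
print(f"Rough upper bound on framerate gain: {native_px / letterbox_px - 1:.0%}")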

This worked well with both AMD and Nvidia for me. Never had a problem.

I haven't done it in a while, though. Maybe the feature has somehow been broken as of late? Maybe a Win11 thing, as I have never used Win11?
 
Still waiting for people to realize "It's better than native!" is actually bad. This is something that cellphone camera reviewers realized long ago.

The only downside with native resolution - IMHO - is the inherent aliasing.

Most AA techniques are imperfect and introduce unwanted blur.

Running native with DLAA is seemingly the best of both worlds right now.
 
The only downside with native resolution - IMHO - is the inherent aliasing.

Most AA techniques are imperfect and introduce unwanted blur.

Running native with DLAA is seemingly the best of both worlds right now.
If DLAA actually worked great in every game it was in, then I would use it every time. It infuriates me to turn on a setting I know should give me the best visual experience, yet it ghosts like crazy. How in the hell do these game makers not see it, and why is it not always fixed when it's been a known issue for months, if not longer?
 
Unless you're playing a multiplayer title and not interested in taking a latency hit.
I've never seen any evidence that competitive players need every ms of latency to play effectively. In fact, all of the evidence I've seen points to the contrary. Competitive players don't even need 120 FPS or any of that, either. Due to marketing gimmicks, people are quick to confuse "nice to have" with "need to have". Those players don't need their 200+ Hz monitors, for example, but they are paid precisely to make you think that you need them.
 
I'm honestly okay with this. Temporal upscaling has been a boon to gaming as a whole. Sure, native is nice, I won't disagree, but when you can get 90-95% of the IQ of native while achieving 25-30% more frames per second, what's the issue? It's the same as the argument that running High settings in games nets you image quality "just about" as good as Ultra, but with 25% more performance, and no one really argued against that, as in most cases it just made sense.

Now we have the option to run all Ultra settings at an upscaled image that's just about as good as native, giving us both all the eye candy and just about native image quality, and people are complaining all of a sudden? Not everyone has the money to go out and buy the absolute best to run games at 4K native with all settings set to Ultra. Hell, I wish I could be using a 4090, or even a 4080, but both of those cards were well outside my spending range, and that's the case for the overwhelming majority of folks out there who game on PC.
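For a rough sense of where that extra performance comes from, here's a back-of-the-envelope sketch using commonly cited per-axis DLSS preset scale factors. Treat the numbers as assumptions: games can override them, and the actual fps gain depends on how GPU-bound you are.

Code:
# Rough sketch: internal render resolution and pixel savings for common DLSS
# presets at a 3840x2160 output. The per-axis scale factors are assumed,
# commonly cited values; real fps gains depend on how GPU-bound the game is.
OUTPUT = (3840, 2160)
PRESETS = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50, "Ultra Performance": 0.333}

for name, scale in PRESETS.items():
    w, h = int(OUTPUT[0] * scale), int(OUTPUT[1] * scale)
    pixel_ratio = (w * h) / (OUTPUT[0] * OUTPUT[1])
    print(f"{name:18s} -> {w}x{h} internal ({pixel_ratio:.0%} of native pixels)")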

But Jedi did get it, and Starfield will get it, right? You can't honestly expect every game to have DLSS day 1. It's clear that implementing DLSS properly isn't as easy as some modders make it look. To get DLSS working properly the devs need to make sure the game is stable and there are no bugs or artifacts. Assassin's Creed Mirage will only have XeSS support, so do you think Intel paid devs not to include FSR and DLSS? Game devs have only so much time to work on games, so it'll likely just be one upscaler for now. I'm sure Assassin's Creed Mirage will get DLSS at some point, with FSR being a maybe, since XeSS also works on AMD hardware as well as Nvidia.
AC: Mirage is already confirmed to have DLSS and FSR 2 at launch. Source
 
If DLAA actually worked great in every game it was in, then I would use it every time. It infuriates me to turn on a setting I know should give me the best visual experience, yet it ghosts like crazy. How in the hell do these game makers not see it, and why is it not always fixed when it's been a known issue for months, if not longer?
You keep talking about ghosting, but I've never seen it in any game I've used DLSS or DLAA in.
 
I see DLSS as an option to push graphics beyond what is possible by just using native resolution rendering.
Given the choice of worse graphics with no DLSS or better graphics with some upscaling that's for the most part undetectable by the naked eye, who would choose the former?

And for those who have raised their hand defiantly: good news, you already have that. Turn off DLSS, lower graphics settings, turn off RT, voila.
So much negativity in this thread, but this is the correct answer. Upscaling is the future; human senses have their limits, so there is a lot of power being wasted on things we don't even see, especially at higher resolutions. Smart upscaling frees up resources to enhance aspects of the image that are actually noticeable to the human eye and lets us push graphics further than ever before.

It's a bit like how (for example) good audio compression algorithms work: they cut out elements that we cannot even hear to begin with, so the result can be totally transparent and you end up saving a lot of storage and/or bandwidth.

But here it's even better: we are getting something back with that freed-up power that everyone will notice, much better visuals or much better performance/lower watts. (I always use DLSS even in games where I don't need it to hit my refresh rate, because it's still very healthy and it's the best anti-aliasing available in so many titles anyway.)
 
I would normally say they can stuff it.

But the uncomfortable truth is that I want to play at 4K and I am too cheap to buy a graphics card that does that at native resolution.
 
Still waiting for people to realize "It's better than native!" is actually bad. This is something that cellphone camera reviewers realized long ago.
Some day, I hope. The only option that's "better than native" is SSAA.
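For reference, a minimal sketch of what SSAA actually does: render at a higher internal resolution, then filter the extra samples down to the display resolution. This toy version assumes 2x per axis and a simple box filter; real implementations may use different factors and filters.

Code:
# Minimal SSAA (supersampling) sketch: render at 2x per axis, then box-filter
# each 2x2 block of samples down to one display pixel. Illustrative only.
def downsample_2x(image):
    """image: a 2D list of grayscale samples at 2x the target resolution."""
    out_h, out_w = len(image) // 2, len(image[0]) // 2
    result = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            block = (image[2 * y][2 * x] + image[2 * y][2 * x + 1]
                     + image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1])
            row.append(block / 4)  # average the four supersamples
        result.append(row)
    return result

# A hard black/white edge at 2x resolution becomes a smoothed edge at 1x.
hi_res = [[0, 0, 255, 255],
          [0, 0, 255, 255],
          [0, 255, 255, 255],
          [0, 255, 255, 255]]
print(downsample_2x(hi_res))  # [[0.0, 255.0], [127.5, 255.0]]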
 
Still waiting for people to realize "It's better than native!" is actually bad...
I was of this same opinion for years.

DLSS 1.0 reduced image quality. I tried it briefly, then turned it off and never used it again.
With DLSS 2.0 they completely reinvented how it worked. 1.0 needed to be trained for every game; 2.0 worked differently, got rid of that per-game training, and became compatible with all games. DLSS 2 delivered better image quality.
DLSS 3.0 was even better. This is the one where I have seen it myself: it looks better than native resolution. This is not yet true for every game, so you want to try it and see for each game you play. But the fact that DLSS, even upscaled, can already improve image quality in some games means it should be possible to do this for all games eventually. Think about that; it's pretty damn amazing what they have achieved. I only recently changed my mind regarding upscaling after seeing it myself. Note: I have only seen this IQ improvement with DLSS 3; FSR 2 is somewhere between DLSS 1 and DLSS 2. It might be able to achieve the same IQ eventually, but for now it's not something to use when DLSS is an available choice.

***

Fake frames: you (everyone) do realize that every single frame in a video game is a fake frame? When it's all real-time rendered (the exception would be actual video, if that is used for a cutscene), it's 100% fake. And rasterization has to fake the lighting, baking in lightmaps so it approximates reality. Being upset over frame generation is silly when the frames were 100% computer generated to begin with. Of course, if it is adding visible artifacting, it needs some work. I tried it in Jedi Survivor and the image was still perfect. Again, this may not yet be true for all games, but I have no doubt that it will get there.
 
You keep talking about ghosting, but I've never seen it in any game I've used DLSS or DLAA in.

I've mentioned I noticed it in Control with DLSS when I did my initial playthrough. I never noticed it in any other DLSS titles I've played, though (and the ghosting isn't in the unofficial update/mod for Control any more). Plus, games and DLSS (the DLL file, through the game and/or GeForce Experience) get updated, especially by the time I buy and play them, so stuff is probably fixed after launch. But don't let that stop them from complaining 😁


Raster is fake frames, truth is truth 🤷 Normal mapping, height maps, shaders, fake fake fake.

Path tracing is the least fake thing we got ATM
 
I've mentioned I noticed it in Control with DLSS when I did my initial playthrough. I never noticed it in any other DLSS titles I've played, though (and the ghosting isn't in the unofficial update/mod for Control any more). Plus, games and DLSS (the DLL file, through the game and/or GeForce Experience) get updated, especially by the time I buy and play them, so stuff is probably fixed after launch. But don't let that stop them from complaining 😁


Raster is fake frames, truth is truth 🤷 Normal mapping, height maps, shaders, fake fake fake.

Path tracing is the least fake thing we got ATM
Did Control ever get DLSS 2, though? It was one of the first games to use DLSS, back when it was not that good.
 
You keep talking about ghosting, but I've never seen it in any game I've used DLSS or DLAA in.
So you and everyone who liked your comment are just blind? Maybe bother to look at the Digital Foundry videos if for some reason you think I'm lying. But really, people like you and the others who liked your comment are part of the reason issues never get fixed.
 
Still waiting for people to realize "It's better than native!" is actually bad. This is something that cellphone camera reviewers realized long ago.
I think there's a bit of a difference between reality and a synthetic video game image.

Take an anti-aliased, post-processed image, for example: it can look better than native in a video game in a more drastic way than on a DVD (even if that's just some 480p affair). A mathematically perfect straight line with instant, infinite contrast doesn't exist much in real life; in camera footage, anti-aliasing occurs naturally.

Also, a lot of 16-bit (or even higher-precision) calculation can lead to some noise. Even at high resolution, native rendering can lose thin hair or wires, have lines that shimmer, and have some z-fighting over which triangle is the one you see; low-resolution textures don't exist in the case of a camera.

If you think upscaling magically renders each pixel to match native, I cannot help you.

It's impossible to predict when it would or wouldn't, but it can. Imagine you have a Word document in a standard font: you print it, then use a scanner to get a scan of that document. Do you think it is impossible for an AI to magically render the native Word document you originally had before printing it? That's one of the easier cases by now; it took decades of work, but it can do it well most of the time, and the range of cases where it can do it will keep going up.
 
So much negativity in this thread, but this is the correct answer. Upscaling is the future; human senses have their limits, so there is a lot of power being wasted on things we don't even see, especially at higher resolutions. Smart upscaling frees up resources to enhance aspects of the image that are actually noticeable to the human eye and lets us push graphics further than ever before.

It's a bit like how (for example) good audio compression algorithms work: they cut out elements that we cannot even hear to begin with, so the result can be totally transparent and you end up saving a lot of storage and/or bandwidth.

But here it's even better: we are getting something back with that freed-up power that everyone will notice, much better visuals or much better performance/lower watts. (I always use DLSS even in games where I don't need it to hit my refresh rate, because it's still very healthy and it's the best anti-aliasing available in so many titles anyway.)

The technology in and of itself is good as it offers more options to gamers.

The implied concern that not many are expressing outright is that it will be used as an excuse to write poor titles, or, as Nvidia has done with the sub-4090 4000-series lineup, to offer lackluster generation-over-generation GPU improvements, because "we have DLSS anyway, why would we spend the time, effort, or money actually improving things?"

We have definitely already seen this in Nvidia's offerings. All the performance gains they trumpet are while using DLSS. Without DLSS, anything in the 4000 series below the 4090 is kind of a huge turd when compared to the 3000 series.

I also think we have already seen this developer behavior in Starfield. This game runs like absolute shit for the way it looks. It has the graphics of yesteryear, yet the unjustified system requirements of some sort of modern-day Crysis. They could have fixed their shitty engine, or put additional work into it to make it run well, but why do that when they can just say "FSR is required" and build it into all of the presets?

So, what many people are concerned about when we talk about scaling is not that they don't like that users have an additional option (which generally is a GOOD thing) but rather that it will be a huge enabler for industry to keep giving us shovel-ware all while charging increasingly ridiculous sums for it.
 
Be concerned all you want, but it's still the way forward for all 3 GPU manufacturers. It's not stopping or changing course; that, again, should be clear to everyone by now.

Just don't turn it on in the game if you don't like it; that's all you can do at this point.
 
The implied concern that not many are expressing outright is that it will be used as an excuse to write poor titles, or, as Nvidia has done with the sub-4090 4000-series lineup, to offer lackluster generation-over-generation GPU improvements, because "we have DLSS anyway, why would we spend the time, effort, or money actually improving things?"

We have definitely already seen this in Nvidia's offerings. All the performance gains they trumpet are while using DLSS. Without DLSS, anything in the 4000 series below the 4090 is kind of a huge turd when compared to the 3000 series.
Considering how similar the DLSS performance boosts are between Turing/Ampere and Lovelace, I am not sure how much that tracks; it is more frame generation that was used for those claims.

The 4000 series is not that lackluster gen over gen, but rather price over price.

At 1440p, using this https://www.techpowerup.com/review/nvidia-geforce-rtx-4060-ti-16-gb/31.html
3080->4080: +50%
3070ti->4070ti: +47%
3070->4070: +27%

That goes from an excellent gen-on-gen boost down to a still respectable near-30% mark (roughly the "double GPU power every 5 years or so" rate).

It is only when the price gets more reasonable, further down the stack, that the performance gain becomes terrible; the xx80 gen-on-gen improvement was similar to going from a 2080 Ti to a 3090.

AMD's 7800 XT was barely a gen-over-gen improvement in performance, and while there will be some exclusivity for those kinds of tech, the big one will also be available on RDNA 2. The 7900 XT/XTX were a much weaker boost over the 6900 XT than the 4080 was over the 3080.

If you look at the die sizes and memory bandwidth of the Lovelace generation, what it does with what it has to work with is really impressive. I doubt it is a case of not putting time, effort, or money into improving things; it could well have been one of the highest-effort R&D products on earth, it's just that a lot of it went into reducing cost and boosting margins ;)

It is more a case of: why offer a lot in this crazy GPU marketplace, which stopped being one when Ethereum mining on GPUs stopped? But it's not a big deal for them when they cannot even make 80% of the Hopper GPUs they could sell at $30k+ a pop.
 
Just don't turn it on in the game if you don't like it; that's all you can do at this point.
It will not necessarily be an option (see the Unreal Engine 5 games released so far); it almost never is on console.

It is a perfectly fine thing to worry about if you do not like it, because it will be more complex than "turn it off if you don't want it". Dynamic, always-on upscaling could very well exist soon; if 6K/8K monitors and eye tracking become a thing, almost certainly.
 
It is a perfectly fine thing to worry about if you do not like it, because it will be more complex than "turn it off if you don't want it". Dynamic, always-on upscaling could very well exist soon; if 6K/8K monitors and eye tracking become a thing, almost certainly.

If you can't stop it or turn it off, it seems like you're just causing yourself unneeded stress by worrying about it.

I guess then your only option is 'you don't have to like it but get used to it and learn to deal with it' 🤷
 
Be concerned all you want, but it's still the way forward for all 3 GPU manufacturers. It's not stopping or changing course; that, again, should be clear to everyone by now.

In other words "you'll eat it, and you'll like it!"

Don't forget that the market is supposed to work the other way around. Customer demand is supposed to be what drives things. It feels more and more like industry has more power than it is supposed to have. In too many cases we get what they want us to get, not the other way around, the way it is supposed to be.
 
In other words "you'll eat it, and you'll like it!"

Don't forget that the market is supposed to work the other way around. Customer demand is supposed to be what drives things. It feels more and more like industry has more power than it is supposed to have. In too many cases we get what they want us to get, not the other way around, the way it is supposed to be.
Shareholders have a different view on things than your average consumer, don't forget that :D
 
Don't forget that the market is supposed to work the other way around. Customer demand is supposed to be what drives things.
Would we disagree about how much "demand" customers have had for upscaling since 1080p TVs became popular?

I am not sure how much they really want it as such, but back in the day they wanted console games to say they were played in 1080p; today many want them to say they are in 4K on the 4K device they bought. They also want games to look good, they want them to run at at least 30 fps and tend to reward 60, and all of that drives upscaling a lot.

Console game devs always pick and choose their trade-off between visual quality/complexity, performance, and resolution, and the most successful among them and their audiences have been upscaling for the last 15 years. That Unreal Engine 5 and the various internal engines are made with upscaling tech in mind is not a surprise.

When PCs were both quite a bit more powerful than consoles and playing games on monitors with lower resolution, it was less of a need for PC gamers, but maybe that situation will never happen again. Today, how many PC gamers would be OK playing Jedi Survivor at 40 fps on their 6700 XT on their 1440p monitor?

We will see, but upscaling has been extremely popular among customers. Nothing special was achieved commercially by the last-gen games that went for native 4K versus the upscaled ones, and when a 60 fps performance mode is offered versus a 30 fps quality one, I suspect the performance mode is the popular choice.

Market forces made people buy monitors/TVs with much higher resolution than TV/movie encoding bandwidth or gaming platforms were able to drive at maximum subjective quality, and the tech to make up for it advanced.
 
You can also only buy games that run well at 120 fps native.

For as long as upscaling isn't mandatory, yes, don't buy games with it or turn it off - I've said that too, absolutely 👍

In other words "you'll eat it, and you'll like it!"

Don't forget that the market is supposed to work the other way around. Customer demand is supposed to be what drives things. It feels more and more like industry has more power than it is supposed to have. In too many cases we get what they want us to get, not the other way around, the way it is supposed to be.

I'm not AMD, Nvidia or Intel - write them all emails asking them to stop implementing upscaling or to keep it optional and stop buying games with these features in them if you want - I really can't help you at all here 🤷

I like DLSS and XeSS, in that order, so I'm not going to stop buying games with them in it or send off any letters. Nor will FSR prevent me from buying a game even if that's the only upscaling option in it (like the Resident Evil titles); I'll just leave it turned off. If FSR doesn't improve in IQ by the time it becomes mandatory (and if it doesn't get replaced/superseded by one of the other upscalers, or even a new one from AMD), I'll start composing my own email, for whatever good that will or won't do.

Shareholders have a different view on things than your average consumer, don't forget that :D

Plus not all customers get their way - imagine if companies had to issue individual versions of every product for every single variation every single customer demanded 😲🤯😱

Edit: Shit if it were up to me, Nvidia wouldn't have been allowed to cancel local GameStream 🤬
 
Be concerned all you want, but it's still the way forward for all 3 GPU manufacturers. It's not stopping or changing course; that, again, should be clear to everyone by now.

Just don't turn it on in the game if you don't like it; that's all you can do at this point.
Are you just going to continue to be the gatekeeper for this topic or would you mind letting everyone offer their opinion? I won't mind if you don't reply.
 
I envision that one day soon DLSS/FSR/XeSS, or whatever upscaler they want, sits at the end of the render path as a default thing. And instead of graphics settings dramatically changing what lighting, textures, or other graphical features are turned on or off, they change the base render resolution instead.

So Ultra is a native-resolution render with DLAA, or whatever they are calling their 1:1 filter.
And as you turn it down, it lowers the base resolution and upscales, with Low being 50% or something.
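A toy sketch of what that could look like; the preset names and scale values below are hypothetical, purely to illustrate the idea:

Code:
# Hypothetical sketch: graphics presets that only change the internal render
# scale, with the upscaler always sitting at the end of the render path.
# Preset names and values are made up for illustration.
PRESET_RENDER_SCALE = {
    "Ultra": 1.00,   # native render plus a DLAA-style 1:1 pass
    "High": 0.75,
    "Medium": 0.66,
    "Low": 0.50,     # half resolution per axis, upscaled to the output
}

def internal_resolution(output_w, output_h, preset):
    scale = PRESET_RENDER_SCALE[preset]
    return round(output_w * scale), round(output_h * scale)

for preset in PRESET_RENDER_SCALE:
    print(preset, internal_resolution(3840, 2160, preset))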
 
Are you just going to continue to be the gatekeeper for this topic or would you mind letting everyone offer their opinion? I won't mind if you don't reply.

Got to give him at least 10 more pages of saying the exact same thing, because someone disagreed with him.

Not having to upscale an image was one of the main reasons people paid extra for PC gaming hardware; if everything has to be upscaled, then it's really no different from consoles. We'll just have to see how this supposed next-generation hardware sells, since we know the current-gen stuff is not flying off the shelves.
 
Got to give him at least 10 more pages of saying the exact same thing, because someone disagreed with him.

I don't know how else to put the hint: no one here can make upscaling not be the way of the future for you. You're complaining about dishware in the home and garden section. You don't like it, we got it; what else do you want done?

Do you want me to try explaining with colors and shapes? I could give it a go?
 
The Gamers Nexus review of Cyberpunk with DLSS 3.5 Ray Reconstruction shows a lot of ghosting. The amount of ghosting can be downright broken.

https://youtu.be/zZVv6WoUl4Y?si=_AbMe1O2nFhxgUXD


That's a pretty weird interpretation of that video. His conclusion at the end of it was that Ray Reconstruction mostly lowered ghosting and smoothed out artifacts, and was (for the majority of the time) at least as good or better than the input. It just had a few (like one or two?) areas where it increased ghosting. Namely that tower lighting. Occasionally it also got rid of things like dirt and grit because it saw it as noise.
 
Shareholders have a different view on things than your average consumer, don't forget that :D

Well, shareholders should study some basic economic theory.

It is the purpose of any business to satisfy the needs of their customers at the most efficient price possible.

If the price is not efficient, then there has been a breakdown of competition, and if there is no competition, you don't have a free market, so the whole capitalist model falls apart.

When businesses try to "shape" the marketplace to their advantage, it is really sketchy, and probably should be illegal.

Businesses' role is to be neutral observers of consumer desires, and then to try to fill those desires in ways that out-compete their opponents. That is it.
 
My thoughts with regard to the future of GPUs might be controversial. They came together while I was thinking about how he talked about ray reconstruction getting rid of some of the dirt and grit on the chairs and furniture as "noise".


What if, in the future, AI technology replaces a lot of the mundane texturing within the scene? That is to say, it constructs some of reality from meta clues provided by the developers? For instance, "a chair", a "metal chair", or a "round chair in a bar". These are mundane, boring objects that normally still require artists and developers to do things to get them to look right. But with some rather rough texturing, and then context-clue AI technology, they could theoretically be reconstructed on the fly rather than provided at the source. Playing the long game, I wonder if that's the future. That way, GPU power can be reserved for things like lighting calculation and such, and theoretically we still get a better picture with reconstruction of these mundane objects. That way talent time could be saved even further, because they can focus even more on the game and its more fantastical elements. I'm sure all of the stodgy old people on here are going to say "no way, it's not going to happen" because that's the standard response to everything (like it was in my similar post in the Starfield topic)... but it's food for thought.


The future is pretty scary due to AI tech. There are so many ways it can put people out of work. For instance, you need fewer artists if the AI simply reconstructs the scene without needing in-depth textures. The future is also coming faster than anyone likes, whether they try to stay in denial or not.

Still waiting for people to realize "It's better than native!" is actually bad. This is something that cellphone camera reviewers realized long ago.

If you're talking about the S23 Ultra's reconstruction of moon craters (I have one, as a side note; decent phone), that's kind of a disingenuous example. People were annoyed because the feature blatantly misrepresented how good the camera on the S23 Ultra actually was, and because it added details to real life that weren't there. Pictures are meant to preserve what a person sees. Embellishment with falsehoods is not the intention of a camera (unless explicitly desired, i.e. by sticker apps or something). Samsung should have just limited the AI work to smoothing out details, not gone the extra nine yards and added ones that never existed in anyone's eyes.

Video games, on the other hand, already provide nothing but a fully fabricated experience. Acting like the source textures are a "source of truth" for the scene and being a purist is fine; that's your opinion... but a game is a game. Developers and people will be happy as long as it subjectively looks good and plays well. If something subjectively looks "better than native" in a blind A/B test, it's just better, period, because the overall experience is better. Being a purist is fine, but I'll note that purists are also extremely rare in the audiophile space. If it hits the neurons better, monkey will be happier.

I think people need to separate "ray tracing" from "RT", because RT is one implementation of it and is designed specifically to work with RTX, so no wonder it favours Nvidia. Technologies like Lumen look fantastic and AMD hardware is every bit as good as Nvidia's in benchmarks that I've seen with it. RT might be technically "better", but from what I've seen, not enough to matter, especially with the associated performance hit.

Of course, Nvidia won't tell you that, because they want you to pay up for their exclusive technology, and they have done a magnificent marketing job convincing many people that they have to do just that.

On a personal note, as good as ray tracing looks in games, particularly highlighted in screenshots, I haven't personally found it's revolutionized my playing experience the way I hoped it would, and I suspect that's because when I'm playing most games, particularly fast-paced ones, I'm concentrating on playing the objective, and rasterization looks really good as far as immersion goes. Ray tracing is better, sure, but so far I've been satisfied with my play experience either way.

Okay, has anything been made with Lumen? How many games support it and has it been trialed side by side in any of them? Legitimately curious as this might be the first time I'm hearing of it.

I would say if Hogwarts Legacy is anything to go by, RT(/Lumen/whatever) is going to keep getting more and more traction as time goes on. It looked pretty good in that game.
 
Well, shareholders should study some basic economic theory.

It is the purpose of any business to satisfy the needs of their customers at the most efficient price possible.

If the price is not efficient, then there has been a breakdown of competition, and if there is no competition, you don't have a free market, so the whole capitalist model falls apart.

When businesses try to "shape" the marketplace to their advantage, it is really sketchy, and probably should be illegal.

Businesses' role is to be neutral observers of consumer desires, and then to try to fill those desires in ways that out-compete their opponents. That is it.

And in the end, cheap doesn't always win, and sometimes open-source doesn't win; sometimes good quality (edit: and expensive) wins out over cheap and/or open-source.

Nvidia provides good products and services compared to the rest for their customers, both retail and commercial. Some might not agree, but again, that's the consensus, and that's why people buy them.

Competitors need to do more than talk the talk; they need to walk the walk if they want to win.

Customers vote with their wallets. You want to send a business a message? Hit them where it hurts: their pocketbook (edit: or @ them on Twitter).
 
My thoughts with regard to the future of GPUs might be controversial. They came together while I was thinking about how he talked about ray reconstruction getting rid of some of the dirt and grit on the chairs and furniture as "noise".


What if, in the future, AI technology replaces a lot of the mundane texturing within the scene? That is to say, it constructs some of reality from meta clues provided by the developers? For instance, "a chair", a "metal chair", or a "round chair in a bar". These are mundane, boring objects that normally still require artists and developers to do things to get them to look right. But with some rather rough texturing, and then context-clue AI technology, they could theoretically be reconstructed on the fly rather than provided at the source. Playing the long game, I wonder if that's the future. That way, GPU power can be reserved for things like lighting calculation and such, and theoretically we still get a better picture with reconstruction of these mundane objects. That way talent time could be saved even further, because they can focus even more on the game and its more fantastical elements. I'm sure all of the stodgy old people on here are going to say "no way, it's not going to happen" because that's the standard response to everything (like it was in my similar post in the Starfield topic)... but it's food for thought.
Nvidia did a tech demo of a game where all of the visuals were AI generated, back in 2018 or so. It of course looked weird and bad at the time. But it was also amazing.

Now, I'm sure it would look a lot better. But being able to do it on an affordable GPU is probably still 5-10 years away.
 
I wonder what they might try to DLSS accelerate next after particles/volumetrics 🤔

Edit: Grass/vegetation? Would that include things like garbage on the ground in a city? Hair could go along with that 🤔
 
I wonder what they might try to DLSS accelerate next after particles/volumetrics 🤔
IMO, they need to focus on refining what they already have. Fix issues with fine details being lost (rain, some textures, film grain, some transparent effects). And make the Anti-Aliasing more consistent.
They have already improved the response time for certain effects, such as lights flickering.

IMO, part of the problem is that their training dataset actually seems kind of limited. In the Digital Foundry interview, they said that most of their dataset is basically a couple of custom test environments. IMO, they should get some data from actual games in there. Those test environments sounded like they present effects in pretty idealized ways. Actual games are a lot messier than that, but also more creative at the same time.
 
IMO, they should get some data from actual games in there.

Yeah, he explained they dropped that from v1 because creating a general, overall model was more streamlined (no pun intended) and efficient for getting games out with it. I don't know what else they could do besides 'git gud, improve the algo more' if they don't go back to that.

Edit: If they improve mod tools like they said, then they can just sit back and 'git gud' for a while. Seems like the mod community would spread DLSS.
 
Yeah, he explained they dropped that from v1 because creating a general, overall model was more streamlined (no pun intended) and efficient for getting games out with it. I don't know what else they could do besides 'git gud, improve the algo more' if they don't go back to that.

Edit: If they improve mod tools like they said, then they can just sit back and 'git gud' for a while. Seems like the mod community would spread DLSS.
I don't even mean doing specific data for every game. That undoubtedly takes time/money, although I think that some developers with bigger budgets should seriously consider doing it. On console, I have no idea why Microsoft isn't using their Azure cloud to train AI upscaling specifically for every single first-party game. It could give them a big quality advantage over Sony.

I just mean get some data from at least a few actual games and apply that to their otherwise general algorithm used for all games. I.e., they said that DLSS 3.5 has been tested on a substantially larger dataset than ever before, and the first title with the results from that has obvious visual problems. It would seem that only getting data from a couple of idealized test environments is hindering the quality of new features.

And a direct example of that: they went to the trouble of creating a tech demo of a bar which looks a whole lot like Cyberpunk... but isn't Cyberpunk. And I am sure that demo's visuals are tuned to be ideal for DLSS and Ray Reconstruction, as opposed to simply partnering with CDPR, featuring a bar scene from Cyberpunk, and getting DLSS + RR properly tuned for that. A real game.
 