NVIDIA DLSS 3 “Frame Generation” Lock Reportedly Bypassed

polonyc2

Fully [H]
Joined
Oct 25, 2004
Messages
25,779
A feature that Nvidia locked behind new hardware with no real reason other than greed? Must be a Wednesday.
It's FUD. Nothing was actually "bypassed" because nothing was actually "locked" to begin with. Reddit dude with a beta Cyberpunk build played around with a new video setting and made a thread. In fairness to him, it was WCCF that tried to stir up a controversy over it.

The reality is Nvidia never said Frame Gen would be locked to 40-series, or that 20-series & 30-series would be locked out. They only said it wouldn't work as well on older GPUs, since the OFA (Optical Flow Accelerator) hardware isn't as advanced and there would probably be stuttering and performance issues.
 
Last edited:
It's FUD. Nothing was actually "bypassed" because nothing was actually "locked" to begin with.

WCCF was always bad, but now they don't even pretend to verify anymore. They verify nothing; they just copy-paste whatever stupidity someone says on Reddit or Twitter.
*ahem*

https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/

" DLSS 3 is powered by the new fourth-generation Tensor Cores and Optical Flow Accelerator of the NVIDIA Ada Lovelace architecture, which powers GeForce RTX 40 Series graphics cards."

Where can I get me an RTX 2000 card with 4th-generation Tensor cores?

"For CPU-limited games, such as those that are physics-heavy or involve large worlds, DLSS 3 allows GeForce RTX 40 Series graphics cards to render at up to twice the frame rate over what the CPU is able to compute."

DLSS 3 technology is supported on GeForce RTX 40 Series GPUs.
 
*ahem*

https://www.nvidia.com/en-us/geforce/news/dlss3-ai-powered-neural-graphics-innovations/

Where can I get me an RTX 2000 card with 4th-generation Tensor cores?

You may have missed the point. At issue is the implication that Nvidia created an arbitrary software lock to lock previous-gen cards out of running a feature. Because the internet loves a good software lock whodunnit.

They'd have no business reason to do so. Just like Nvidia opened RTX up to the 10-series and said "it'll work, just not well", they've already said the same about Frame Gen on older GPUs: "it'll work, just not that well".
 
Last edited:
You may have missed the point. At issue is the implication that Nvidia created an arbitrary software lock to lock previous-gen cards out.

I'm shocked!
I never would have expected Nvidia of all companies to do something like this...
 
I'm shocked!
I never would have expected Nvidia of all companies to do something like this...
For sure. Nvidia is definitely more than capable of being petty, shitty and conniving, but in this particular instance it's FUD. Nvidia wants brand lock-in, not generational lock-in. They want as many people running RTX-specific features as possible.
 
I think Nvidia people did say that Turing and up would be capable of it; the claim was just that the results would be bad. The real news would be if the results were actually good on the older 20- and 30-series cards, because that would make it a purely artificial lock rather than a lock meant to protect the DLSS brand name.
 
One of the key features of DLSS 3 is the use and integration of Optical Flow analysis. On the 2000 and 3000 series cards this is a CUDA-accelerated software function; on the 4000 series it has a dedicated hardware accelerator, so it's not exactly arbitrarily locked. Optical Flow analysis was part of the DLSS 2 implementation but was not forced on, because in CPU- or GPU-bound scenarios it can cause a performance decrease, or at least it did in the early stages of DLSS 2's release. Nvidia has very likely been making changes and upgrades to their Optical Flow algorithms over the past two years, so the CUDA-accelerated software implementation could well be fast enough now that it no longer causes the hiccups it did in the past. And it's not like Nvidia is abandoning DLSS 2; they have made a number of public statements that they intend to roll improvements down to the DLSS 2 versions in updates.

I very much suspect they have just delayed those updates being rolled down to DLSS 2 to showcase the current difference and pad some benchmarks to make DLSS 3 look far better in comparison.
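To make the optical-flow angle concrete, here's a rough toy sketch of what frame interpolation amounts to: estimate per-pixel motion between two rendered frames, then warp one of them halfway along that motion to fabricate an in-between frame. This is nowhere near Nvidia's actual pipeline (no AI network, no engine motion vectors, just plain Farneback flow from OpenCV), purely an illustration of where the heavy lifting is and why a dedicated accelerator matters.

Code:
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Fabricate a rough in-between frame from two consecutive frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel motion estimate. This is the expensive step; DLSS 3
    # offloads it to the Optical Flow Accelerator instead of the CUDA cores.
    # Args after None: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Backward-warp frame A halfway along the flow: each output pixel is
    # sampled from roughly where it was "half a frame ago". Crude, but that's the idea.
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

The real thing also has to deal with disocclusions, HUD elements and so on, which is where the AI network comes in, but the flow estimate is the part that needs to be both fast and accurate.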
 
"DLSS 3 technology is supported on GeForce RTX 40 Series GPUs."
"Supported" can mean more than one thing.

It can be a way to communicate what the hardware can or cannot do (or support).

But it can also mean that Nvidia supports its use on 40-series hardware, but does not support it on older hardware.

In other words "it might work, but we aren't going to guarantee good results".

I tend to believe they intended the latter based on them saying "supported on" instead of "supported by", but maybe I am reading too much into the preposition on this one.

I'm always a little skeptical of Nvidia's intents because they have a long history of market manipulative behavior, but in this case it may just be a matter of verbiage.
 
Last edited:
Nvidia stated that Frame Generation couldn't be done on non-40 series GPUs: "DLSS Frame Generation uses RTX 40 Series high-speed Optical Flow Accelerator to calculate the motion flow that is used for the AI network, then executes the network on 4th Generation Tensor Cores. Support for previous GPU architectures would require further innovation and optimization for the optical flow algorithm and AI model"
 
Nvidia stated that Frame Generation couldn't be done on non-40 series GPUs
The special dedicated piece of hardware acceleration it uses is what's not on non-40 series GPUs. That's different from frame generation itself, which is more general and should work to some degree on the previous generations; the claim is that it hurt more than it helped too often to make it available on those cards.

The Optical Flow SDK is available on GitHub and has been usable since at least Turing, I think.

On Twitter, Nvidia people were quite clear that it would be possible to make it work on 2xxx-3xxx models, just not well enough, per their claim. That's the only interesting unknown here: does it actually work well enough on the previous gen that it was removed just to push the new models, or not?
 
Support for previous GPU architectures would require further innovation and optimization for the optical flow algorithm and AI model
Optical Flow is and has been in the CUDA toolset for five years now. Until the release of the 4000 series it was software that was hardware-accelerated by the GPU, but in a situation where you are already GPU- or CPU-bound it would lead to noticeable visual glitches and decreased or erratic performance, so it hampered game performance overall instead of improving it. They improve the algorithms for it frequently and offer updates to the toolkit on a regular basis, and there are non-gaming applications where it really comes in handy, but for gaming on those generations it is ultimately too slow to keep up and detracts from the gaming experience in its current form. Algorithm updates could very well improve things and make it usable in a gaming environment, but that's one of those future promises it's best not to make and just surprise us with at a later date.
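Rough napkin math on what "too slow to keep up" means in practice (every millisecond figure below is invented for illustration, not a measurement):

Code:
# Illustrative frame-budget arithmetic; all numbers here are assumptions.
target_fps = 120
frame_budget_ms = 1000 / target_fps          # ~8.3 ms per displayed frame

render_ms = 6.0       # assumed cost of rendering a "real" frame
flow_sw_ms = 4.0      # assumed optical flow cost when run on the CUDA cores
flow_hw_ms = 0.5      # assumed cost on a dedicated Optical Flow Accelerator

# On the CUDA cores the flow pass competes with rendering for the same silicon,
# so the costs roughly add; a dedicated unit can overlap with rendering instead.
software_path = render_ms + flow_sw_ms        # 10.0 ms -> blows the budget
hardware_path = max(render_ms, flow_hw_ms)    # 6.0 ms  -> fits comfortably

print(f"budget {frame_budget_ms:.1f} ms, "
      f"software {software_path:.1f} ms, hardware {hardware_path:.1f} ms")

The point isn't the specific numbers; it's that any flow cost paid on the shared CUDA cores comes straight out of the frame budget you were trying to improve.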
 
Lock it to the 40-series and there's an uproar. Enable it on older gen hardware, where it looks/performs bad, and people will shit on DLSS 3 as a whole. Damned if they do, damned if they don't.
There wasn't much of an uproar when Nvidia unlocked dx12 raytracing on 10 series. People realised how much compute horsepower these RTX cards were streamlining.
 
There wasn't much of an uproar when Nvidia unlocked dx12 raytracing on 10 series. People realised how much compute horsepower these RTX cards were streamlining.
DLSS has been a controversial feature from the get-go. Remember the "vaseline smeared on the screen" comments? The dismissive attitude towards tensor cores, and how upscaling is "cheating"? It wasn't until FSR came out that the perception of upscaling changed with these same people.

Much like how it was with hardware G-Sync, there are folks who want to do away with those to give way to more open technologies, particularly AMD's initiatives, and who will find any way to discredit these innovations.

Up until the consoles got raytracing working, there was very vocal skepticism around raytracing, that it isn't worth it and that the industry should just stick with rasterization.

I guess that's how the community is and the companies have to deal with it.
 
Last edited:
DLSS has been a controversial feature from the get-go. Remember the "vaseline smeared on the screen" comments? The dismissive attitude towards tensor cores, and how upscaling is "cheating"? It wasn't until FSR came out that the perception of upscaling changed with these same people.

Much like how it was with hardware G-Sync, there are folks who want to do away with those to give way to more open technologies, particularly AMD's initiatives, and who will find any way to discredit these innovations.

Up until the consoles got raytracing working, there was very vocal skepticism around raytracing, that it isn't worth it and that the industry should just stick with rasterization.

I guess that's how the community is and the companies have to deal with it.
Luddites and sheep are the reality of this world. Such is life.
 
There wasn't much of an uproar when Nvidia unlocked dx12 raytracing on 10 series. People realised how much compute horsepower these RTX cards were streamlining.
Good point. If it's true that it just does not work well enough on the older gen, then once the tech is established they could unlock it just to show the actual gen-on-gen boost.
 
The reality is Nvidia never said Frame Gen would be locked to 40-series, or that 20-series & 30-series would be locked out.
It seems like that is what they are saying here:

Over 216 games and apps have benefited from DLSS, accelerating performance by up to 2X. And now, DLSS 3 is available, multiplying performance by up to 4X on GeForce RTX 40 Series graphics cards. If you own a previous-generation GeForce RTX graphics card or laptop, you can use DLSS 2 Super Resolution in each DLSS 3 game.

https://www.nvidia.com/en-us/geforc...&ranSiteID=TnL5HPStwNw-n2JW2kcObsG.hv4BeHxl8Q

Also this:

[attached screenshot: 1665637249270.png]
 
They improve the algorithms for it frequently and offer updates to the toolkit on a regular basis, and there are non-gaming applications where it really comes in handy, but for gaming on those generations it is ultimately too slow to keep up and detracts from the gaming experience in its current form.
Could you link the testing results?
 
Much like how it was with hardware G-Sync, there are folks who want to do away with those to give way to more open technologies, particularly AMD's initiatives, and who will find any way to discredit these innovations.
Good. I'm glad G-Sync was ultimately replaced by hardware-agnostic solutions.

I hope the same happens to DLSS.
 
Up until the consoles got raytracing working, there was very vocal skepticism around raytracing, that it isn't worth it and that the industry should just stick with rasterization.
No, the main point was that raytracing was at the time a useless feature to base purchasing decisions on.
 
No, the main point was that raytracing was at the time a useless feature to base purchasing decisions on.
Chicken and egg scenario. It has to start somewhere and if the market doesn't pick it up, then it fades away to irrelevance. Just like those 3D TVs.
 
It was clear to me at launch that the 3000 and 2000 series would not have access to it, and indeed they do not seem to have it without some tricks. It was also clear (not from Nvidia's marketing website, but from Nvidia answering questions) that it was possible for them to run it, and that not letting them run it was a choice.
 
I hope the same happens to DLSS.

There's room for all three: DLSS, FSR, XeSS. Why take away that choice? AFAIK, as long as the game has TAA, devs can implement all three. There's the Streamline initiative for this, with NV and Intel on board, but only AMD wants out. I've also noticed that DLSS games can have FSR (and recently XeSS) as well, but not the other way around with AMD-sponsored titles.
 
Chicken and egg scenario. It has to start somewhere and if the market doesn't pick it up, then it fades away to irrelevance.
People are free to act as a charity, but the majority won't. Nvidia designed good cards regardless of RT, and people gave them R&D money by purchasing them. A shitty product doesn't deserve that just for a promise of future benefits.
 
People are free to act as a charity, but the majority won't. Nvidia designed good cards regardless of RT, and people gave them R&D money by purchasing them. A shitty product doesn't deserve that just for a promise of future benefits.
Which is why I had a concluding statement in my earlier post that it's up to the companies how to deal with it. Same with Intel and their Arc cards. It's their responsibility to be in this for the long game and push their products and features to the market. Nvidia seems to have had some success with RTX adoption because they persevered despite the cold reception of Turing. That, and working with MS and the Khronos Group to get their tech into DX12 and Vulkan respectively.
 
Could you link the testing results?
I've never seen it used on those outside their server lineup for animation rendering and machine-learning training suites. So I don't really have benchmarks for those suites. I can say anecdotally though it runs the jobs faster on the Jetson Nanos we have in the labs than it used to.
 
Last edited:
I've never seen it used on those outside their server lineup for animation rendering and machine-learning training suites. So I don't really have benchmarks for those suites. I can say anecdotally though it runs the jobs faster on the Jetson Nanos we have in the labs than it used to.
Maybe you shouldn't spread Nvidia's PR as fact until we have some results.
 
DLSS has been a controversial feature from the get-go. Remember the "vaseline smeared on the screen" comments? The dismissive attitude towards tensor cores, and how upscaling is "cheating"? It wasn't until FSR came out that the perception of upscaling changed with these same people.

Much like how it was with hardware G-Sync, there are folks who want to do away with those to give way to more open technologies, particularly AMD's initiatives, and who will find any way to discredit these innovations.

Up until the consoles got raytracing working, there was very vocal skepticism around raytracing, that it isn't worth it and that the industry should just stick with rasterization.

I guess that's how the community is and the companies have to deal with it.

FSR didn't change people's opinion on the tech. It meshed with what people would actually use the tech for.

Nvidia has been trying to sell it as a visual enhancement of some sort that is desirable to have on HIGH end hardware. Which is and always will be stupid. If I'm spending thousands of dollars on a GPU, I don't want it to have to cheat the resolution to be usable.

FSR isn't marketed as a technology for the upper-class, brand-newest card only. It's marketed as an upscaling tech which WILL lower your visual quality slightly (which at a high quality setting is probably acceptable to most people) but will give you more FPS. It isn't marketed as a feature to turn on with your $1000 flagship AMD card; it's marketed as a feature to turn on when you're on a mid-range or aging flagship to achieve decent FPS in newer titles.

FSR destroys DLSS not because it has higher IQ... or provides more FPS. It destroys DLSS because it works on EVERYTHING in EVERYTHING. FSR is GPU anti-aging cream. DLSS is fake AI screen-vaseline... and now I guess it's also making entire frames up. Doesn't sound desirable to me.
DLSS is one of the handful of half-baked ideas Nvidia whipped up to try and sell Tensor core AI hardware in consumer GPUs.

It's all in the implementation and the marketing. Upscaling is great if you're trying to squeeze another year out of your 3-4 year old GPU... or your monitor dies and your previously-fine 1080 or 1440 setup is now struggling at 4K. Upscaling as a selling feature for a super flagship... to the point the CEO only talks about it at a product launch, is STUPID.
 
Maybe you shouldn't spread Nvidia's PR as fact until we have some results.
It's not really PR; it's a 3-year-old library that they issue updates to every couple of months and that is very widely used. It's available in CUDA and OpenCV:
https://forums.developer.nvidia.com...sualization/video-processing-optical-flow/189
https://github.com/opencv/opencv_contrib
You can read through its contents if you like.
There have been semi-regular performance enhancements and improvements to the library over its lifetime. It's based on the KITTI libraries they launched in 2015, was renamed Optical Flow for the 1.0 SDK in 2019, and with the 4000 series cards the library was updated to 4.0. There are years of usage metrics to measure from.
I don't know if the improvements in 4.0 will make it usable in a gaming environment on the 2000 and 3000 series parts. It would be nice, but I wouldn't hold my breath.
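If you want to poke at it yourself, here's a minimal probe sketch, assuming a CUDA-enabled opencv-contrib build; the exact attribute names the Python bindings expose can vary by build, so treat them as assumptions and check your own install:

Code:
import cv2

# Report what this OpenCV build can see; safe to run even without CUDA hardware.
print("OpenCV version:", cv2.__version__)
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())

# Probe for the optical flow factory functions from opencv_contrib's
# cudaoptflow module (names assumed from how the bindings are generated).
for name in ("NvidiaOpticalFlow_1_0_create",
             "NvidiaOpticalFlow_2_0_create",
             "FarnebackOpticalFlow_create"):
    print(name, "->", hasattr(cv2.cuda, name))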
 
Last edited:
FSR didn't change people's opinion on the tech. It meshed with what people would actually use the tech for.

Nvidia has been trying to sell it as a visual enhancement of some sort that is desirable to have on HIGH end hardware. Which is and always will be stupid. If I'm spending thousands of dollars on a GPU, I don't want it to have to cheat the resolution to be usable.

FSR isn't marketed as a technology for the upper-class, brand-newest card only. It's marketed as an upscaling tech which WILL lower your visual quality slightly (which at a high quality setting is probably acceptable to most people) but will give you more FPS. It isn't marketed as a feature to turn on with your $1000 flagship AMD card; it's marketed as a feature to turn on when you're on a mid-range or aging flagship to achieve decent FPS in newer titles.

FSR destroys DLSS not because it has higher IQ... or provides more FPS. It destroys DLSS because it works on EVERYTHING in EVERYTHING. FSR is GPU anti-aging cream. DLSS is fake AI screen-vaseline... and now I guess it's also making entire frames up. Doesn't sound desirable to me.
DLSS is one of the handful of half-baked ideas Nvidia whipped up to try and sell Tensor core AI hardware in consumer GPUs.

It's all in the implementation and the marketing. Upscaling is great if you're trying to squeeze another year out of your 3-4 year old GPU... or your monitor dies and your previously-fine 1080 or 1440 setup is now struggling at 4K. Upscaling as a selling feature for a super flagship... to the point the CEO only talks about it at a product launch, is STUPID.
Yeah, I think what people may not realize is that FSR is basically using and improving on decades of university research into general upscaling algorithms. It's basically taking the ye-olde "turn 1 pixel into 4 pixels" approach and accelerating it on a GPU in "real time". Obviously I'm oversimplifying the actual algorithms and complexity here, but this is why FSR "just works" everywhere.
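A crude sketch of what I mean by the ye-olde approach (this is just plain Lanczos resampling plus a sharpen pass, nothing like FSR's actual edge-adaptive reconstruction, purely to show the "render low, resample up" shape of it):

Code:
import cv2

def naive_upscale_2x(low_res_bgr):
    """Turn every rendered pixel into four output pixels, then sharpen a bit."""
    h, w = low_res_bgr.shape[:2]
    up = cv2.resize(low_res_bgr, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4)
    # Unsharp mask to claw back some of the detail the resampling smeared out.
    blurred = cv2.GaussianBlur(up, (0, 0), 1.0)
    return cv2.addWeighted(up, 1.5, blurred, -0.5, 0)

Because nothing in that pipeline depends on motion vectors, training data, or special hardware, it runs on anything with shader throughput to spare, which is the whole appeal.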

Nvidia's DLSS requires that an ML model is pre-trained by the game developer; basically (and this is really oversimplified) you're pre-generating a set of patterns and situations which get processed at runtime based on your current frame, so you can "predict" possible outcomes and pull them from your model. Obviously it's much more complicated and smarter than that, but fundamentally the tensor cores are just "generating" potential frames and then selecting the best fit. This is why you get some strange glitching around reflections in water and in non-deterministic scenes (or the devs just didn't create and train a proper model).

I personally think Nvidia went this route because they have tensor cores to sell (these are much more lucrative in the enterprise AI compute markets) and proper gfx cores are becoming less and less important. I also think it's overall stupid to try to merge AI compute and gaming product lines like this, but whatever - I'm a lowly engineer and not a CEO, so what do I know.
 
It's what happens when you push proprietary solutions as the only way. You're gonna burn someone.
Not so much proprietary as exclusive. Nvidia does offer support for their optical flow technology in OpenCV: https://github.com/opencv/opencv_contrib
AMD simply doesn't have the resources to dedicate to these sorts of extra features.
But the reality is there isn't anything out there that does what CUDA does; alternatives are being developed, but they're still years away from catching up in terms of features, and they're highly fragmented.
 
"DLSS 3 doesn't typically look as good as DLSS 2 quality or even performance mode"
Not much else to say really.


One is just upscaling under a brand name, the other is interpolation. Neither of them can look as good as native unless the native's AA is shit, in which case DLSS 2 works as a decent AA replacement.

Yes, they are a little more complex than that, but under the hood it's just iteration.
 