Nvidia Says Native Resolution Gaming is Out, DLSS is Here to Stay

I'm sorry, but this is just depressing. Nearing the end of a hobby for me. What the industry is telling me is that we're an afterthought now, while this tech has better things to do at higher margins. It was a fun 25 years or so.
Agreed, it's more exciting to build older machines at this point. I built an old Zen+ system with a Vega 56 super cheap. I had to clean the parts up and deep clean the old GPU, which looked like it was pulled out of the dirt, but it's quite fun to give these older machines a new lease on life, and surprisingly they do well at 1080p high / 1440p medium settings.
 
I dunno, if you put your brand loyalty, emotions, and sticker shock aside for either or any company just for a second - just look at the tech and only the tech - it's a pretty exciting time in terms of the technological progress and evolution/revolution happening. It's definitely a paradigm shift. It's exciting IMO; even if AMD were the one leading the charge with all this, I'd be just as excited for the tech.

Prices will come down with time as things mature and become old news, and your lowly GT 7030 with no PCIe power connector will be more powerful than a 4090, and such
 
If this is the best Nvidia can do, then the 1080 I had might be the last Nvidia card I ever own. Software gimmicks have always been a crutch for real hardware performance. It also seems clear that Nvidia has put gaming on the back burner - well, until AI hardware sales get oversaturated; then they will remember this market again.
 
How do you extract performance out of hardware without software? Firmware, games, AI-assisted upscaling - all software.
Ask AMD. AKA the brute-force method. Don't get me wrong, DLSS was great during the summer months when I needed to keep my room from reaching uncomfortable temperatures.
 
Interesting.
Improvements to raster pipeline = brute force.
Upscaling = elegant and efficient.
 
They still need software to do that

They go hand in hand; there is no one without the other

It sounds to me like Nvidia wants to keep the large GPU dies and give us the small itty-bitty ones, and expects us to use DLSS to achieve the higher frame rates. They will call it innovation. Who knows at this point. The technology is good, but a lot of us like the raw power without DLSS as well.
 
Interesting.
Improvements to raster pipeline = brute force.
Upscaling = elegant and efficient.

Yes, and when you hit a brick wall because you've been riding the raster train for so long that you'd need a nuclear-powered external desktop GPU to keep the ride going (as previously joked), you maybe start using software to squeeze all the last raster tricks and blood from that stone before setting nukes down on people's desks - but this then leads to entire new SW avenues you can exploit on the HW as well

That's where they say in the video we currently are, whether anyone wants to believe it or not
 
The technology is good, but a lot of us like the raw power without DLSS as well.

So don't turn on DLSS upscaling - that's what I do when only FSR is in the game

If it becomes mandatory, don't buy new games/software that require it, or just change with the times however you see fit - whether that's building old systems, going console, or whatever one chooses and is left available (consoles might not be an escape, as pointed out)

Things aren't going to remain the same, though; that should be clear whether anyone's happy about it or not. This is where Nvidia, Intel, and even AMD are, and where they're having to go.
 
Yes, and when you hit a brick wall because you've been riding the raster train for so long that you'd need a nuclear-powered external desktop GPU to keep the ride going (as previously joked), you maybe start using software to squeeze all the last raster tricks and blood from that stone before setting nukes down on people's desks - but this then leads to entire new SW avenues you can exploit on the HW as well

That's where they say in the video we currently are, whether anyone wants to believe it or not
The hyperbole is fantastic; there has been no brick wall. Generational gains have gone down, but then again, that started happening around the time Nvidia started replacing CUDA cores with Tensor cores in the die space. It would be interesting to see what would happen to generational gains if that die space were reclaimed for CUDA...
 
The hyperbole is fantastic; there has been no brick wall. Generational gains have gone down, but then again, that started happening around the time Nvidia started replacing CUDA cores with Tensor cores in the die space. It would be interesting to see what would happen to generational gains if that die space were reclaimed for CUDA...

Well then again, let's assume it's not needed because it's Nvidia intentionally changing the course of history and the trajectory of tech and gaming here for its own benefit with its own technology, as I assume is what you're getting at here

Why isn't AMD putting out 'pure raster only, no FSR or any upscale features, just pure raster' cinder block GPUs and wiping the floor with Nvidia then?

Why is AMD developing all the same technologies (or as many of them, as equivalently as they can manage)?

Why is Intel following suit too?

Why wouldn't Nvidia, for that matter, just put out energy-guzzling raster cinder blocks without any of this SW/AI they've had to develop?

Nvidia was at the forefront before all this, when it was still just raster only (oh, I know how someone will scream bloody murder at that, I can just hear it now), so they hit the wall first, or saw it coming first, and were thus able to plan and change trajectory before all the laggards down the line saw what was coming and what they also had to do
 
Nvidia was at the forefront before all this, when it was still just raster only (oh, I know how someone will scream bloody murder at that, I can just hear it now), so they hit the wall first, or saw it coming first, and were thus able to plan and change trajectory before all the laggards down the line saw what was coming and what they also had to do
Alternatively, Nvidia created a narrative that fed into their proprietary hardware (e.g. Tensor cores, DLSS), thereby creating software/hardware lock-in benefiting ONLY Nvidia.
Pretty sure AMD saw challenges ahead, which is why they went with GPU chiplets when they did. Likewise, Intel looks to be going this way as well.
 
Alternatively, Nvidia created a narrative that fed into their proprietary hardware (e.g. Tensor cores, DLSS), thereby creating software/hardware lock-in benefiting ONLY Nvidia.
Pretty sure AMD saw challenges ahead, which is why they went with GPU chiplets when they did. Likewise, Intel looks to be going this way as well.

OK, so if Nvidia did it for SW/HW lock-in only benefiting Nvidia - then again, why isn't AMD wiping the floor with Nvidia with some 'raster only, no other type of feature' cinder block?

How have chiplets worked out so far for AMD? You understand Nvidia has chiplet designs and blueprints of their own for retail consumer GPUs, ready to roll out when they feel they've nailed it and/or need it, and probably ready to go next gen (Blackwell) in the DC, right?

Will chiplets make AI and all these SW features obsolete and not needed?

Or is that part of the 'HW and SW will progress together hand in hand' that Nvidia was talking about in the same video?
 
Let's not forget who holds the REAL power here.

TSMC. Without their technology, both Nvidia and AMD would immediately collapse.
TSMC is both a blessing and a scourge, for if only AMD or Nvidia could fabricate their own chips, then not only would supply issues be solved, but they could potentially make increasingly custom, powerful chips in a way they can't today.
 
DLSS 3.5 just means taking the truest form of native and making something else out of it.
 
OK, so if Nvidia did it for SW/HW lock-in only benefiting Nvidia - then again, why isn't AMD wiping the floor with Nvidia with some 'raster only, no other type of feature' cinder block?

How have chiplets worked out so far for AMD? You understand Nvidia has chiplet designs and blueprints of their own for retail consumer GPUs, ready to roll out when they feel they've nailed it and/or need it, and probably ready to go next gen (Blackwell) in the DC, right?

Will chiplets make AI and all these SW features obsolete and not needed?

Or is that part of the 'HW and SW will progress together hand in hand' that Nvidia was talking about in the same video?
AMD's issues are AMD's issues. But interestingly, you do point out that Nvidia is going down the same route with chiplets, which indicates they at least see the same issues that AMD and Intel see. Ironically, neither Intel nor AMD believes that lock-in features (tied to their respective hardware) are necessary to keep progress going, as they have produced open source/standard examples.

And yes, you don't need AI logic on the die to upscale. AMD has already demonstrated that Tensor cores/DLSS are not necessary to produce upscaling with acceptable output.

Think about it: if Nvidia dumped the Tensor cores for more CUDA/raster die space, dumped the software effort on DLSS, and cooperated on making FSR better, you would get MORE raster AND upscaling, benefiting everyone with no lock-in.
 
AMD's issues are AMD's issues. But interestingly, you do point out that Nvidia is going down the same route with chiplets, which indicates they at least see the same issues that AMD and Intel see.

And then AMD and Intel saw the same problems Nvidia did, by developing all these AI/SW features themselves, no?

Ironically, neither Intel nor AMD believes that lock-in features (tied to their respective hardware) are necessary to keep progress going, as they have produced open source/standard examples.

So it's not just about the tech for you; it's also open source ideology. OK. Nvidia's way seems to be working for them, though, if we look at market caps and market share. The only thing better than an ideal is a result.

And yes, you don't need AI logic on the die to upscale. AMD has already demonstrated that Tensor cores/DLSS are not necessary to produce upscaling with acceptable output.

And the general consensus (not what you or I think) on FSR image quality, compared to the alternative of doing it with dedicated AI HW like Intel and Nvidia do, is?

Think about it: if Nvidia dumped the Tensor cores for more CUDA/raster die space, dumped the software effort on DLSS, and cooperated on making FSR better, you would get MORE raster AND upscaling, benefiting everyone with no lock-in.

Why do you need Nvidia to make FSR better? Why can't AMD make FSR better?
 
Chiplets are the answer to a manufacturing and supply problem.
Make one huge chip on a mature node, and among the chips on that wafer you can have one that's 100%, one that's 80%, and a small stack that are useless.

With big chips that's a big delta, but what if you split that big chip into 4? For each chiplet that 20% max gap isn't so big comparatively, and when you put 4 of them together they average out.

So then you just need to work on your binning and now your one chiplet design makes like 40 different CPU SKUs.

Easier and cheaper than designing 10 different CPUs and using binning to launch 40 different SKUs.

What Intel has brought to the table is fab-independent chiplet (tile) packaging. That lets Intel, or anyone who wants to use their packaging, get the best nodes from their chosen producers for their product, and Intel can slap them together using a variety of interposers.

But interposers are just another chip, so you trade one huge complex chip for a bunch of small, moderately complex chips and one huge dumbass of a chip. Then you connect them all up and use some fancy branch prediction and scheduling algorithms to mask the latency of moving instructions and data between them.

Looking forward to playing with the L4 cache that should really reduce that latency.
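For anyone who wants to poke at the yield/binning argument above with actual numbers, here's a minimal sketch using the classic Poisson yield model; the defect density and die areas are made-up illustrative values, not TSMC/AMD/Intel figures:

```python
# Back-of-envelope sketch of the yield/binning argument above.
# The Poisson yield model is standard; the defect density and die areas are
# illustrative assumptions, not TSMC/AMD/Intel figures.
import math

DEFECT_DENSITY = 0.002   # assumed defects per mm^2 (~0.2 per cm^2)
BIG_DIE_AREA   = 600.0   # mm^2, one monolithic die (assumed)
CHIPLET_AREA   = 150.0   # mm^2, the same design split into four chiplets (assumed)

def zero_defect_yield(area_mm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Probability that a die of the given area has zero random defects."""
    return math.exp(-d0 * area_mm2)

big_ok     = zero_defect_yield(BIG_DIE_AREA)
chiplet_ok = zero_defect_yield(CHIPLET_AREA)

print(f"Fully working big dies:        {big_ok:.0%}")        # ~30%
print(f"Fully working chiplets:        {chiplet_ok:.0%}")    # ~74%
print(f"Four perfect chiplets at once: {chiplet_ok**4:.0%}") # ~30% again
# The raw probabilities match, so the win isn't zero-defect yield itself: a defect
# in a chiplet scraps (or bins down) only ~150 mm^2 instead of 600 mm^2, and good
# chiplets from different wafers can be mixed and binned into many SKUs, as above.
```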
 
Chiplets are the answer to a manufacturing and supply problem.
Make one huge chip on a mature node, and among the chips on that wafer you can have one that's 100%, one that's 80%, and a small stack that are useless.

With big chips that's a big delta, but what if you split that big chip into 4? For each chiplet that 20% max gap isn't so big comparatively, and when you put 4 of them together they average out.

So then you just need to work on your binning and now your one chiplet design makes like 40 different CPU SKUs.

Easier and cheaper than designing 10 different CPUs and using binning to launch 40 different SKUs.

What Intel has brought to the table is fab-independent chiplet (tile) packaging. That lets Intel, or anyone who wants to use their packaging, get the best nodes from their chosen producers for their product, and Intel can slap them together using a variety of interposers.

But interposers are just another chip, so you trade one huge complex chip for a bunch of small, moderately complex chips and one huge dumbass of a chip. Then you connect them all up and use some fancy branch prediction and scheduling algorithms to mask the latency of moving instructions and data between them.

Looking forward to playing with the L4 cache that should really reduce that latency.

No, I think AMD shows it's a better, or at the very least good, actual product design principle (edit: or was, for a time) in desktop and especially DC with CPUs

Problem is it doesn't always scale - or scale well - as you can see with the lower-powered monolithic CPUs in the laptop arena, and now with AMD attempting it on GPUs

No one's claiming anything is perfect anywhere here; nothing ever is - some people/places/things just get damn close 😙🤌
 
Think about it: if Nvidia dumped the Tensor cores for more CUDA/raster die space, dumped the software effort on DLSS, and cooperated on making FSR better, you would get MORE raster AND upscaling, benefiting everyone with no lock-in.
The problem here is: look at the amount of real estate the raster cores take up, then look at how much the Tensor cores take up.
In Lovelace it's about 60/40, so there is 1.5x as much space dedicated to the raster cores as to the Tensor ones.

So even if Nvidia stripped out 100% of the Tensor cores, they could add at best 40% more raster, assuming 100% scaling.

The Tensor cores and the software tomfoolery they enable are giving more than a 40% uplift.
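To put that back-of-envelope math in one place, here's a minimal sketch; the 60/40 split is taken purely as the assumption above, and "performance scales linearly with area" is a deliberate best-case simplification:

```python
# Rough upper bound on what reclaiming the Tensor-core area could buy.
# The 60/40 split is the assumption from the post above, not a die-shot measurement,
# and linear performance-per-area scaling is a deliberate best case.
raster_share = 0.60   # assumed fraction of Lovelace die area spent on raster/CUDA
tensor_share = 0.40   # assumed fraction spent on Tensor cores and their support logic

print(f"Die area freed up:             {tensor_share:.0%} of the whole die")
print(f"Relative raster-area increase: {tensor_share / raster_share:.0%}")
# Whether you call that "40% more" (as a share of the whole die) or ~67% more
# (relative to the existing raster blocks), it's a one-time gain, versus the
# upscaling/frame-generation uplift the post argues already exceeds it.
```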

And besides, CUDA runs more on the Tensor cores than on the rest at this point.

The FSR/DLSS divide won't be solved at the hands of either AMD or Nvidia, nor will one abandon their efforts to work on the other's. They are very different approaches to solving the same problem.

Personally I would prefer to see AMD putting out their version of a tensor core than see Nvidia abandoning the DLSS work.

The Tensor cores should get worked into the pipeline for the next generation of graphics API, DX13 or 12.5X or whatever they want to call it.
 
Had CPUs tried to keep brute-forcing their way to bigger and bigger performance on a single thread at ever-higher frequency, they would maybe have hit a brick wall and been slow since 2003-2005; fancy solutions requiring a lot of software to distribute the workload have been the way instead.

GPU die size/power has maybe peaked, and it's a bit of an unknown how much can be gained past TSMC's 1-2 nm level; they could have to rely on die stacking, chiplets, and software to continue to double every 5 years or so.

There is a giant amount of work done for little reason, given how similar frame-to-frame data can be and how little the human eye sees clearly (the fovea covers only about 3 degrees); look at how much an audio signal can be cheated while remaining impossible to distinguish from the original.
 
Personally I would prefer to see AMD putting out their version of a tensor core than see Nvidia abandoning the DLSS work.

(image attachment)


There is a giant amount of work done for little reason


Buddy, if you need to process or create large amounts of data or code quickly, for an important reason or a stupid one, have I got just the thing for you:

(image attachment)
 
https://www.tomshardware.com/news/n...olutio-gaming-thing-of-past-dlss-here-to-stay



In other words, they've given up on improving actual performance.

View: https://youtu.be/Qv9SLtojkTU?t=1933

This video is bookmarked at the chapter titled 'Will Nvidia still focus on performance without DLSS at native resolutions?'

A bunch of stuff is said that will probably make a few people bang their heads against their monitor over and over in rage, so just be sure you want to watch it before you click :)

NVIDIA DLSS 3.5 Ray Reconstruction Review - Better Than Native

NVIDIA DLSS 3.5 launches today, supporting GeForce 20 and newer. The new algorithm improves the look of ray traced lighting, reflections, shadows and more. In our DLSS 3.5 Ray Reconstruction review we test image quality, but we also discovered that VRAM usage actually goes down and performance goes up.
 
The problem here is: look at the amount of real estate the raster cores take up, then look at how much the Tensor cores take up.
In Lovelace it's about 60/40, so there is 1.5x as much space dedicated to the raster cores as to the Tensor ones.

So even if Nvidia stripped out 100% of the Tensor cores, they could add at best 40% more raster, assuming 100% scaling.
60/40 would be giant. Turing and Ampere were big dies, obviously, but RT and Tensor combined were more like 8-15% of the die.
 
DLSS will always be a compromise.

Not going to lie, Nvidia's DLAA filter is pretty awesome, and I would like to see more games offer it at native resolution, as it is by far the best AA I have used from a balance of performance, effectiveness at removing aliasing, and sharpness perspective.

Scaling the resolution up - however - is always a compromise. I don't for a second buy their claim that DLSS looks better than native, because - well - I have eyes. I have seen what it looks like in game.

This can go one of two ways though:

A) Developers can go the way of Bethesda and make games like Starfield, which look mediocre but still demand outrageous system resources, and just put scaling on them as a band-aid,

--OR--

B) They could actually make games that have higher polygon counts and more RT and really look quite awesome, but where at the time of launch the hardware to run them natively doesn't exist yet. It would be sort of as if, when Crysis launched, you could run the outrageously heavy game with scaling on lesser hardware. It wouldn't be as good as native, but you would understand why, because the game really looks great, and you would be willing to put up with it.


Unfortunately, doing a shit job and using scaling as a band-aid seems more up most developers' alley, so that is probably what we'll get.
Unfortunately my crystal ball says option A) as well. I feel like this is gonna be the PS3/Xbox 360 generation all over again, where we get shit ports and are told to be glad we even got anything at all...

I didn't buy a 3080 just to use DLSS, XeSS, etc...
 
I think eventually, even further down the line, the same problem that necessitated AI/ML SW and HW features will still compound over time even with those features, and that will necessitate everything moving to cloud gaming, because you wouldn't be able to physically house the HW needed, let alone power it

But maybe by then we'll have reached the singularity, so we'll be in the cloud too and latency won't matter - that's one way to skin a cat 😁
 
So Jensen first says Moore's Law is dead to justify the pricing of the then newly released 40-series, then goes on stage a few months ago saying Moore's Law is alive and well, and now we are back to this guy saying it is dead again.

Here's a thought, Nvidia... pick one.
It's running at 2X when talking to the press and shareholders, sure. But to us gamers? It's dead. It's obvious they want to fleece the core market that made them what they are today on their way out to AI and supercomputing - Jensen wants that fifth-yacht money, and that boat ain't gonna pay for itself...
 
https://www.tomshardware.com/news/n...olutio-gaming-thing-of-past-dlss-here-to-stay



In other words, they've given up on improving actual performance.
I said this 2 months ago:

The people who worried that, instead of upscaling just being used to get better performance on older hardware, it would instead be used to push the sale of more expensive GPUs to get the 'real visuals' - or worse, that devs would get lazy and let upscaling cover for badly optimized games - might be correct.

https://hardforum.com/threads/remnant-2-devs-designed-game-with-upscaling-in-mind.2029556/
 
Scaling is obviously always a compromise, like LOD is almost always a compromise - pre-baked shadows and occlusion maps, simpler physics/IK and skeletons, non-ZBrush-level meshes, re-used textures, non-deformable entities, imperfect collision boxes, not having really fine voxel-type light/physics simulation for everything.

Some performance boosts, like some occlusion, come free, but usually it is a compromise between performance and quality
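As a toy illustration of the LOD compromise mentioned above - the distance thresholds and mesh names here are entirely made up:

```python
# Minimal distance-based LOD pick: trade triangle count (quality) for performance.
# Thresholds and mesh names are invented for illustration only.
def pick_lod(distance_m: float) -> str:
    if distance_m < 10.0:
        return "mesh_high"    # full detail, most triangles, most expensive
    if distance_m < 40.0:
        return "mesh_medium"  # cheaper, small visual compromise
    return "mesh_low"         # cheapest, visibly simplified, but it's far away

for d in (5.0, 25.0, 120.0):
    print(f"{d:>6.1f} m -> {pick_lod(d)}")
```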
 
60/40 would be giant. Turing and Ampere were big dies, obviously, but RT and Tensor combined were more like 8-15% of the die.
That was true 5 years ago for the 2000 series, but now we are probably closer to 30% of the space being the cores themselves and another 10% or so being the logic needed to feed them.
But if you are correct and the Tensor cores only account for 8-15% of the die, then would adding 15% more raster offset the losses in other areas? If anything, that makes my argument stronger.
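Since the disagreement is really over the actual share, here's the same best-case arithmetic swept over the figures thrown around in this thread (none of them are die-shot measurements):

```python
# Best-case raster gain vs. assumed Tensor/RT die-area share, assuming performance
# scales linearly with area. The shares are just the numbers argued in this thread.
for tensor_share in (0.08, 0.15, 0.30, 0.40):
    raster_share = 1.0 - tensor_share
    max_uplift = tensor_share / raster_share
    print(f"Tensor/RT share {tensor_share:.0%} -> at most ~{max_uplift:.0%} more raster")
# At 8-15% the ceiling is only ~9-18%, which is the "that makes my argument
# stronger" point above; at 40% it would be ~67%.
```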
 
That was true 5 years ago for the 2000 series
Those drawings are schematics, not the actual design at scale; the RT cores are not really off on their own, they are closely embedded (the yellow rectangle):

(image attachment)


But yes, it makes your point stronger: look how small Lovelace silicon tends to be versus Intel/AMD. If 40% were spent on Tensor cores, then after cache, IO, streaming, and RT there would be little left for raster, and them keeping up would be quite exceptional.
 
Scaling is obviously always a compromise, like LOD is almost always a compromise - pre-baked shadows and occlusion maps, simpler physics/IK and skeletons, non-ZBrush-level meshes, re-used textures, non-deformable entities, imperfect collision boxes, not having really fine voxel-type light/physics simulation for everything.

Some performance boosts, like some occlusion, come free, but usually it is a compromise between performance and quality

Exactly, it's not the devil - it's just another, new, component of the same thing people claim they're fighting for - feature/method/graphical option > game
 
It's running at 2X when talking to the press and shareholders, sure. But to us gamers? It's dead. It's obvious they want to fleece the core market that made them what they are today on their way out to AI and supercomputing - Jensen wants that fifth-yacht money, and that boat ain't gonna pay for itself...
Yep, for sure. Nvidia is now an AI company that just happens to have some gaming cards.
 
I've been PC gaming since the mid-'90s, and while the past details are fuzzy (or maybe I did not care back then), was there always this much uproar over new graphical features being added to games? Why is everyone clamoring for rasterization when Nvidia is actually trying to make it so that doesn't even matter AND you get better graphics? I find it all confusing; I'm all about new tech that can make games look much better.
 
Those drawings are schematics, not the actual design at scale; the RT cores are not really off on their own, they are closely embedded (the yellow rectangle):

View attachment 600276

But yes, it makes your point stronger: look how small Lovelace silicon tends to be versus Intel/AMD. If 40% were spent on Tensor cores, then after cache, IO, streaming, and RT there would be little left for raster, and them keeping up would be quite exceptional.
From that same article where you lifted that screenshot:
(image attachment)


So look at the supposed scaling of the tensor cores compared to the raster ones to the left.
 