AMD Could Tease DLSS 3-rivaling FSR 3.0 at GDC 2023

Going to stick my neck out and ask what may seem like an obvious question. What is the difference between FSR3 Frame Generation and AMD Fluid Motion Frames?
FSR3 frame generation inserts a single generated frame between each pair of rendered ones using in-engine motion vectors. This results in better IQ and generally won't affect the UI much. It has to be implemented by the developer, and is AMD's competitor to DLSS 3 frame generation.

AFMF is AMD's driver-based frame generation that works only on DX11/12 games but needs no developer intervention. How many frames it generates varies with camera movement, and it has lower image quality.
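If it helps to picture the difference, here is a rough sketch in Python of the two approaches. It is purely illustrative, not AMD's actual code; the function names, the half-frame reprojection, and the naive block matching are all my own assumptions, just to show why engine motion vectors (FSR 3 FG) give cleaner results than driver-side estimation (AFMF).

```python
import numpy as np

def interpolate_with_motion_vectors(prev_frame, next_frame, motion_vectors):
    """FSR3-style idea: the engine supplies per-pixel motion vectors, so the
    in-between frame can reproject pixels half a step along known motion."""
    h, w, _ = prev_frame.shape
    out = np.empty_like(prev_frame)
    for y in range(h):
        for x in range(w):
            dx, dy = motion_vectors[y, x]              # pixels moved between frames
            sx = int(np.clip(x - dx * 0.5, 0, w - 1))  # step back half a frame
            sy = int(np.clip(y - dy * 0.5, 0, h - 1))
            out[y, x] = next_frame[sy, sx]
    return out

def estimate_motion_block_matching(prev_frame, next_frame, block=8, search=4):
    """AFMF-style idea: no engine data, so motion must be *estimated* by
    comparing the two finished frames (here, naive block matching). Estimation
    errors are a big part of why driver-level frame gen has lower IQ."""
    h, w, _ = prev_frame.shape
    flow = np.zeros((h // block, w // block, 2))
    for by in range(0, h - block, block):
        for bx in range(0, w - block, block):
            ref = prev_frame[by:by + block, bx:bx + block].astype(float)
            best, best_err = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if 0 <= y0 <= h - block and 0 <= x0 <= w - block:
                        cand = next_frame[y0:y0 + block, x0:x0 + block].astype(float)
                        err = np.abs(ref - cand).sum()
                        if err < best_err:
                            best_err, best = err, (dx, dy)
            flow[by // block, bx // block] = best
    return flow
```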
 
Has anyone compared AFMF to FSR 3 in Forspoken or Immortals? With AFMF you don't need to use FSR, where significant issues persist.
 
FSR3 frame generation inserts a single generated frame between each pair of rendered ones using in-engine motion vectors. This results in better IQ and generally won't affect the UI much. It has to be implemented by the developer, and is AMD's competitor to DLSS 3 frame generation.

AFMF is AMD's driver-based frame generation that works only on DX11/12 games but needs no developer intervention. How many frames it generates varies with camera movement, and it has lower image quality.
Ah, so AFMF is application-agnostic, for the most part, and purely optical flow. FSR3+FG has more data (motion vectors AND optical flow?), but its implementation depends on the individual game's devs. From what I've seen of FSR3+FG, it hasn't looked superior to AFMF in terms of image quality, which isn't what I expected.
 
I am able to get FreeSync to work with Metro Exodus Enhanced using AFMF, at least in the benchmark so far. When launching the benchmark, FreeSync is off even if it's turned on, but if I flip the LG C2 42" FreeSync switch off and back on during the benchmark, it will then use FreeSync with frame gen. As a note, the TV has two settings for variable refresh rate, VRR and GeForce, plus a separate FreeSync Premium Pro setting. So it appears FreeSync can work with AFMF. Also, as others have said, if the framerate is above 100 the artifacting isn't bad and is hard to notice; under 100, particularly 80 or less, the artifacts get significantly worse and more noticeable.
 
Ah, so AFMF is application-agnostic, for the most part, and purely optical flow. FSR3+FG has more data (motion vectors AND optical flow?), but its implementation depends on the individual game's devs. From what I've seen of FSR3+FG, it hasn't looked superior to AFMF in terms of image quality, which isn't what I expected.
AFMF is basically the same thing Samsung and LG have been doing in their TVs for years to bring 30 fps content to 60 fps, or 60 to 120: it simply interpolates a frame between two frames. TVs have been doing it for a long-ass time; AMD just brought it to the gaming world. It's proven tech that TV makers have been using for nearly 20 years.
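For what it's worth, the crudest version of what those TVs do is just a timed blend of two frames; the better TV and GPU implementations are motion-compensated, so moving objects get shifted along their estimated motion instead of ghosted. A toy sketch (my own illustration, not any vendor's algorithm):

```python
import numpy as np

def naive_interpolated_frame(frame_a, frame_b, t=0.5):
    """Crudest possible in-between frame: a straight blend of two frames.
    Real interpolation is motion-compensated, which is what avoids the
    double-image ghosting a plain blend like this produces."""
    mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mix.astype(frame_a.dtype)
```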
 
Has anyone compared AFMF to FSR 3 in Forspoken or Immortals? With AFMF you don't need to use FSR, where significant issues persist.
I'm wondering about this too, since this was a pretty big joke to Nvidia users for a while. Has the joke landed, or does it end here?
 
I'm wondering about this too, since this was a pretty big joke to Nvidia users for a while. Has the joke landed, or does it end here?
Well, I look at Nvidia more as a used-car salesman, exaggerating the real value to the nth degree. Of course the less informed fall for it, bragging to others about what a deal they got and how your car is junk.
 
Well, I look at Nvidia more as a used-car salesman, exaggerating the real value to the nth degree. Of course the less informed fall for it, bragging to others about what a deal they got and how your car is junk.
I'm looking at Nvidia as a company that tries to push locked-in standards built around open standards so they can push consumers to buy their products, since only they have these features. I'm not a fan of FSR or DLSS, but I would like to see whether DLSS really needed to be exclusive to Nvidia hardware, let alone DLSS 3 to RTX 40.
 
I'm looking at Nvidia as a company that tries to push locked-in standards built around open standards so they can push consumers to buy their products, since only they have these features. I'm not a fan of FSR or DLSS, but I would like to see whether DLSS really needed to be exclusive to Nvidia hardware, let alone DLSS 3 to RTX 40.
They don't really need to "push" anybody towards their product, they already have like 90% of the market.
FSR, DLSS, and AFMF all work to solve the same problem, playing the game at a lower resolution.

AMD and Nvidia, for whatever market reasons, have priced themselves out of the common market; they know it, we know it, and developers know it. With how games are progressing visually, there is a looming problem in the next year, maybe less: there are a lot of GPUs in a lot of systems that just can't run what is coming at native resolution for the screens people have. So those users are either going to need to run potato mode, which nobody really enjoys and which developers have to put time and resources into making work, since it is actually somewhat problematic to turn things down and keep everything stable. Or they have to run non-native resolutions, so 1080p screens at 720p, 1440p screens at 1080p, etc... in which case developers have to build their own scaling engines to accommodate that. We can go back and forth about the accuracy of the Steam charts, but developers have their own hardware results, and if those were wildly different from the Steam ones somebody official would have spoken up.
So here we are with new engines coming out, with new games and new features, and half the PC market in a situation where they need to turn things way down to play the games. That means they may pass on a game because they need to save up for a new machine, or, because they know they are going to get the short end of the stick, maybe they pirate it to play in potato mode so they can save for that new machine and buy the game when they can actually enjoy it... In either event, the developer is looking at a significant portion of their market potentially passing on the game for an unspecified time, and they don't want that.

And yeah, it's easy to say they just need to buy a new GPU or a new system, but with how things are priced that is another matter entirely for a lot of people. We'll see how things look when AMD releases the 7700G, which will sell like hotcakes, but it is going to rely heavily on AFMF and FSR to maintain decent FPS numbers at 1080p.
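For a sense of how much work the upscalers actually take off the GPU, here is a small sketch of the render-resolution math. The scale factors are the ones I recall AMD publishing for FSR 2's quality modes; treat them as assumptions and check the docs for whichever version you care about.

```python
# Assumed FSR 2 quality-mode scale factors (per axis): verify against AMD's docs.
FSR_SCALE = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

def internal_resolution(output_w, output_h, mode):
    """Resolution the game actually renders at before the upscaler runs."""
    s = FSR_SCALE[mode]
    return round(output_w / s), round(output_h / s)

for mode in FSR_SCALE:
    print(mode, internal_resolution(3840, 2160, mode))
# Under these assumptions, Quality at 4K renders about 2560x1440 and
# Performance about 1920x1080, which is the whole point of the feature.
```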
 
They don't really need to "push" anybody towards their product, they already have like 90% of the market.
FSR, DLSS, and AFMF all work to solve the same problem, playing the game at a lower resolution.

AMD and Nvidia, for whatever market reasons, have priced themselves out of the common market; they know it, we know it, and developers know it. With how games are progressing visually, there is a looming problem in the next year, maybe less: there are a lot of GPUs in a lot of systems that just can't run what is coming at native resolution for the screens people have. So those users are either going to need to run potato mode, which nobody really enjoys and which developers have to put time and resources into making work, since it is actually somewhat problematic to turn things down and keep everything stable. Or they have to run non-native resolutions, so 1080p screens at 720p, 1440p screens at 1080p, etc... in which case developers have to build their own scaling engines to accommodate that. We can go back and forth about the accuracy of the Steam charts, but developers have their own hardware results, and if those were wildly different from the Steam ones somebody official would have spoken up.
So here we are with new engines coming out, with new games and new features, and half the PC market in a situation where they need to turn things way down to play the games. That means they may pass on a game because they need to save up for a new machine, or, because they know they are going to get the short end of the stick, maybe they pirate it to play in potato mode so they can save for that new machine and buy the game when they can actually enjoy it... In either event, the developer is looking at a significant portion of their market potentially passing on the game for an unspecified time, and they don't want that.

And yeah, it's easy to say they just need to buy a new GPU or a new system, but with how things are priced that is another matter entirely for a lot of people. We'll see how things look when AMD releases the 7700G, which will sell like hotcakes, but it is going to rely heavily on AFMF and FSR to maintain decent FPS numbers at 1080p.
The wonders of real-time RT effects, a kind of snake oil if you ask me. It's been sold for how many generations now and still runs like crap on most hardware at higher resolutions.
 
The wonders of real-time RT effects, a kind of snake oil if you ask me. It's been sold for how many generations now and still runs like crap on most hardware at higher resolutions.
RT is a double-edged sword: it exists to cut developer costs, and upscaling exists to lower target resolutions so it's usable now, not later, on hardware the average Joe can afford.
But it looks good.
 
RT is a double-edged sword: it exists to cut developer costs, and upscaling exists to lower target resolutions so it's usable now, not later, on hardware the average Joe can afford.
But it looks good.
It's one of those things that's in the early stages. For those of us old enough, we can remember when programmable shaders were in a similar spot. They launched in the GeForce 3 era, and while it was neat... it was of limited utility. The cards just didn't have a lot of shader power, so you had to use them sparingly. I remember DK2 used them to do dot3 bump mapping on lava, and a couple of games used them to make water shiny, things like that.

However with each generation, their power grew. Eventually you had cards so powerful you could do all textures with a programmable shader path, and these days that's more or less always how it is done.

RT is similar. On the 2000 series it was almost entirely a novelty, just too slow for real use. Since then it has gotten better and now we see it in more games, and being used better, with decent frame rates, at least on the high end. It is still a long way out from "Sure, just ray trace the whole game, no rasterized mode," but the generational improvement is clear. It'll take time, and not every early implementation of it will be worthwhile, but in the long run I'm hopeful it leads to prettier games, with less work for the artists.
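As an aside, for anyone who never looked at what those early shaders computed: "dot3 bump mapping" is just per-pixel diffuse lighting, a dot product between a normal fetched from a normal map and the light direction. A minimal sketch of the math (my own illustration, not the original shader code):

```python
import numpy as np

def dot3_bump(normal_map, light_dir, albedo):
    """normal_map: HxWx3 tangent-space normals in [-1, 1];
    light_dir: 3-vector toward the light; albedo: HxWx3 base color in [0, 1]."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)                    # normalize the light direction
    n_dot_l = np.clip((normal_map * l).sum(axis=-1, keepdims=True), 0.0, 1.0)
    return albedo * n_dot_l                   # lit color per pixel
```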
 
They don't really need to "push" anybody towards their product, they already have like 90% of the market.
They have 90% of the market because they pushed for features that only work on their cards. If a feature wasn't exclusive to their cards, it was a feature that worked best on their cards.
FSR, DLSS, and AFMF all work to solve the same problem, playing the game at a lower resolution.
These exist to solve ray-tracing performance and the lack of VRAM, both issues that AMD and Nvidia created over the past several years due to crypto mining. AMD's creation of FSR was just to stay competitive with Nvidia's creation of DLSS. A lot of modern games make DLSS look good because they use TAA, which looks bad. Instead of improving TAA or making a new standard altogether, Nvidia created DLSS, which again didn't need to be locked to their hardware. So instead of one good open standard, we now have three competing standards, four if you include Apple's MetalFX.
AMD and Nvidia, for whatever market reasons, have priced themselves out of the common market; they know it, we know it, and developers know it.
This is because AMD and Nvidia don't want to tell their investors that they can no longer produce the same level of financial success as they did for the past several years, because crypto mining is dead. With the boom of AI, Nvidia doesn't plan to change this, and AMD wants a piece of the AI pie so they too don't need to tell investors that the overpriced GPU market is no longer viable.
With how games are progressing visually, there is a looming problem in the next year, maybe less: there are a lot of GPUs in a lot of systems that just can't run what is coming at native resolution for the screens people have. So those users are either going to need to run potato mode, which nobody really enjoys and which developers have to put time and resources into making work, since it is actually somewhat problematic to turn things down and keep everything stable. Or they have to run non-native resolutions, so 1080p screens at 720p, 1440p screens at 1080p, etc... in which case developers have to build their own scaling engines to accommodate that.
I believe the problem has more to do with users being unable to max out graphics settings while still getting a respectable frame rate.
We can go back and forth about the accuracy of the Steam charts, but developers have their own hardware results, and if those were wildly different from the Steam ones somebody official would have spoken up.
I don't believe developers have results different from Steam's, because if that were true then GTX 1060 owners would be screwed. A GTX 1060 can certainly play modern games at low settings at 30 fps, and with FSR those owners can continue to extend their hardware's useful life. The problem is with people who bought GPUs in the past 3 years finding out that new games don't run as well as they had hoped. That is made worse when a feature like DLSS or FSR isn't included at initial release, making owners of these graphics cards feel cheated out of their hardware's potential. Especially with DLSS, as we've seen on this forum.
And yeah, it's easy to say they just need to buy a new GPU or a new system, but with how things are priced that is another matter entirely for a lot of people. We'll see how things look when AMD releases the 7700G, which will sell like hotcakes, but it is going to rely heavily on AFMF and FSR to maintain decent FPS numbers at 1080p.
Unless AMD puts some serious effort into their APUs for desktops, I can't see many people jumping for joy at having to buy a CPU with an underperforming GPU when they could just get a better GPU. Prices for GPUs are still bad, but not bad enough to pass on a dedicated GPU. I imagine an A750 would still outperform a 7700G, as would an RTX 3060 and RX 6600. If the 7700G performed as well as an RX 7700 XT while being offered for around $350, they might have something. I really doubt it.
 
Unless AMD puts some serious effort into their APUs for desktops, I can't see many people jumping for joy at having to buy a CPU with an underperforming GPU when they could just get a better GPU. Prices for GPUs are still bad, but not bad enough to pass on a dedicated GPU. I imagine an A750 would still outperform a 7700G, as would an RTX 3060 and RX 6600. If the 7700G performed as well as an RX 7700 XT while being offered for around $350, they might have something. I really doubt it.
The issue here is for the huge swaths of people out there still actively gaming on Intel 4th-6th gen with 1060-1660 class GPUs.
So sure, an RX 7700 would outperform it, but the CPU and the rest of their old hardware is what's going to be holding them back significantly. So they are going to be actively looking for new PCs, and the R7 7700G would be a solid gateway into something new that they could reasonably put a better GPU into later.

The rest of what you say isn't exactly wrong, but there's nothing we can do about it.
 
That's a side effect

It exists to be (more) realistic
Anything you can do with ray tracing you can do in Raster with a good engine team and one hell of an art department.
Publishers and studios have been harping for years about how the cost of development has skyrocketed and how price increases for games have not kept pace.

So you can go one of two ways here: either Nvidia sat down with Microsoft on a roadmap for how the ray-tracing bits of DX12 could be implemented in a way that would trick gamers into spending 2x more on a GPU than they need to,
or Microsoft and Nvidia took the increasing concerns of their publishers and partners and hammered out a plan that will save them tens of millions on development for future projects.

Business decisions come first; giving us cooler things is a byproduct of those decisions.
 
Anything you can do with ray tracing you can do in Raster with a good engine team and one hell of an art department.

Ray tracing is technologically better than raster and has been a holy grail of computer graphics for a long time, long before Nvidia was even capable of it. You're arguing against 3D games because 2D games exist(ed), or asking why have a car when a horse can get you everywhere.

So you can go one of two ways here: either Nvidia sat down with Microsoft on a roadmap for how the ray-tracing bits of DX12 could be implemented in a way that would trick gamers into spending 2x more on a GPU than they need to,
or Microsoft and Nvidia took the increasing concerns of their publishers and partners and hammered out a plan that will save them tens of millions on development for future projects.

Business decisions come first; giving us cooler things is a byproduct of those decisions.

Sometimes introducing cool things (that either work only on, or just work better on, your products) is the business decision...

Reports of ray tracing being easier on devs didn't even come out or come into play until after RTX was introduced and devs actually started making games with real-time ray tracing, because finally there were products capable of doing so lol. I believe it was around Metro Exodus, 2019-2020, that we started hearing about it.
 
Ray tracing is technologically better than raster and has been a holy grail of computer graphics for a long time, long before Nvidia was even capable of it. You're arguing against 3D games because 2D games exist(ed), or asking why have a car when a horse can get you everywhere.



Sometimes introducing cool things (that either work only on, or just work better on, your products) is the business decision...

Reports of ray tracing being easier on devs didn't even come out or come into play until after RTX was introduced and devs actually started making games with real-time ray tracing, because finally there were products capable of doing so lol. I believe it was around Metro Exodus, 2019-2020, that we started hearing about it.
I'm not arguing against it, I was making a point. You can do it, but those sorts of art and development talents are expensive; unless we want $500 games it's not viable.

But first you need an API, then you need Hardware that can run it, then you need development software that can make use of it. Then you need projects to start using that software.

AAA games spend 4-ish years in development, and they don't often switch development toolsets midstream, so it's the whole "if you build it, they will come" thing.

These things just take a lot of time; it's a game of years, not months. It's going to be 2025-ish before we really start seeing ray-traced titles become mainstream, because that's when the projects built using the tools designed for it will be coming out.

Mainstream isn't the right word, but with the exception of Cyberpunk, ray tracing is a bolt-on after-effect; it's not the mainstay of the engine.
 
I'm not arguing against it, I was making a point. You can do it, but those sorts of art and development talents are expensive; unless we want $500 games it's not viable.

But first you need an API, then you need Hardware that can run it, then you need development software that can make use of it. Then you need projects to start using that software.

AAA games spend 4-ish years in development, and they don't often switch development toolsets midstream, so it's the whole "if you build it, they will come" thing.

These things just take a lot of time; it's a game of years, not months. It's going to be 2025-ish before we really start seeing ray-traced titles become mainstream, because that's when the projects built using the tools designed for it will be coming out.

Mainstream isn't the right word, but with the exception of Cyberpunk, ray tracing is a bolt-on after-effect; it's not the mainstay of the engine.

Yeah, but the argument "well, raster can do it" still doesn't make sense here, even if ray tracing did cost the devs and the company more, for the sake of argument.

No one in the first 3D gaming generation, early 1990s onward, was going "Wow, these 3D games suck, we should play 2D games only!" or "Wow, this costs a lot more to develop, we're never making a 3D game again!"

People from the business side to the developer side to the consumer/player side knew it was the future, and the past was the past once it hit the scene.
 
Ray tracing is technologically better than raster and has been a holy grail of computer graphics for a long time, long before Nvidia was even capable of it. You're arguing against 3D games because 2D games exist(ed), or asking why have a car when a horse can get you everywhere.



Sometimes introducing cool things (that either work only on, or just work better on, your products) is the business decision...

Reports of ray tracing being easier on devs didn't even come out or come into play until after RTX was introduced and devs actually started making games with real-time ray tracing, because finally there were products capable of doing so lol. I believe it was around Metro Exodus, 2019-2020, that we started hearing about it.
Ray tracing can solve lighting problems effectively, the whole gamut: reflections, shadows, caustics, refraction, light bounce, color bleed and so on. You need to process 1000 rays per pixel or better for photographic-level effects, or at least 100 rays per pixel with some good denoisers.

None of the hardware out there can do that in real time. You get denoisers that lead to texture smudging or corruption, crawling unstable textures, and delayed light response to changes, sometimes taking hundreds of frames to correct. Rendering at lower resolution and upsampling also has limits that lead to ghosting and loss of detail, and then on top of that you add frame generation, which has its own limitations. While Nvidia does it better, it is still sore on the eyes for many. The performance is just not there.

Metro Exodus uses DX lighting for direct lighting and RT lighting for bounce lighting, which is pretty smart and effective. It is not a totally RT solution but a hybrid.

I would love totally RT-lit games with complex effects and excellent quality, meaning 100 or more rays per pixel, at fluid frame rates, not artificially manipulated with gimmicks to try to look good and get performance up. I estimate roughly 20x the performance of a 4090 to achieve that holy grail of RT.
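The arithmetic behind that kind of estimate is easy to sanity-check. All the numbers below are assumptions for illustration, not benchmarks of any actual card:

```python
# Back-of-envelope ray throughput needed for the "100 rays per pixel" floor above.
width, height = 3840, 2160        # 4K output
rays_per_pixel = 100              # assumed denoiser-friendly floor
fps = 60

rays_per_frame = width * height * rays_per_pixel
rays_per_second = rays_per_frame * fps
print(f"{rays_per_frame / 1e9:.2f} Grays per frame, {rays_per_second / 1e9:.0f} Grays/s")
# Roughly 0.83 billion rays per frame and about 50 billion rays per second,
# before counting the extra bounce rays a full path tracer would add.
```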
 
Ray tracing can solve lighting problems effectively, the whole gamut: reflections, shadows, caustics, refraction, light bounce, color bleed and so on. You need to process 1000 rays per pixel or better for photographic-level effects, or at least 100 rays per pixel with some good denoisers.

None of the hardware out there can do that in real time. You get denoisers that lead to texture smudging or corruption, crawling unstable textures, and delayed light response to changes, sometimes taking hundreds of frames to correct. Rendering at lower resolution and upsampling also has limits that lead to ghosting and loss of detail, and then on top of that you add frame generation, which has its own limitations. While Nvidia does it better, it is still sore on the eyes for many. The performance is just not there.

Metro Exodus uses DX lighting for direct lighting and RT lighting for bounce lighting, which is pretty smart and effective. It is not a totally RT solution but a hybrid.

I would love totally RT-lit games with complex effects and excellent quality, meaning 100 or more rays per pixel, at fluid frame rates, not artificially manipulated with gimmicks to try to look good and get performance up. I estimate roughly 20x the performance of a 4090 to achieve that holy grail of RT.

So why does that mean we should drop 'hackish' ray tracing for raster?

How do you propose we learn to run and get to the promised land of real-time full ray tracing/path tracing without first walking with crutched, hacked-together real-time ray tracing? How do we get to the promised land if we just use raster?

Why argue for technological hindrance?
 
Yeah, but the argument "well, raster can do it" still doesn't make sense here, even if ray tracing did cost the devs and the company more, for the sake of argument.

No one in the first 3D gaming generation, early 1990s onward, was going "Wow, these 3D games suck, we should play 2D games only!" or "Wow, this costs a lot more to develop, we're never making a 3D game again!"

People from the business side to the developer side to the consumer/player side knew it was the future, and the past was the past once it hit the scene.
Trying to do them in raster doesn't make sense; that is the point I was making.

But game studios didn't go to Nvidia with the idea of "Hey we want to make cooler lighting effects so go build us a card that can do that please".
Nvidia started working on ray-traced consumer effects because studios were complaining that doing all that lighting and reflection work with raster methods was slow and expensive, and they wanted a solution that would give them better results for less money. It took Nvidia years of work before they even got things to the RTX 2000 cards; GPUs are generally started 3 generations out, so the groundwork for the RTX 2000 cards started basically the week after the GT 800 series launched. Microsoft and Nvidia started working on getting it into DX12 sometime in 2012, during the early stages of its development, because developers were similarly hot on Microsoft for a low-level API shortly after DX10 came out, since they didn't like all the confining API calls.

I am not arguing against ray tracing, it is where things are going; my original comment was a statement on the reason they started the work: it saves studios money by the bucket, and the fact that it delivers more consistent results that can look better is an added bonus.
 
Trying to do them in raster doesn't make sense; that is the point I was making.

But game studios didn't go to Nvidia with the idea of "Hey we want to make cooler lighting effects so go build us a card that can do that please".
Nvidia started working on ray-traced consumer effects because studios were complaining that doing all that lighting and reflection work with raster methods was slow and expensive, and they wanted a solution that would give them better results for less money. It took Nvidia years of work before they even got things to the RTX 2000 cards; GPUs are generally started 3 generations out, so the groundwork for the RTX 2000 cards started basically the week after the GT 800 series launched. Microsoft and Nvidia started working on getting it into DX12 sometime in 2012, during the early stages of its development, because developers were similarly hot on Microsoft for a low-level API shortly after DX10 came out, since they didn't like all the confining API calls.

I am not arguing against ray tracing, it is where things are going; my original comment was a statement on the reason they started the work: it saves studios money by the bucket, and the fact that it delivers more consistent results that can look better is an added bonus.

It didn't need to save studios money or devs' time for it to happen. Those are side effects, not the reason ray tracing is happening now.

It being a holy grail, it was going to happen at some point, by some company, once it was capable of being done (even if hackishly) in real time. Cost to studios or developers be damned.

Just like the transition to 3D, another holy grail (and very hackishly done back then by today's standards, for the others). Whether devs found it harder or studios found it more expensive, tough shit, it was still the future, and it's happening now.

We were just at that point where the HW companies were finally capable of doing it in (hackishly good enough) real time, so they started doing so. Just like 3D back then.

That's why you have ray tracing now (and will continue to, for the others).
 
It didn't need to save studios money or devs' time for it to happen. Those are side effects, not the reason ray tracing is happening now.

It being a holy grail, it was going to happen at some point, by some company, once it was capable of being done (even if hackishly) in real time. Cost to studios or developers be damned.

Just like the transition to 3D, another holy grail (and very hackishly done back then by today's standards, for the others). Whether devs found it harder or studios found it more expensive, tough shit, it was still the future, and it's happening now.

We were just at that point where the HW companies were finally capable of doing it in (hackishly good enough) real time, so they started doing so. Just like 3D back then.

That's why you have ray tracing now (and will continue to, for the others).
It didn't need to, but it did; that is the order of events.
 
It didn't need to, but it did; that is the order of events.

HW being capable was the trigger pull, not devs or studios. Ray tracing wasn't waiting in the wings, with HW companies having it working in real time long before and just waiting for devs and studios to give the thumbs up once the books showed it would be profitable and save dev time. Once the HW was capable, HW went to the devs/companies and said "it's time, figure that shit out on your end, let's go."
 
HW being capable was the trigger pull, not devs or studios. Ray tracing wasn't waiting in the wings, with HW companies having it working in real time long before and just waiting for devs and studios to give the thumbs up once the books showed it would be profitable and save dev time. Once the HW was capable, HW went to the devs/companies and said "it's time, figure that shit out on your end, let's go."
Microsoft started working on the APIs for it when they started working on DX12 back in 2011, which is around the same time that Nvidia started working on the Turing architecture.
Nvidia did not throw the cards into a vacuum and say "have fun"; they had been working on it for a long time and were very bold in their promises of how it would let studios save time on development and testing, and how that would save them money while delivering a better product.
At that stage I'm not sure even they could imagine something like Cyberpunk 2077 being an actual consumer product.
 
It still didn't and doesn't exist to cut developer costs. You ask how it got here yet still insist on the how, lol. That was only ever a side effect, even if back in 1999, for some reason, Nvidia had changed the name of the company to "It Cuts Developer Time LLC."

Just like Ozempic is a diabetes medicine, not a weight-loss medicine, even if you can use it to lose weight.

As soon as the HW was capable, it was going to happen and the ball would start rolling from there. Just like 3D. Devs and studios either get with it or get the fuck out of the way for the ones who will, because it's the technologically better, more accurate, cooler way and the future. And it was, long before it was even possible. Simple as that and only that. HW is always the starter once it's capable.
 
Ray tracing is technologically better than raster and has been a holy grail of computer graphics for a long time, long before Nvidia was even capable of it. You're arguing against 3D games because 2D games exist(ed), or asking why have a car when a horse can get you everywhere.
I wouldn't call it the holy grail of computer graphics. Its computational and power cost (for developers too) is insane, and even with three generations of Nvidia cards it still requires a lot of tricks even to become usable. So many, in fact, that I doubt the accuracy of the lighting is maintained once they have been implemented. Ray tracing is the most accurate rendering technique we have, that's it. But it's no panacea.
 
I wouldn't call it the holy grail of computer graphics
*A* holy grail - like 3D, there wasn't/isn't only one in this realm (and your 3D raster still requires a lot of tricks to be usable, and it took more than 3 generations to get where it is today)

And if you don't understand that ray tracing has been a holy grail of computer graphics for some time, then you just weren't paying attention or don't know what you've been playing with all this time, whether one likes ray tracing or not

You're complaining about 3D graphics when the PlayStation launched (or when the PS3 launched if you want 3 generations later - or go with 3rd-generation 3D-capable GPUs if you want)

We knew it would only get better from there on out, just as ray tracing will from here on out

You don't like the speed at which it's happening? Tough. Neat, but tough.

Edit: VR is another holy grail, and the people who like to argue against ray tracing would never have allowed the Index to come to be, because they tried a Virtual Boy once. Luckily these people and their arguments and attitudes aren't the deciders. Only never-ending technological progress towards more and better is.

Also the same reason we have stories about Terminators killing us. Because technological progress, which ray tracing is an example of, happens whether you like it or not, want it or not.
 
*A* holy grail - like 3D, there wasn't/isn't only one in this realm (and your 3D raster still requires a lot of tricks to be usable, and it took more than 3 generations to get where it is today)

And if you don't understand that ray tracing has been a holy grail of computer graphics for some time, then you just weren't paying attention or don't know what you've been playing with all this time, whether one likes ray tracing or not

You're complaining about 3D graphics when the PlayStation launched (or when the PS3 launched if you want 3 generations later - or go with 3rd-generation 3D-capable GPUs if you want)

We knew it would only get better from there on out, just as ray tracing will from here on out

You don't like the speed at which it's happening? Tough. Neat, but tough.
Did you address the computational cost? No. Until then it's no holy grail if a majority of gamers can't really use it.
 
Did you address the computational cost? No. Until then it's no holy grail if a majority of gamers can't really use it.

It is and has been whether you want to call it one or not lol

It's computationally expensive. Holy shit, stop the presses. So is 3D over 2D. Ray tracing is simulating photons; I'd be surprised if it wasn't computationally expensive. You don't actually make a point here.

That's why we have upscaling, whether you like that too or not.
 
The issue here is for the huge swaths of people out there still actively gaming on Intel 4th-6th gen with 1060-1660 class GPUs.
So sure, an RX 7700 would outperform it, but the CPU and the rest of their old hardware is what's going to be holding them back significantly. So they are going to be actively looking for new PCs, and the R7 7700G would be a solid gateway into something new that they could reasonably put a better GPU into later.

The rest of what you say isn't exactly wrong, but there's nothing we can do about it.
People still use Intel 4th and 6th gen CPUs because CPUs don't age as badly as GPUs. You could pick up an R7 7700G and use that as a basis, so when you do want better graphics performance you can just add a faster GPU. To justify the price, AMD would need decently fast GPU performance, because traditionally their G products are slower than their X products. But if the GPU performance is bad enough that you need to buy a discrete GPU anyway, then why go for a G product, other than that you can't currently afford a card?
 
It is and has been whether you want to call it one or not lol

It's computationally expensive. Holy shit, stop the presses. So is 3D over 2D. Ray tracing is simulating photons; I'd be surprised if it wasn't computationally expensive. You don't actually make a point here.

That's why we have upscaling, whether you like that too or not.
Except that everyone can run most rasterized 3D games, even on APUs. The same cannot be said of ray tracing. You can pretend I'm not, but DLSS wouldn't be so important if I weren't making a very salient point.
 
Except that everyone can run most rasterized 3D games, even on APUs. The same cannot be said of ray tracing. You can pretend I'm not, but DLSS wouldn't be so important if I weren't.

So what? Tough shit for now and for you and for them and for APUs. Literally all there is to it lol.

In time, it will come to all of those. Till then, tough. None of that is stopping it and it's not going anywhere. You'll just have to learn to deal with it. Get a second job to afford it, contribute code to make it run more efficiently, don't buy GPUs that support it, don't buy games that support it. You have options if you don't like it. Stopping it isn't an option though, unfortunately.
 
Except that everyone can run most rasterized 3D games, even on APUs. The same cannot be said of ray tracing.

If your hardware can't run it acceptably then don't enable it. What exactly is the problem here?

Did you address the computational cost? No. Until then it's no holy grail if a majority of gamers can't really use it.

That's why it's real-time rendering's holy grail. I don't know what you're even attempting to argue here.
 
I don't know what you're even attempting to argue here.

We do though, they never hide it well. It seeps through the space in between every letter, word and line, as much as they try to hide it. The biggest giveaway is that it's always the same people, over and over, who oppose these certain things, whether that's a technology or a company. If someone else had come out of the gate with ray tracing first, we'd never hear the end of it. If someone else was better at ray tracing and upscaling, neither would be a problem and we'd never hear the end of it. But here we are. Not hard to notice the patterns and connect the dots.
 
 